The Blackwell Companion to Phonology 140518423X, 9781405184236

Finally got around to merging all the individual chapters downloaded from the Wiley-Blackwell library into one big, complete file.


English, 3053 pages, 2011


Table of contents:
Front Matter
1 - Underlying Representations
2 - Contrast
3 - Learnability
4 - Markedness
5 - The Atoms of Phonological Representations
6 - Self-organization in Phonology
7 - Feature Specification and Underspecification
8 - Sonorants
9 - Handshape in Sign Language Phonology
10 - The Other Hand in Sign Language Phonology
11 - The Phoneme
12 - Coronals
13 - The Stricture Features
14 - Autosegments
15 - Glides
16 - Affricates
17 - Distinctive Features
18 - The Representation of Clicks
19 - Vowel Place
20 - The Representation of Vowel Length
21 - Vowel Height
22 - Consonantal Place of Articulation
23 - Partially Nasal Segments
24 - The Phonology of Movement in Sign Language
25 - Pharyngeals
26 - Schwa
27 - The Organization of Features
28 - The Representation of Fricatives
29 - Secondary and Double Articulation
30 - The Representation of Rhotics
31 - Lateral Consonants
32 - The Representation of Intonation
33 - Syllable-internal Structure
34 - Precedence Relations in Phonology
35 - Downstep
36 - Final Consonants
37 - Geminates
38 - The Representation of sC Clusters
39 - Stress: Phonotactic and Phonetic Evidence
40 - The Foot
41 - The Representation of Word Stress
42 - Pitch Accent Systems
43 - Extrametricality and Non-finality
44 - The Iambic-Trochaic Law
45 - The Representation of Tone
46 - Positional Effects in Consonant Clusters
47 - Initial Geminates
48 - Stress-timed vs. Syllable-timed Languages
49 - Sonority
50 - Tonal Alignment
51 - The Phonological Word
52 - Ternary Rhythm
53 - Syllable Contact
54 - The Skeleton
55 - Onsets
56 - Sign Syllables
57 - Quantity-sensitivity
58 - The Emergence of the Unmarked
59 - Metathesis
60 - Dissimilation
61 - Hiatus Resolution
62 - Constraint Conjunction
63 - Markedness and Faithfulness Constraints
64 - Compensatory Lengthening
65 - Consonant Mutation
66 - Lenition
67 - Vowel Epenthesis
68 - Deletion
69 - Final Devoicing and Final Laryngeal Neutralization
70 - Conspiracies
71 - Palatalization
72 - Consonant Harmony in Child Language
73 - Chain Shifts
74 - Rule Ordering
75 - Consonant-Vowel Place Feature Interactions
76 - Structure Preservation: The Resilience of Distinctive Information
77 - Long-distance Assimilation of Consonants
78 - Nasal Harmony
79 - Reduction
80 - Mergers and Neutralization
81 - Local Assimilation
82 - Featural Affixes
83 - Paradigms
84 - Clitics
85 - Cyclicity
86 - Morpheme Structure Constraints
87 - Neighborhood Effects
88 - Derived Environment Effects
89 - Gradience and Categoricality in Phonological Theory
90 - Frequency Effects
91 - Vowel Harmony: Opaque and Transparent Vowels
92 - Variability
93 - Sound Change
94 - Lexical Phonology and the Lexical Syndrome
95 - Loanword Phonology
96 - Experimental Approaches in Theoretical Phonology
97 - Tonogenesis
98 - Speech Perception and Phonology
99 - Phonologically Conditioned Allomorph Selection
100 - Reduplication
101 - The Interpretation of Phonological Patterns in First Language Acquisition
102 - Category-specific Effects
103 - Phonological Sensitivity to Morphological Structure
104 - Root-Affix Asymmetries
105 - Tier Segregation
106 - Exceptionality
107 - Chinese Tone Sandhi
108 - Semitic Templates
109 - Polish Syllable Structure
110 - Metaphony in Romance
111 - Laryngeal Contrast in Korean
112 - French Liaison
113 - Flapping in American English
114 - Bantu Tone
115 - Chinese Syllable Structure
116 - Sentential Prominence in English
117 - Celtic Mutations
118 - Turkish Vowel Harmony
119 - Reduplication in Sanskrit
120 - Japanese Pitch Accent
121 - Slavic Palatalization
122 - Slavic Yers
123 - Hungarian Vowel Harmony
124 - Word Stress in Arabic


This edition first published 2011
© 2011 Blackwell Publishing Ltd

Blackwell Publishing was acquired by John Wiley & Sons in February 2007. Blackwell's publishing program has been merged with Wiley's global Scientific, Technical, and Medical business to form Wiley-Blackwell.

Registered Office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

Editorial Offices
350 Main Street, Malden, MA 02148-5020, USA
9600 Garsington Road, Oxford, OX4 2DQ, UK
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, for customer services, and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

The right of Marc van Oostendorp, Colin J. Ewen, Elizabeth Hume, and Keren Rice to be identified as the authors of the editorial material in this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

The Blackwell companion to phonology / edited by Marc van Oostendorp . . . [et al.].
p. cm. - (Blackwell companions to linguistics series)
Includes bibliographical references and index.
ISBN 978-1-4051-8423-6 (hardcover : alk. paper)
1. Phonetics. 2. Grammar, Comparative and general - Phonology. I. Oostendorp, Marc van, 1967-
P217.B53 2011
414-dc22
2010042206

A catalogue record for this book is available from the British Library.

Set in 10/12pt Palatino by Graphicraft Limited, Hong Kong


Brief Contents

Volume I
Preface (p. xxix)
General Issues and Segmental Phonology (p. 1)

Volume II
Suprasegmental and Prosodic Phonology (p. 757)

Volume III
Phonological Processes (p. 1363)

Volume IV
Phonological Interfaces (p. 1945)

Volume V
Phonology across Languages (p. 2561)

Full Table of Contents

Volume I

Preface (p. xxix)

General Issues and Segmental Phonology
1. Underlying Representations, Jennifer Cole & José Ignacio Hualde (p. 1)
2. Contrast, Daniel Currie Hall (p. 27)
3. Learnability, Jeffrey Heinz & Jason Riggle (p. 54)
4. Markedness, Elizabeth Hume (p. 79)
5. The Atoms of Phonological Representations, Marianne Pouplier (p. 107)
6. Self-organization in Phonology, Andrew Wedel (p. 130)
7. Feature Specification and Underspecification, Diana Archangeli (p. 148)
8. Sonorants, Bert Botma (p. 171)
9. Handshape in Sign Language Phonology, Diane Brentari (p. 195)
10. The Other Hand in Sign Language Phonology, Onno Crasborn (p. 223)
11. The Phoneme, B. Elan Dresher (p. 241)
12. Coronals, T. A. Hall (p. 267)
13. The Stricture Features, Ellen M. Kaisse (p. 288)
14. Autosegments, William R. Leben (p. 311)
15. Glides, Susannah V. Levi (p. 341)
16. Affricates, Yen-Hwei Lin (p. 367)
17. Distinctive Features, Jeff Mielke (p. 391)
18. The Representation of Clicks, Amanda Miller (p. 416)
19. Vowel Place, Bruce Morén-Duolljá (p. 440)
20. The Representation of Vowel Length, David Odden (p. 465)
21. Vowel Height, Douglas Pulleyblank (p. 491)
22. Consonantal Place of Articulation, Keren Rice (p. 519)
23. Partially Nasal Segments, Anastasia K. Riehl & Abigail C. Cohn (p. 550)
24. The Phonology of Movement in Sign Language, Wendy Sandler (p. 577)
25. Pharyngeals, Kimary Shahin (p. 604)
26. Schwa, Daniel Silverman (p. 628)
27. The Organization of Features, Christian Uffmann (p. 643)
28. The Representation of Fricatives, Bert Vaux & Brett Miller (p. 669)
29. Secondary and Double Articulation, Jeroen van de Weijer (p. 694)
30. The Representation of Rhotics, Richard Wiese (p. 711)
31. Lateral Consonants, Moira Yip (p. 730)

Volume II

Suprasegmental and Prosodic Phonology
32. The Representation of Intonation, Amalia Arvaniti (p. 757)
33. Syllable-internal Structure, Anna R. K. Bosch (p. 781)
34. Precedence Relations in Phonology, Charles Cairns & Eric Raimy (p. 799)
35. Downstep, Bruce Connell (p. 824)
36. Final Consonants, Marie-Hélène Côté (p. 848)
37. Geminates, Stuart Davis (p. 873)
38. The Representation of sC Clusters, Heather Goad (p. 898)
39. Stress: Phonotactic and Phonetic Evidence, Matthew Gordon (p. 924)
40. The Foot, Michael Hammond (p. 949)
41. The Representation of Word Stress, Ben Hermans (p. 980)
42. Pitch Accent Systems, Harry van der Hulst (p. 1003)
43. Extrametricality and Non-finality, Brett Hyde (p. 1027)
44. The Iambic-Trochaic Law, Brett Hyde (p. 1052)
45. The Representation of Tone, Larry M. Hyman (p. 1078)
46. Positional Effects in Consonant Clusters, Jongho Jun (p. 1103)
47. Initial Geminates, Astrid Kraehenmann (p. 1124)
48. Stress-timed vs. Syllable-timed Languages, Marina Nespor, Mohinish Shukla & Jacques Mehler (p. 1147)
49. Sonority, Steve Parker (p. 1160)
50. Tonal Alignment, Pilar Prieto (p. 1185)
51. The Phonological Word, Anthi Revithiadou (p. 1204)
52. Ternary Rhythm, Curt Rice (p. 1228)
53. Syllable Contact, Misun Seo (p. 1245)
54. The Skeleton, Péter Szigetvári (p. 1263)
55. Onsets, Nina Topintzi (p. 1285)
56. Sign Syllables, Ronnie Wilbur (p. 1309)
57. Quantity-sensitivity, Draga Zec (p. 1335)

Volume III

Phonological Processes
58. The Emergence of the Unmarked, Michael Becker & Kathryn Flack Potts (p. 1363)
59. Metathesis, Eugene Buckley (p. 1380)
60. Dissimilation, Patrik Bye (p. 1408)
61. Hiatus Resolution, Roderic F. Casali (p. 1434)
62. Constraint Conjunction, Megan J. Crowhurst (p. 1461)
63. Markedness and Faithfulness Constraints, Paul de Lacy (p. 1491)
64. Compensatory Lengthening, Randall Gess (p. 1513)
65. Consonant Mutation, Janet Grijzenhout (p. 1537)
66. Lenition, Naomi Gurevich (p. 1559)
67. Vowel Epenthesis, Nancy Hall (p. 1576)
68. Deletion, John Harris (p. 1597)
69. Final Devoicing and Final Laryngeal Neutralization, Gregory K. Iverson & Joseph C. Salmons (p. 1622)
70. Conspiracies, Charles W. Kisseberth (p. 1644)
71. Palatalization, Alexei Kochetov (p. 1666)
72. Consonant Harmony in Child Language, Clara C. Levelt (p. 1691)
73. Chain Shifts, Anna Łubowicz (p. 1717)
74. Rule Ordering, Joan Mascaró (p. 1736)
75. Consonant-Vowel Place Feature Interactions, Jaye Padgett (p. 1761)
76. Structure Preservation: The Resilience of Distinctive Information, Carole Paradis & Darlene LaCharité (p. 1787)
77. Long-distance Assimilation of Consonants, Sharon Rose (p. 1811)
78. Nasal Harmony, Rachel Walker (p. 1838)
79. Reduction, Natasha Warner (p. 1866)
80. Mergers and Neutralization, Alan C. L. Yu (p. 1892)
81. Local Assimilation, Elizabeth C. Zsiga (p. 1919)

Volume IV

Phonological Interfaces
82. Featural Affixes, Akinbiyi Akinlabi (p. 1945)
83. Paradigms, Adam Albright (p. 1972)
84. Clitics, Stephen R. Anderson (p. 2002)
85. Cyclicity, Ricardo Bermúdez-Otero (p. 2019)
86. Morpheme Structure Constraints, Geert Booij (p. 2049)
87. Neighborhood Effects, Adam Buchwald (p. 2070)
88. Derived Environment Effects, Luigi Burzio (p. 2089)
89. Gradience and Categoricality in Phonological Theory, Mirjam Ernestus (p. 2115)
90. Frequency Effects, Stefan A. Frisch (p. 2137)
91. Vowel Harmony: Opaque and Transparent Vowels, Adamantios I. Gafos & Amanda Dye (p. 2164)
92. Variability, Gregory R. Guy (p. 2190)
93. Sound Change, José Ignacio Hualde (p. 2214)
94. Lexical Phonology and the Lexical Syndrome, Ellen M. Kaisse & April McMahon (p. 2236)
95. Loanword Phonology, Yoonjung Kang (p. 2258)
96. Experimental Approaches in Theoretical Phonology, Shigeto Kawahara (p. 2283)
97. Tonogenesis, John Kingston (p. 2304)
98. Speech Perception and Phonology, Andrew Martin & Sharon Peperkamp (p. 2334)
99. Phonologically Conditioned Allomorph Selection, Andrew Nevins (p. 2357)
100. Reduplication, Eric Raimy (p. 2383)
101. The Interpretation of Phonological Patterns in First Language Acquisition, Yvan Rose & Sharon Inkelas (p. 2414)
102. Category-specific Effects, Jennifer L. Smith (p. 2439)
103. Phonological Sensitivity to Morphological Structure, Jochen Trommer (p. 2464)
104. Root-Affix Asymmetries, Suzanne Urbanczyk (p. 2490)
105. Tier Segregation, Adam Ussishkin (p. 2516)
106. Exceptionality, Matthew Wolf (p. 2538)

Volume V

Phonology across Languages
107. Chinese Tone Sandhi, Bao Zhiming (p. 2561)
108. Semitic Templates, Outi Bat-El (p. 2586)
109. Polish Syllable Structure, Christina Y. Bethin (p. 2609)
110. Metaphony in Romance, Andrea Calabrese (p. 2631)
111. Laryngeal Contrast in Korean, Young-mee Yu Cho (p. 2662)
112. French Liaison, Marie-Hélène Côté (p. 2685)
113. Flapping in American English, Kenneth J. de Jong (p. 2711)
114. Bantu Tone, Laura J. Downing (p. 2730)
115. Chinese Syllable Structure, San Duanmu (p. 2754)
116. Sentential Prominence in English, Carlos Gussenhoven (p. 2778)
117. Celtic Mutations, S. J. Hannahs (p. 2807)
118. Turkish Vowel Harmony, Barış Kabak (p. 2831)
119. Reduplication in Sanskrit, Robert Kennedy (p. 2855)
120. Japanese Pitch Accent, Haruo Kubozono (p. 2879)
121. Slavic Palatalization, Jerzy Rubach (p. 2908)
122. Slavic Yers, Tobias Scheer (p. 2936)
123. Hungarian Vowel Harmony, Miklós Törkenczy (p. 2963)
124. Word Stress in Arabic, Janet C. E. Watson (p. 2990)

Contributors

Akinbiyi Akinlabi is Professor of Linguistics at Rutgers University, New

Bnu1S\\'ick. He is the current President of the vVorld Congress of African Linguistics, and serves on the councils of both the West African Languages Congress and the Annual Conference on African Linguistics (USA). His expertise lies primarily in the phonologies of the Benue-Congo languages of West Africa. He has publshed articles in leading journals of theoretical and descriptive linguistics, including and His forthcon1ing book is titled:

Linguistic Inq11iry, fo11rnal of Linguistics, journal of African Languages and Linguistics. Yoruba: A phonological grammar. Adam Albright is an Associate Professor (Anshen-Chomsky Professorship in

Language, and ilind Career Development Chair) at MIT. His research interests nclude i phonology, 1norphology, and learnability, with an emphasis on using computational modeling and experimental techniques o investigate issues in phonological theory. Stephen R. Anderson is Dorothy R. Diebold Professor of Linguistics at Yale

i n 1994. Aspects of the theory of

University. After receiving s Ph.D. from MIT in he taught at Harvard, UCLA, Stanford, and Johns Hopkins Universities before coming to Yale He is the author of nun1erous articles and six books, including He has done field research on several languages roost recently the Surmiran form of Rumantsch. In addition to phonology and morphology, his research interests include animal comnn1nication systems and the evolution of human language.

1969,

clitics (2005).

,

Dian a Archangeli received her Ph.D. from MIT in

1984. She has been a faculty member at the University of Arizona since 1985 and ''as a Felio'" at the Center for Advanced Study in the Behavioral Sciences at Stanford in 2007-8. She co-authored Grounded phonology (1994) "'ith Douglas Pulleyblank, and co-edited Optima.lij Theory: An overview (1997) with D. Terence Langendoen. She is

Director of the Arizona Phonological In1aging Lab, used for ultrasound study of the articulation of language soi.mds. Amalia Arvaniti s an Associate Professor n the Department of Linguistics at the

University of California, San Diego. She received her Ph.D. fron1 the University of Ca1nbridge and has held research and teaching appointments at the Universities Copyrightd maerial

xii

Contributors

of Cambridge, Oxford, Edinburgh, and Cyprus. Her research focuses on the phonetics and phonology of prosody, with special emphasis on the experin1ental investigation and forn1al representation of rhytlun and into.nation. Bao Zhiming is a linguist working in the Department of English Language and

Literature, National University of Singapore. He has nvo main research interests: Chinese phonology and contact linguistics.

Outi Bat-El is Professor of Lingui tics at Tel-Aviv University. She is engaged in the study of Semitic phonology and morphology. In her & article, she initiated the Semitic root debate, arguing that thre is no consonantal root in Se1nitic n1orphology. Her subsequent \Vork •vithi.n the ran1evork of Optrnality Theory provided further support to this argument. She has also authored articles on Hebrevv truncation and reduplication on blends and hypocoristics as 'ell as on language acquisition

s

Linguistic Theory

1994 Natural Language

(Recherches Linguistiques de Vincennes 2003) (Language 2002) (Linguistic Inquiry 2006), (Phonology 1996) (Phonology 2005), (Langrwge Sciences 2009).

Michael Becker received his Ph.D. from the University of Massachusetts Amherst

2009,

in and is currently a lecturer at Harvard University. His \Vork focuses on the gra1nn1atical prmciples that govern lexicon organization, especially as a \vay to discover Universal biases m the phonolog ica l grammar. He a lo vvorks on the acquisition and learning of lexical patterns. Ricardo Bermudez-Otero is Senior Lecturer in Linguistics and English Language at the University of Manchester. He previously held a Postdoctoral Fellorship at

the British Academy, followed by a Lectureship in Linguistics at the University of Nevvcastle upon Tyne. His research focuses on the morphosyntax-phonology and phonology-phonetics n i terfaces, •vith particular attention to diachronic issues. language change His publications mclude rnapters m

Optilnality Theory and (2003), The handbook of English linguistics (2006), The Cambridge handbook of phonology (2007), Deponency and 111orphological n1is1natches (2007), Optimality-theoretic studies in Spanish plwnology (2007), and Morphology and its interfaces (forthcoming).

Christina Y. Bethin is Professor of Linguistics at Stony Brook Univesity, N' York,

vho 'Orks on prosody and syllable structure. She has 'ritten nun1erous articles on the diachronic and synchronic phonology of the Slavic languages, including Polish, Ukraiian, Russian, Belarusian, Serbian and Croatian, Czech, and Slovene, and published tvo award-vng boo ks and

, Polish syllables: The role of prosody in Slavic prosody: Language change and phonological

phonoloi; and morphology (1992) theory (1998). Geert Booij is Professor of Linguistics at the University of Leiden. He is the author of The phonology of Dutch (1995), The morphology of Dutch (2002), The grammar of words (2005), and Construction morphology (2010), and of a number

of linguistic articles in a \vide range of Dutch and international journals, with focus on the interaction of phonology and morphology, and theoretical issues in morphology. Anna R. K. Bosch has taught linguistics at the University of Kentucky since

1990,

vhere she s now Associate Dean for Undergraduate Progran1s and a faculty n1ember of the Lmguistics Program and English departments. She has written on Copyrightd maerial

Contributors

Xlll

Scottish Gaelic phonology and dialectology, and is currently ''orking on a new project on the history of phonetic transcription n European dialect studies. Bert Botma is associate professor at the Leiden University Centre of Linguistics (LUCL) and a research fello' at the Nethelands Organization of Scientfi ic Research (NWO). His main research interest is segmental phonology. He has published on such topics as nasal harmony, English syllable structure, and on the phonological contrast between obstruents and sonorants. Diane Brentari is Professor of Linguistics and Director of the ASL Program at

Purdue University. She has published widely in the area of sign language phonology and morphology. She is the author of (1998) and editor of (2010). Her current research involves the cross-linguistic analysis of sign languages.

A prosodic rnodel ofsign language Sign languages: A C.mbridge language sun1ei;

phonologt;

Adam Buchwald is an Assistant Professor at Ne" York University in Com1nun­ icative Sciences and Disorders. He holds a Ph.D. in Cognitive Science from Johns Hopkins University and had a post doctoral felJo,vship in the Speech Research Lab at Indiana University. His 'Ork is interdisciplinary and spans topics in linguistics, psycholinguistics, and communication sciences. Eugene Buckley is Associate Professor of Linguistics at the University of

Pennsylvania. His research interests include metrical and syllable structure, and the proper locus of phonetic and functional explanation. Much of his ''Ork focuses on native languages of North America. Luigi Buzio is Profesor Emeritus, Department of Cognitive Science, Jolu1s Hopkins

University. He has also taught at Harvard University, Department of Ron1ance Languages and Literatures. His research interests include syntax, phonology and (1986) and morphology. He is the author of (1994).

Italian syntax

Principles ofEnglish stress

Pa.rik Bye is currently a researcher "'ith the Center for Advanced Study in

Theoretical Linguistics at the University of Troms0. He has published scholarly articles on a variety of topics including the syllable structure, quantity, and stress systems of the Finno-Ugric Saami languages, North Germanic accentology, and phonologically conditioned allomorphy. With Martin Kramer and Sylvia Blaho (2007). he edited F ee o

r d m of analysis?

Charles Cairns is Professor En1eritus of Linguistics at the City University of Ne'" York. He received his Ph.D. in linguistics from Colun1bia University in 1968. Author of a number of articles in phonology, his most recent \VOrk is a co-edited volume with Eric Raimy,

phonology (2009).

Contemporary vie1vs on architecture and represi1tations in

Andrea Calabrese '"as born in Campi Salentina in the southeastern tip of Italy. He obtained his Ph.D. in linguistics at MT in 1988 and is currently teaching at

the University of Connecticut. His main interests are phonology, 1norphology, and historical linguistics. He has published n1ore than articles in books and journals such as Revierv, His recent book, (2005), proposes a theory integrating phonological rules and repairs triggered by 1narkedne.ss con­ straints into a derivational model of phonology.

50 Linguistic .lnquiry, Linguistic Studies in Language, Brain and Language, Journal of Neuro-Linguistics, and Rivista di Linguistica. 1arkedness and econom.y in a derivational rnodel of phonology

Copyrightd maerial

xiv

Contributors

Roderic F. Casali's nterests nclude phonetics, phonological theory, and deriptive vork on the phonology of African languages. His research has focused prmarily on vowel p hno1nena especa i lly vo\vel hiatus resolution and ATR VO\\•el harn1ony. He has done linguistic fieldwork in Ghana with SIL and currently teaclles lin­ guistics at the Canada Institute of Linguistics at Trinity Western University in Langley, British Columbia.

,

Young-mee Yu Cho is Associate Professor of East Asian Languages and Cultures at Ru tgers University and author of

Parmneters of consonantal assin1ilation (1999), Integrated Korean (2000), and Korean phonology and morphology (forthcoming). She has written on Korean .angu age and culture, theoretical lingui stics, and Korean pedagogy. Abigail C. Cohn is Professor of Lingu istics at Cornell University, Ithaca, Nev

Yo rk. Her research addresses the relationship between phonology and phoetics and is informed by laboratory phonology approaches. She also specializes in the description and analysis of a nun1ber of Austronesian languages of Indonesia. Her published Vork includes articles in and a number of edited volun1es and she is co-editor of the forthcoming

Phonology Oxford handbook of laboratory phonology.

Jenn if er Cole is Professor of Linguist ics and Computer Science at the University

of Illinois at Urbana-Champaign, and is a member in the Cognitive cience group at the Beckman Institute for Advanced Science and Technology. She is the and has served on the editorial founding General Editor of boards of the journals and

La.bora.tory Phonology, Language, Linguistic Inquiry,

Phonology.

Bruce Connell is baed at York University, Toronto. His research interests

include the phonetics of African languages, the relationships betveen phonetics and phonology, historical linguistics, and language endangerment and docu­ mentation. H.e makes regula r fieldtrips to Africa for research and is an authority on languages in the Nigeria-Cameroon borderland. He received his Ph.D. in His publications include Linguistics from the University of Edinb urgh in "The perception of lexical tone in Vlan1bila," 43, and "Tone languages and the universality of intrinsic FO: Evidence fron1 Africa," �(

1991. L.nguage and Speech

Phonetics 30.

Journal

Marie-Helene Cote is Associate Professor of Linguistics at the University of Otta va.

Mucll of her reearch n phonology relates to the role of perceptual factors n phono­ logical processes, the status of the syllable, and the treatment of variation. She also specizes in French phonology, Vith a focus on Laurentian (Quebec) French. Onno Crasbon is a Senior Researcher at the Deparhnent of Linguistis of Radboud

University Nijmegen, where he heads the sign language research theme of the Centre for Language Studies. After ompleting a dissertation on phonetic variation in Sign Language of the Netherlands at Leiden University, he has broadened his researcll interests beyond sign phonetics and phonology to sociol inguistics, discourse, and corpus linguistics. In he published the Corpus NGT, the first open access sign language corpus in the \Vorld.

(2001) 208

Megan J. Crowhurst is an Associate Professor in the Department of Linguistics

at the University of Texas at Austin. Her publications in thoretical phonology have concentrated on prosodic phenomena related to \VOrd stress and reduplication. Copyrightd maerial

Contributors

xv

In collaboration "'ith Mark Hewitt, she has also contributed to the literature on constraint conjunction. Her current research, conducted in an experimental paradign1, explores the ways \\1hich humans' perception of rhythm might con­ tribute to the form and frequency of stress patterns found in natural languages.

n

Stuart Davis is Professor of Linguistics at Indiana University, Bloomington.

Hs primary area of research is in phonology and phonological theory with a secondary area of reearch in the early history of linguistics in the USA. His work n phonology has especially focused on issues related to syllable struchrre and '"ord-level prosody. His work has appeared in a Vide variety of edited volumes and journals including and

Linguistic Inquiry, Phonology, Lingua, Linguistics, Ameri11n Speech. Kenneth de Jong is currntly Proessor of Li guistics, Cognitive Science and Second n

Language Studies at Indiana University. He has \vorked extensively on questions of ho"' prosodic organization pervades the details of speech production, and more generally how speech production and perception interact '"ith one another and ho"' this relates to the phonological system. He is author of more than articles on aspects of phonetic behavior and is currently Associate Editor of the

50

Journal of Phonetics.

Paul de Lacy is an Associate Professor n the Department of Linguistics at Rutgers

University, an Associate of the Rutgers Center for Cognitive Science, Co-director of the Rutgers Phonetics and Field\vork Laboratory, and editor of He works on phonology and its interfaces \vith syntax, morphology, and phonetics.

the Cmnbridge

handbook of phonology.

Laura J. Downing s a research fellow at the ZAS, Berlin, leading projects on the

phonology-focus-syntax interface in Bantu languages. She has published several articles on tone and depressor tone, morphologically-conditioned morphology, syllable struchrre and prosodic morphology Bantu languages, and a book,

Canonica/forms in prosodic 11101·phology (2006).

n

B. Elan Dresher is Professor of Linguistics at the University of Toronto. He has published on phonological theory, learnability, historical linguistics, and Vest Germanic and Biblical Hebre"' phonology and prosody. He is the author of and "Chomsky and Halle's revolution in phonology" n the Hs recent books include (co-ed., with Nila Friedberg, (coed., �vith Peter Avery and Keren Rice, and

Old

English and the theory ofphonology (1985) Cambridge companion to Chon1sky (2005). Formal approaches to poetry: IZecenl develop111ents in generative ·metrics 2006), Contrast in phonology: Theory, perception, acquisition 2008), The contrastive hierarchy in phonology (2009). San Duanmu is Professor of Linguistics, University of Michigan, "'here he has taught since 1991. He obtained his Ph.D. in linguistics from MIT in 1990 and from 1981 to 1986 held a teaching post at Fudan University n Shanghai.

Amanda Dye is a graduate student in Linguistics at Ne"' York University. She holds

a B.A. in Linguistics from Harvard University. Her past \Vork has focused on theoretical and experi1nental study of the n1orphophonology and syntax of n1uta­ tion in Welsh. Her research interests are in experimental phonology, segmental phonology, the study of Welsh, and the phonology-semantics interface. Copyrighted maerial

xvi

Contributors

Mirjam Ernestus is an Associate Professor at Radboud University in Nijmegen.

She obtained her Ph.D. rom the Free University Amsterdam in 200and then held several postdoctoral positions at the Max Planck Institute for Psycholinguistics in Nijmegen. Kathryn Flack Potts received her Ph.D. in linguistics fron1 the University of

Massachusetts, An1herst in University.

2007. She is a Lecturer in Linguistics at Stanford

Stefan A. Frisch is an Associate Professor in the Departn1ent of Comnnmication

Sciences and Disorders at the University of South Florida. He uses the tools of laboratory phonology to examine frequency effects in the lexicon and in meta­ linguistic judgments of well-formedness. He is also interested in speech articu­ lation and the use of experirnentally elicited speech errors r1 the study of speech production processes. Adamantios I. Gafos is an Associate Professor at Nev York University's Linguistics

Department and a senior scientist at Haskins Laboratories. His interests lie at the intersection of phonology and cognitive science. Randall Gess s Associate Professor of Linguistics, Cognitive Science, and French,

and Director of the School of Linguistics and Language Studies at Carleton University. His research interests are in historical phonology, the phonetics­ phonology interface, and French and Romance phonology. Heather Goad is an Associate Professor in Lmguistics at McGill University. She

works prmcipally on the acquisition of phonology. Her work has been published ir1 n c i and Her research is currently funded by the Social Sciences and Hun1anities Research Council of Canada and the Fonds quebecois de la recherche sur la societe et la culture. She has been an Associate Editor of smce

Li guist Revie1v, Lingua, Language Acquisition,

Second Language Research.

Language Acquisition 2004. Matthew Gordon is Professor of Lmguistics at the University of California, Santa Barbara. He is the author of Syllable weight: Phonetics, phonology, typoloi; (2006) and co-editor of Topic and focus: Cross-linguistic perspectives on 1 1ean.in.g and intonation (2008). His research focuses on prosody, includmg stress and mtonation, and the phonetic description of endangered languages.

Janet Grijzenhout is Professor of English Lmguistics at the University of Konstanz

and director o.f the Baby Speech Lab there. He.r research focuses on the phonolo­ gical representation of voicing and stricture, the phonology-morphology interface, infant speech perception, and the acquisition of prosody and morphology. Naomi Gurevich received a Ph.D. from the Linguistics Department at the Univer­

2003. 204 Lenition and conrast: F11nctional consequences ofcertain phonetically conditioned sound changes. sity of Illinois at Urbana-Champaign in The dissertation vas published in the Outstanding Dissertations in Linguistics Series ll under the title

Recently Naomi's research interests have shifted from theoretical lr1guistics to neurologically-based language disorders. She is currently vorking on a clinical certification in Speech Langtlage Pathology. Carlos Gussenhoven is Professor of General and Experimental Phonology in

the Departn1ent of Linguistics at Radboud University, Nijn1egen and Professor Copyrightd maerial

..

Contribu.tors

vu

of Linguistics in the School of Languages, Linguistics, and Film at Queen Mary University of London. One of s research topics is the prosodic structure of English, including \vord stress and sentence intonation. Other research has focused on stress and tone in a variety of languages, including Dutch, Japanese, Nubi, and a group of Franconian dialects \vith a lexical tone contra st Amon g his publi ca­ tions are and

The phonology of tone and intonation (2004)

(1998, 2005).

. Understanding phonology

Gregory R. Guy (Ph.D., Pennsylvania) is Professor of Linguistics at Ne\v York

University, and has been on the faculty at Sydney, Temple, Cornell, Stanford, and York. He has tau ght at five Lingtli stic Institutes of the Linguistic Society of America, and three Institutes of the Associa;ao Brasileira de Li ngilistica He speca i lzes n sociolinguistic s and phonological variation and change, and \vorks on Portuguese, English, and Spanish. His current research n i trests include the treat1nent of variation in linguistic theory, the relationship between individual and coonuni ty grammars, and tl1e theoretical treatment of grammatical sim il ar ity. Hs books and

.

include Towards a social science of language (1996)

quantitativa (2007).

Sociolingiifstica

Daniel Currie Hall received his Ph.D. from the University of Toronto in 2007

with a thesis entitled "The role and representation of contrast in phonological theory." He has taught on all three ca1npuses of the University of Toronto and at Queen's University and vorked as a researcher at the Meertens lnstituut of the Royal Netherlands Academy of Arts and Sciences, and is c urrently an Assistant Professor at Saint Mary's University n i Halifax, Nova Scotia. Nancy Hall is an Assistant Professor at California State Univ ersity, Long Beach. She received her Ph.D. from the University of Massachusetts, Amherst in 2003,

and has taught at Rutgers University, the University of Haifa, BenGurion University of the Negev, and Roehampton University. T. A. Hall

is Associate Professor of Germanic Studies and Adjunct Associate Professor of Li ngu isti cs at Indiana Un iversity. He has publ is hed "'idely on a number of topics in general phonological theory and Germanic phonology in journals such as &

Linguistics, Lingua, Natural Language Linguistic Theory, Journal ofGennanic Linguisics, Phonology, Linguistic Review, Journal ofComparative Germanic Linguistics, and Morphology. Michael Ha1nmond is Professor of Linguistics and Dep rtm en t Head at the University of Arzona. He received his Ph.D. in Linguistics rom UCLA in 1984. He is the author of nu1nerous books and articles. His research interests are broad, ncludi g the merical theory of stress, Optimality Theory, poetic meter, language games, computational linguistics, psycholinguistics, mathematical linguistics, s Uable structure, probabi listic p honota ctics, Englis h phonology general l and Welsh. S. J. Hannahs is Senior Lecturer in Lingu istics at Ne"•castle University. Co-author of Introducing phonetics and phonoloi; (2005), much of his research has focused on a

n

y

y

,

proodic structure, particuarly at the interface benveen phonol oy and morpholog y. His recent and ongoing \vork has concentrated on the phonology and norpho­ phonology of n1odern Welsh.

Copyrightd maerial

XVUJ

Contributors

John Harris, Professor of Linguistics at University College London, "'rites on various topics connected with phonology, including phonological theory, the interface \\1.th phonetics, language impairn1ent, and variation and mange in English. lnong his publications are the books (1985) and (1994).

English sound structure

Pl1onological variation and change

Jeffrey Heinz is Assistant Professor in the Department of Linguistics and Cognitive Science at the University of Delaware and has held a joint appoint1nent \v.th the department of Computer and Information Sciences since 2009. He received his Ph.D. in Linguistics from UCLA in 2007 and is keen to help bridge divides behveen theoretical phonology, computational linguistics, theoretical computer science, and cognitive cience.

n

Ben Hermans is a senior researcher at the Meertens Institute Amsterdan1. He was trained as a Slavist and Germanicist. In 1994 he defended his thesis "The con1posite nature of accent" at the Free University. He now focuses on the tonal accents of the Limburg dialects, for example, "The phonological structure of the Limburg tonal accents" (2009), and is particularly interested in their formal representation. He also publishes on the history of the phonology of Dutch and its dialects. Jose Inacio Hualde (Ph.D. in Linguistics, 1988, University of Southern California) is Professor in the Department of Spanish, Italian, and Portuguese and the Department of Linguistics at the University of Illinois at Urbana-Champaign. He is author of (1991) and (2005), co-author (1994) and co-editor of of

Basque phonology The sounds ofSpanish The Basque dialect of Lekeitio Generative studies in Basque linguistics (1993), Towards history oft/1e Basque language (1995), A ram111ar ofBasque (2003) and Laboratory phonologi; 9 (2007), among other books. He has also pub­ n

lished a number of articles on synchronic and diachronic issues in Basque and Ro1nance phonology.

Harry van der Hulst is Professor of Linguistics at the University of Connecticut. He has published four books, ''0 textbooks, and over 130 articles, and has edited 20 books and six journal theme issues in areas including feature systems and

segmental structure, syllable structure, "'ord accent systems, vovvel harn1ony, and sign language phonology. He has been Editor-in-Chief of the international since 1990 .nd he is co-editor of the series Jingtlistic journal Studies in Generative Grammar.

The Linguistic .Review

Elizabeth Hume is Professor and Chair of the Department of Linguistics at Ohio

State University. She holds a Ph.D. and M.A. in Linguistics fron1 Cornell Univer­ sity, an V.A. in French and Social Psychology of Language from McMaster University (Canada), and a B.A. in Frencl1 from Universite Laval (Quebec). Her research interests lie n language sound systems, cognitive science, language variation, and language change. She has published widely on topics including consonant-vowel n i teraction, feature theory, gemi.nates, n1arkedness, metathesis, ound change, the interplay of speech perception and phonology, and Maltese phonetics and phonology. Brett Hyde received his Ph.D. n Linguistics from Rutgers University in 201 and

currently has an appointment at Washington Univrsity in St. Louis. His primary researl1 interests are in n1etrical stress theory and related areas. Copyrightd maerial

Contributors

X1X

Larry M. Hyman is Professor of Lnguistics at the University of California, Berkeley He received a Ph.D. n Linguistics from UCLA in He has published several books (e.g. a nd numerous theoretical articles in such journals as & and .

1972. Phonology theory and analysis, A theory of phonological weight) anguage, Linguistic Inquiry, Natural Language Linguistic Theory, Phonology, Studies in African Linguistis, Journal ofAfrican Languages and Linguistics.

Sharon Inkelas is Professor of Linguistics at the Universiy of California, Berkeley. She specializes in the phonology-morphology interface and has branched out in recent years into child phonology. With co-author Cheryl Zoll she publshed

Reduplication: Doubling in morphology in 2005.

Gregory K. Iverson is Professor of Linguistics at the University of Wisconsin­ M i lwau k ee and Research Professor at the University of Maryland Center for Advanced Study of Language. His research interests span the ield s of historical linguisti s especialy Gern1anic, the phoneti s and phonology of laryngeal systems, especally Korean, and the acquisition of second language phonological patterns.

i

c,

c

Jongho Jun is Profe sor of Lingu istics at Seoul National University His research interests are p honeti s n phonology variation in phonology and the forn1al prop­ erties of Optimality Theory.

s c

,

.

,

Barr� Kabak is currently an Assistant Professor of English and General Linguistics at the University of Konstanz. Ellen M. Kaisse is Professor of Linguistics at the University of Washington. She has co-edited the journal since Her research concentrates on the interactions of phonology ' ith morphology and syntax, on distinctive features and on the phonology of Modern Greek, Turkish, and Spanish

Phonology "

1988.

.

,

Yoonjung Kang is an Assistant Professor in the Departn1ent of Humanities at the Univer i ty of Toronto, Scarborough and the Departn1ent of Linguistics at the University of Toronto. Her area of sp ecialization is phonology and its interface "ith phonetics and morphology, with a spe a i l focus on Korean.

s

c

Shigeto Kawahara is an Assistant Professor in Linguistics and Rutgers Center for Cognitive Science (RuCCs) at R utgers University. He was a\"arded his Ph.D. from the University of Massachusetts, Amherst His research focuses on the phonetics-phonology interface, experimental investigations of phonological judg1nents, corpus-base. studies of verbal art, and studies on intonation and accents.

in 2007.

Robert Kennedy is a lecturer the Department of Linguistics at the University of California Santa Barbara. His areas of expertise include redupliation, prosodic morphology, hypocoristics, vovel syste1ns of English varieties, and articulatory phonology.

,

n

in

John Kingston is a Professor the Linguistics Department, University of .vlassachusetts, Amherst. His publications include: "Phonetic kno\'ledge," (vith R. L. Diehl, "Lenition," "The phoneticpho ology "On the internal perceptual structure of distinctive features: The [voice] contrast," (\vith R . L. Diehl, C. J. rk, and W. A. Castleman, "Contextual effects on the perception of

Language 1994); Proceedings of the Third Conference on Laboratory Approaches to Spanish Phonology (2007); interface," Cambridge handbook of phonology (2007); journal of Phonetics 2008);

Copyrightd maerial

Contributors duration," fournal of Phonetics (vith S. Ka,vahara, D. Chambless, D. Mash, and E. Brenner-Alsop, 2009); "Auditory contrast versus compensation for coarticulation: Data fron1 Japanese and English listeners," Language and Speech (vith S. Ka\vahara, D. Mash, and D. Cha 1bless, in press). xx

n

Charles W. Kisseberth has just retired from his position as Professor of Linguistics at Tel Aviv University (vhere he taught from 1996 to 2010) and is also E1neritus Professor at the University of fllinois (\vhere he taught from 1969 to 1996). He is best known for his vork in theoretical phonology, '"here .is work on "conspiracies" laid the foundations for Optimality Theory, and his vork on Chimwiini prosody helped to lead the vay to current studies in the phonology­ syntax interface. His work over the last 30 years has focused on Bantu tonal systems, and Optimal Domains Thory has evolved out of that \Vork. He is co-author vith Michael Kenstowicz of (1979), a standard introduction to classical generative phonology.

Generative phonology

Alexei Kochetov is currently an Assistant Professor in the Linguistics Department

at the University of Toronto. He received his Ph.D. at the University of Toronto vith the thesis "Production, perception, and emergent patterns of pala talization" (published n i 2002). His research deals with various issues n i the phonetics­ phonology interface, cross-language speech production, and Slavic phonology and phonetics. Asrid Kraehenmann holds a Ph.D. n i Theoretical Linguistics rom Konstanz Univer­ sity, Germany. Her n reearch interests are phonology, phonetics, the phonology­ phonetics interface, historical linguistics, and Germanic languages. Haruo Kubozono is Professor and Director at the National Institute for Japanese Language and Linguistics in Tokyo. His main publications include (1993), "Mora and syllable" (in 1999), and "Where does loanword prosody coo�e .from?" 116, 206).

ofJanese prosody

Darlene LaCharite

The or.nization The handbook ofJapanese linguistics, (Lingua

is Professor of Phonetics and Phonology at Laval University.

Her areas of research include loanword phonology (in collaboration with Carole Paradis), the L2 acquisition of phonetics and phonology, and creole phonology and morphology (in collaboration with Silvia Kouwenberg). She has published in a variety of ln i guistics journals, including and

of Linguistics

Linguistic Inquiry, Phonology, Journal Journal ofPidgin and Creole Languages.

William R. Leben is Professor E1nerihls of Linguistics at Stanford University. He has worked on the phonology of tone in languages of West Arica and has also co-authored pedagogical '"'orks on .ausa. He continues to ''ork on phonology in Kwa languages of Cote d'Ivoire. The econd edition of a textbook he co-authored, was published n 2007.

ele111ents,

English vocabulary

Clara C. Levelt is Associate Professor at the Linguistics Departn1ent of Leiden University, and affiliated to the Leiden University Centre for Linguistics (LUCL) and the Leiden Institute for Bra in and Cognition (LIBC). She received her Ph.D. in 1994, and h.s '"orke. on child language phonology ever since. In 2007 she received a prestigious research grant from the Netherlands Organization for Scientific Research (NWO), \Vhich enabled her to set up a baby lab. Combining insights fro1n perception and production data, phonology and phonetics, she tries to uncover the source of children's deviating productions. .

Copyrightd maerial

Contributors

.

(1

Susannah V. Levi is an Assistant Professor in the Department of Communicative Sciences and Disorders at Ne"' York University. She received her Ph.D. n i 2004 fron1 the Departn1ent of Linguistics at the University of Washington. She then con1pleted a three-year postdoctoral fello"'srup at Indiana University in the Department of Psychological and Brain Sciences. She has "'Orked on glides, Turkish stress and intonation, and most recently on speech perception and language processing in children and adults. Yen-Hwei Lin is Professor of Linguistics at M c i higan State University, and taught at the 1997 and 203 Linguistic Society of America Linguistic Institutes. Her reearch has focused on phonological representations and constraints the theoretical context of non linear phonology /morphology and Optimality Theory. She is a uthor of (2007) and editor of 5.4, 2004). -

The sounds of Chinese and PhonoloJ (Language and Linguistics

n Special Issue on Phonetics

Anna .ubowicz is Assistant Professor of Linguistics at the Universiy of Southern CaJifornia. Her research interests lie in the i nvesigation of the role of contrast in phonology and morphology, lexical phonology, and the morphology-phonology interface. She specializes in Slavic languages and holds a Courtesy Appointment in the Department of Slavic Languages at the University of Southern California.

Andrew Martin recei ved his Ph.D. in Linguistics fro m UCLA in 2007. His research is focused on understanding early phonological and lexical learning. He is also interested in hov a language's lexicon changes over time, and hoiv a "'Ord's phonological properties affect that word's ability to survive and spread n i a speech coununity. He is currently a post-doctoral researcher at the Laboratory for Lan­ gt1age Lcuning an d Development in the RIKEN Brain Science Institute nec1r Tokyo.

Joan Mascaro studied at the Universitat de Barcelona and at vIT '"'here he got his Ph.D. in lingu istic in 1976. He has taught a t Cornell University and at the Universitat Autonoma de Barcelona, where he is currently fuJI professor. His main research areas are linguistic theory, phonological theory, the phonology-n1orphology interface, and the phonological and n1orphological analysis of Romance languages.

s

.

April McMahon is Forbes Professor of English Language, and Vice Principal for Planning, Resources and Research Policy, at the University of Edinburgh. She co-edits the journal Her research focuses on the interaction of phonological theory and sound change, and methods for the con1parison and classfication of accents and languages.

English Language and Linguistics.

Jacques Mehler is the director of the Language, Cognition, and Development laboratory at the International chool for Advanced Studies, Trieste, Italy (SISSA). After obtng a Ph.D. from Harvard University in 1964, he "'orked at the National Center for Scientific Research (CNRS) in Paris, France, from 1967 until 2001. In 1972, an international journal of cognitive science, and acted as he founded Editor-in-Chief until 2007. He became Directeur de Recherche at CNRS in 1980 and was elected Directeur d'Etudes at the Ecole des Hautes Etudes en Sciences Sociales n 1982. He has published influential experimental studies of language acquisition

Cognition,

in the first year of life, and has also explored early bilngualism. He is a member of the American Academy of Arts and Sciences (2001), the American Philosophical Society (2008) and the Academic Europaea. He \Vas ararded a Doctor Honoris Causa ron1 Utrecht University (2009), and from Universite Libre de Bruxelles (1995). His publications are available at: \V"'".sissa.it/01s/Icd/publications.html.

Copyrightd maerial

xxii

Contributors

Jeff Mielke is Assistant Professor of Linguistics at the University of Otta"a and Co-director of the Sound Patterns Laboratory. He completed his Ph.D. at Ohio State University n 2004 and undertook postdoctoral research at the Arizona Phonological Imaging Laboratory at the University of Arizona before 11oving to Otta"a in 2006. His 'vork focuses on the way phonological patterns relect influ­ ences such as physiology, cognition, and social factors. He is the author of (2008).

The

efl1erge1ce of distinctivefeatures

Amanda Miller is a Visiting Assistant Professor at Ohio State University. Her research focuses on the phonetics and phonology of African languages, particu­ larly Khoesan languages. She has studied gutturals, complex segments (clicks and labial-velars), and contour segments (affricates and airstream contours). She has investigated acoustic voice quality cues and the role of acoustic similarity in a Guttural Obligatory Contour Principle constraint; as \veil as tongue root retraction in dicks and labial-velars, and its role in C-V co-occu.r.rence patterns. She has also published papers on reduplication and the tonal phonology of Khoesan languages. Brett Miller is a graduate student at the University of Cambridge. His interests include Inda-European phonology, especially stops; the phonetics-phonology interface; feature contrast, representations, and interaction in rules and constraints; and typology the comparative method.

vis-a-vis

Bruce Moren-Duollja is a Senior Researcher at the Center for Advanced Study in Theoretical Linguistics at the University of Tron1s0. He has published on synchronic and diachronic phonology, includng Slavic palatalization, Icelandic preaspiration and Thai tones. He is the author of (2001).

sonority: A uniied theory of weight

Distinctiveness, coercion and

Marina Nespor is Professor of General Linguistics at the University of Milano-Bicocca. She has focused her research on how the phonological shape of an utterance conveys information about its syntactic structure, the so-called theory of prosodic phonology. She has also investigated how prosody is used in comprehension and during language acquisition. Her book Prosodic phonology (co-authored with I. Vogel, 1986) is a citation classic, and she has numerous articles in peer-reviewed journals.

Andrew Nevins is a Reader in Linguistics at University College London. His research has primarily focused on phonological and morphological theory, and the relations between different modules of the grammar. He has worked on locality, markedness, contrast, reduplication, the nature of underlying representations, and the structure of the morphological component. He is the author of Locality in vowel harmony (2010) and the co-editor of two books, with Bert Vaux, Rules, constraints, and phonological phenomena (2008), and with Asaf Bachrach, Inflectional identity (2008).

David Odden is Professor of Linguistics at Ohio State University. His areas of research specialization include phonological theory and language description, especially the structure of African languages. He served as Editor of Studies in African Linguistics from 2003 to 2009 and was Associate Editor of Phonology from 1998 to 2008. Recent publications include Introducing phonology (2005), "Ordering," in Vaux and Nevins, Rules, constraints, and phonological phenomena (2008), and "Tachoni verbal tonology," Language Sciences (2009).


Jaye Padgett is Professor of Linguistics at the University of California, Santa Cruz. At the broadest level his research is on the interplay of phonology and phonetics, with special attention to the role that perception plays in shaping phonological patterns. Though his research bears on larger questions, it often focuses particularly on Russian phonetics and phonology.

Carole Paradis is Professor of Phonology at Laval University. She proposed a theory of constraints in 1987, as a visiting scholar at MIT. She co-edited a book on the special status of the coronals with J.-F. Prunet in 1991. With R. Béland's collaboration, she has drawn a parallel between aphasic errors and loanword adaptations. This study, extended to other speech error types, has been published in journals such as Aphasiology, and has led to the construction of more optimal speech pathology exercises.

Steve Parker is a Professor with the Graduate Institute of Applied Linguistics (GIAL) in Dallas. He graduated from the doctoral program in linguistics at the University of Massachusetts, Amherst in 2002. He has served as a teacher and consultant with the Summer Institute of Linguistics (SIL International) for 30 years. In that capacity he has carried out direct fieldwork and research on a number of indigenous languages of South America and Papua New Guinea, two of which are now extinct.

Sharon Peperkamp is a senior researcher in the Laboratoire de Sciences Cognitives et Psycholinguistique at the École Normale Supérieure in Paris. Her main research interests are in experimental and computational approaches to speech perception and phonological acquisition.

Marianne Pouplier obtained her Ph.D. from Yale Linguistics in 2003 and is now a principal investigator of an Emmy Noether Research Group funded by the Deutsche Forschungsgemeinschaft at the Institute of Phonetics and Speech Processing, University of Munich. She has also been affiliated with Haskins Laboratories for a number of years. Her research focuses on the interaction of speech planning and motor control and the phonetics-phonology interface. In particular, she has worked on speech errors, phonetic correlates of syllable structure, and word boundary assimilation.

Pilar Prieto holds a Ph.D. in Romance Linguistics from the University of Illinois at Urbana-Champaign, and is currently working as the coordinator of the Grup d'Estudis de Prosòdia at the Universitat Pompeu Fabra in Barcelona. Her main research interests focus on how melody and prosody work in language and how they interact with other types of linguistic knowledge. She is also interested in how babies acquire sounds and melody together with grammar, and how these components integrate in the process of language acquisition.

Douglas Pulleyblank is Professor of Linguistics at the University of British Columbia. His research has focused on the phonology and morphology of Nigerian languages, particularly Yoruba. He has worked extensively in autosegmental and optimality-theoretic frameworks, examining phenomena such as tone and vowel harmony.

Eric Raimy is an Associate Professor at the University of Wisconsin-Madison. He received his Ph.D. from the University of Delaware in 1999. He is the author of The phonology and morphology of reduplication (2000) and co-edited Contemporary views on architecture and representations in phonology (2009) and Handbook of the syllable (in press), both with Charles Cairns.

Anthi Revithiadou is an Assistant Professor of Linguistics at the Aristotle University of Thessaloniki. Her work is couched within Optimality Theory, with emphasis on the structure of phonological representations and the issue of parallel grammars, and she has published in journals (Lingua, The Linguistic Review, Journal of Greek Linguistics) and edited volumes.

Curt Rice is the Vice President (Prorektor) for Research and Development at the University of Tromsø in Tromsø, Norway. He was the founding Director of the Center for Advanced Study in Theoretical Linguistics: A Norwegian Center of Excellence (CASTL), at the same university, from 2002 to 2008.

Keren Rice is University Professor and Canada Research Chair in Linguistics and Aboriginal Studies at the University of Toronto. She holds an M.A. and Ph.D. in Linguistics from the University of Toronto and a B.A. in Linguistics from Cornell University. Her research interests lie in language sound systems, contrast and markedness, interfaces with phonology, language variation and language change, and Athabaskan languages, as well as ethics and responsibilities of linguists in fieldwork. She has published on topics including feature theory, sonorants, markedness, language change, and Athabaskan phonology and morphology.

Anastasia K. Riehl is Director of the Strathy Language Unit at Queen's University in Kingston, Canada, where she also teaches in the Linguistics Program. She received a Ph.D. in Linguistics from Cornell University. Her research interests include the phonology-phonetics interface, endangered language documentation, Austronesian languages, and varieties of English.

Jason Riggle is Assistant Professor of Linguistics at the University of Chicago, and his main research areas are phonology, learnability, and computational linguistics. Much of his research focuses on the ways that specific models of grammar, learning, and communication interact to make predictions about linguistic typology, with special emphasis on the frequencies with which patterns are observed within and across languages.

Sharon Rose is Associate Professor of Linguistics at the University of California, San Diego. She has published journal articles covering topics such as consonant harmony, syllable contact, Semitic root structure, reduplication, phonotactics, and the interaction between tone and syllable structure. She specializes in African languages spoken in Ethiopia, Eritrea, and Sudan; her current research is an investigation of the Sudanese language, Moro, funded by the National Science Foundation.

Jerzy Rubach holds an appointment as Professor of Linguistics at two universities: the University of Iowa in the United States and the University of Warsaw in Poland. His expertise is primarily in Germanic and Slavic languages. He has published six books and 75 articles. His work has appeared in numerous journals, including Linguistic Inquiry, Language, Phonology, and Natural Language & Linguistic Theory.

Joseph C. Salmons is Professor of German at the University of Wisconsin-Madison and Executive Editor of Diachronica: International Journal for Historical Linguistics. His research interests include language change, sound systems, language contact, and language shift, all often involving data from the Germanic languages.


Wendy Sandler has been investigating the phonology, morphology, and prosody of American Sign Language and Israeli Sign Language for many years, beginning with her graduate work at the University of Texas at Austin, where she earned her Ph.D. Sandler has authored or co-authored three books on sign language linguistics. Currently, with colleagues Mark Aronoff, Irit Meir, and Carol Padden, she is investigating a new sign language that arose in an insulated Bedouin community in the Negev desert of Israel.

Tobias Scheer is currently Directeur de Recherche at the Centre National de la Recherche Scientifique in France. He works at the laboratory Bases, Corpus, Langage (UMR 6039) in Nice, of which he is the director. Being a phonologist, his main interests lie in syllable structure thematically speaking, in the (Western) Slavic family as far as languages are concerned, and in diachronic study. In 2004, he published a book on a particular development of GP, so-called CVCV (or strict CV), and a book on the (history of the) interface of phonology and morpho-syntax.

Misun Seo is an Assistant Professor in the Department of English Language and Literature at Hannam University in Daejeon, Korea. She received her Ph.D. in Linguistics at Ohio State University; her research interests include phonology, its interface with phonetics, and L2 acquisition.

Kimary Shahin is a phonetician/phonologist who specializes in Arabic and Salish. She also investigates first language acquisition and contributes to the documentation and revitalization of indigenous languages in Canada.

Mohinish Shukla has a background in molecular genetics, and holds a Ph.D. in Cognitive Neuroscience from the Scuola Internazionale Superiore di Studi Avanzati (SISSA), Italy. He is currently doing postdoctoral work at the University of Rochester, New York. He is interested in human cognition, with a focus on infant cognitive development, particularly the development of linguistic abilities from phonology to syntax. He is also interested in the neural bases of cognitive behavior, which he primarily explores using near-infrared spectroscopy in infants and adults.

Daniel Silverman earned his degree at UCLA in 1995 under the tutelage of Donca Steriade and Peter Ladefoged. He has published widely on phonology and phonetics, including A critical introduction to phonology: Of sound, mind, and body (2006) and Neutralization: Rhyme and reason in phonology (forthcoming). He is currently on the faculty of San Jose State University.

Jennifer L. Smith teaches phonology, phonetics, and Japanese linguistics at the University of North Carolina, Chapel Hill. Her research explores the ways in which interfaces between phonology and other domains of linguistics affect phonological constraints and representations.

Péter Szigetvári gives courses on phonology, linguistics, information technology, and typography at Eötvös Loránd University, Budapest. His main research areas include phonotactics and consonant lenition.

Nina Topintzi has taught as a Teaching Fellow in England and Greece (2006–2010) and will soon commence her appointment as Assistant Professor in Phonology at the English Department, Aristotle University of Thessaloniki. She has published in journals including Natural Language & Linguistic Theory and the Journal of Greek Linguistics. Beyond onsets, her research interests include stress, the syllable, and various weight-based phenomena from a typological perspective.

Miklós Törkenczy is a Professor at the Department of English Linguistics and the Theoretical Linguistics Department of Eötvös Loránd University, Budapest, and a senior researcher at the Research Institute for Linguistics of the Hungarian Academy of Sciences. He is co-author of The phonology of Hungarian (2000).

Jochen Trommer received his Ph.D. from the University of Potsdam, has worked as lecturer at the University of Osnabrück, and is currently Lecturer in Phonology at the Department of Linguistics at the University of Leipzig. His work focuses on the theoretical and typological aspects of phonology and morphology, especially prosody, non-concatenative morphology, hierarchy effects in agreement morphology, and affix order. His publications include "Case suffixes, postpositions, and the phonological word in Hungarian," Linguistics 46(1) (2008); "Hierarchy-based competition and emergence of two-argument agreement in Dumi," Linguistics 44(5) (2006); and "The interaction of morphology and syntax in affix order," Yearbook of Morphology 2002 (2003).

Suzanne Urbanczyk is an Associate Professor in the Linguistics Department at the University of Victoria. She received her Ph.D. from the University of Massachusetts, Amherst and has published on Salish reduplication and non-concatenative morphology in Natural Language and Linguistic Theory and Linguistic Inquiry. Her current research focuses on the role repetition plays in structuring language, and she is developing a model of morphology which is grounded in the mental lexicon, with an empirical focus on word-formation in Salish and Wakashan languages.

Adam Ussishkin is an Associate Professor in the Linguistics Department at the University of Arizona, where he also holds joint appointments in Cognitive Science and Near Eastern Studies. His areas of research include phonology, morphology, and lexical access, with an empirical focus on Semitic languages.

Bert Vaux is University Reader in Phonology and Morphology at the University of Cambridge and a Fellow of King's College, Cambridge. He is primarily interested in phenomena that shed light on the structure and origins of the phonological component of the grammar, especially in the realms of psychophonology, historical linguistics, and sociolinguistics. He also enjoys working with native speakers to document endangered languages, especially dialects of Armenian, Abkhaz, and English.

Rachel Walker is Associate Professor of Linguistics at the University of Southern California. She is the author of Nasalization, neutral segments and opacity effects (2000) and Vowel patterns in language (forthcoming). She has published widely on topics involving long-distance assimilation and copy, as seen in systems of harmony and reduplication. Her current research investigates the interaction of vowel patterns with positions of prominence in the word.

Natasha Warner has been a faculty member in Linguistics at the University of Arizona since 2001 and has worked at the Max Planck Institute for Psycholinguistics. Her interests are in the three-way interface of phonetics, experimental phonology, and psycholinguistics, with language interests in Dutch, Japanese, Korean, and English. She also works on revitalization of the Mutsun language.


Janet C. E. Watson is Professor of Arabic Linguistics at the University of Salford, UK. She has previously worked at the universities of Edinburgh and Durham, and held visiting posts at the universities of Heidelberg and Oslo. She has published extensively on Yemeni Arabic, and since 2006 has been working on the documentation of the Modern South Arabian language, Mehri. She is currently preparing a syntax of Mehri for publication. Her publications include Phonology and morphology of Arabic (2002) and A syntax of San'ani Arabic (1993).

Andrew Wedel is on the faculty of the Department of Linguistics and the Cognitive Science Program at the University of Arizona. His primary interests lie in exploring the causes for, and the interaction of, two opposing tendencies evident in language change: a tendency toward pattern-coherence, and a tendency to preserve semantically relevant contrasts. Relevant work in this domain includes "Feedback and regularity in the lexicon," Phonology 24, and, with Juliette Blevins, "Inhibited sound change: An evolutionary approach to lexical competition," Diachronica 26 (2009).

Jeroen van de Weijer is Full Professor of English Linguistics at Shanghai International Studies University (College of English Language and Literature) within the Chinese "211" Project. He is a Distinguished University Professor of Shanghai ("Oriental Scholar"), an award bestowed by the Shanghai Municipal Education Commission. He has published widely on segmental structure, Optimality Theory, English phonology, and East Asian languages. His current research is focused on combining phonological and psycholinguistic theories.

Richard Wiese has been a Professor of German Linguistics at the Philipps-Universität, Marburg, Germany, since 1996. His work concentrates on theoretical linguistics, phonology, and psycholinguistics. He has written a monograph, The phonology of German (2000), and numerous articles on issues of (German) phonology, morphology, psycholinguistics, and orthography, and is Co-editor of the book series Linguistische Arbeiten.

Ronnie Wilbur, Professor and Director of Linguistics and Professor of Speech, Language, and Hearing Sciences, received her Ph.D. in Linguistics at the University of Illinois, Urbana-Champaign. She has taught at the University of Southern California, Boston University, the University of Amsterdam, the University of Graz, Austria, and the University of Zagreb. Her 1973 dissertation "The phonology of reduplication" was reissued as a "Classic in Linguistics" in 1997. She was Founding Editor of the journal Sign Language & Linguistics, and Editor from 1998 to 2006.

Matthew Wolf received his Ph.D. in 2008 from the University of Massachusetts, Amherst. After having been a Visiting Assistant Professor at Georgetown University and a Visiting Lecturer at Yale University, he is currently a Postdoctoral Associate at Yale. His research has focused primarily on aspects of the phonology-morphology interface, including process morphology, allomorph selection, and paradigm gaps, as well as on opacity and serial versions of Optimality Theory.

Moira Yip is Professor of Linguistics (Emerita) at University College London. Previously she was at the University of California, Irvine, and Brandeis University. She received her Ph.D. from MIT in 1980, studying under Morris Halle. She has published extensively on tone, reduplication, distinctive feature theory, and loanword phonology. Much of her work has been on Chinese languages. Recently she has become interested in comparisons between birdsong and human phonology, and is now publishing in this area.

Alan C. L. Yu is Associate Professor of Linguistics at the University of Chicago. He is the author of A natural history of infixation (2007) and co-editor (with John Goldsmith and Jason Riggle) of the 2nd edition of the Handbook of phonological theory (forthcoming). His work on phonetics and phonology has appeared in Language, Phonology, Journal of Phonetics, and PLoS One.

Draga Zec is Professor of Linguistics at Cornell University. She has worked in several areas of phonology and its interfaces: the moraic theory of syllable structure, the representation of pitch accent, and both the phonology-morphology and the phonology-syntax interfaces.

Elizabeth C. Zsiga is an Associate Professor in the Department of Linguistics at Georgetown University. Her research primarily addresses the phonetic and phonological patterns that occur at and across word boundaries in connected speech. Her publications have examined a range of processes in different languages, including final consonant deletion and palatalization in English, vowel assimilation in Igbo, palatalization in Russian, tone simplification in Thai, voicing in Setswana, pitch accent in Serbian, and, most recently, nasalization and voicing in Korean and Korean-accented English.


Preface

Progress in phonology, like that in other disciplines, grows out of debate. Every journal article, every conference paper, every book chapter, every book on phonology, can be seen as a contribution to one or more discussions on some theoretical topic. New data are sought to sharpen theoretical claims, and new theories are proposed to accommodate previously undescribed data. Already familiar data paradigms are frequently appealed to in arguing that some new theory fares better than its competitors in the description of various phenomena. Arguably, the synchronic study of phonology is currently celebrating its first centenary: Ferdinand de Saussure was teaching his Cours de linguistique générale up to his death in 1913 (it was published posthumously in 1916). One hundred years of debates have yielded many insights into the sound structure of human language. We now know much more about a whole range of specific phonological phenomena, including topics such as vowel harmony, the typology of word stress systems, and the structure of affricate sounds. We know more about how sound systems interact with morphological and syntactic systems, and about the importance of taking factors like variation and frequency into account in the study of phonology. Phonologists have not developed a theory which completely captures each of those phenomena (let alone one which captures every aspect of all of them in a uniform way). But at least we have a much better picture of some of the properties which a successful theory of phonology should have.

Alongside external and universal challenges, such as university administrators who do not see the importance of something as esoteric as the study of sound systems, we can identify at least two internal dangers which challenge our field. The first is that many debates are abandoned at some point, and then forgotten, with the issues involved sometimes being rediscovered much later, without the earlier research necessarily being known to the new generation of researchers. The reasons for this are often perfectly understandable, and there is probably no way to avoid this state of affairs completely. After lengthy discussions in the literature on, say, the relationship between continuancy and place of articulation, apparently involving arguments for and against almost every logically possible view on the issue, the topic may seem to be intractable, and scholars may tire of it and move on to new topics of debate. Thus we may avoid unfruitful attempts to solve problems for which we just do not have the right tools at that particular moment.


However, these discussions will frequently have led to at least partial agreement on important properties of the phenomenon at hand, even if only that the phenomenon is extremely complicated: just positing, for instance, some feature-geometric structure in which continuancy and place of articulation are in an unambiguous universal relation to each other will encounter many problems that help us better understand the phenomenon, even if they do not lead to a generally accepted solution. However, by abandoning the debate on the topic, we run the risk of losing that knowledge. The danger is that new generations will have to rediscover the subtleties of the phenomenon when the topic is taken up again.

The second danger is over-specialization. Saussure was an all-round linguist, with a deep understanding of much of (Western) linguistics as it was known at the time. Today there are very few, if any, people who can make such a claim. Scholars with a thorough understanding of phonology may have some knowledge of neighboring disciplines such as morphology or phonetics, but usually not even of both of those. There are many more phonology talks given every year at specialized phonology conferences than there are phonology talks at general linguistics conferences. And even within the field itself there are fairly well-established dividing lines: an expert on intonation is unlikely to be completely up to date on the literature on coronals; somebody who studies the Iambic-Trochaic Law may skip the talks on sign language phonology in the local phonology workshop; somebody who works on stress may have little knowledge of segmental phonology. This means that crucial insights within one sub-discipline of phonology are becoming less and less accessible to phonologists working within other sub-disciplines.

The Blackwell Companion to Phonology does not have the ambition to offer solutions to these problems; indeed, they are probably unavoidable. Yet a tool such as this allows us to document at least some of the many insights into human language that phonologists have gathered in the past decades, and also to give an overview of what at present seem to be the major issues that those interested in sound structure are thinking and arguing about.

The Companion is in essence an encyclopedia of case studies. Each chapter addresses some topic which has been debated in one way or another in the history of our field. Authors were invited to concentrate primarily on the empirical arguments that have been put forward by the various sides in such debates. Because of this concentration on case studies, there are many topics we have ignored. For instance, there is a chapter on coronals, but not one on labials, simply because there has been much more discussion in the phonological literature on the internal structure and the behavior of coronal sounds than on their labial counterparts. Similarly, there is a chapter on palatalization, but not one on labialization, again because the former has been discussed broadly in the phonology literature while the latter has not. Some chapters have turned out more like the case studies we originally had in mind than others. Inevitably, some chapters had to be organized differently, for instance those concentrating more on a specific theoretical device (such as constraint conjunction or rule ordering) than on some empirical phenomenon. However, even the authors of these chapters were asked to provide some discussion of the data which led scholars to develop such theoretical concepts in the first place.

We are, of course, conscious of the fact that the reader will find that many possible topics are missing from the Companion, including some hotly debated ones

The Representation of Rhotics

Richard Wiese

In many varieties of English, only the vowels /ɪə eə ɑː ʊə ɔː/ appear in pre-r position (e.g. here, care, car, sure, more), while all other vowels are neutralized to /ɜ/ (which may reflect a merger with historical /r/), as in bird, word, heard. The general tendency of rhotics not to undergo palatalization, as discussed by Walsh Dickey (1997) and Hall (2000), provides another example of (negative) rhotic–vowel interaction.

(b) As discussed by Walsh Dickey (1997: 91–92), rhotics in Australian languages are often prohibited from occurring in word-initial position (see chapter 86: morpheme structure constraints). One such language is Mullukmulluk, in which the two rhotic phonemes /7/ and /r/ can appear in any position except word-initially (Birk 1975: 61). Given that Australian languages typically allow for two or three rhotic phonemes, this constraint focuses on rhotics as a class, and not on a single phoneme which happens to be a rhotic. There are other more detailed constraints on the placement of rhotics: for Warlpiri, Nash (1980: 76) notes that in a CVC sequence (with heterosyllabic Cs), the two consonants cannot be identical rhotic phonemes (i.e. two tokens from one of the phonemes /7 È P/). This constraint seems to have exceptions, but it still captures a pattern significant in its exclusive reference to the class of rhotics.

(c) For languages that allow clusters of more than one consonant to appear in onset and/or coda position, rhotics are typically assigned to the position immediately adjacent to the vowel of the respective syllable. That is, a template of the type CrVrC describes the phonotactic placement of rhotics rather well, with C standing for one or more consonants other than /r/. (8) below exemplifies such a patterning from German, but many languages with complex syllable constituents behave analogously (chapter 49: sonority).

(d) In many languages, there is a great deal of allophonic variation for the usual single r-phoneme, with the allophones standing either in free variation or in complementary distribution. But the large number of these allophones are all drawn from the inventory of rhotic sounds. The Persian language (Farsi) provides an instructive example: the phoneme /r/ has trilled [r] as its main allophone according to Majidi (1986: 63–64, 2000: 41–43), but has three to four additional rhotic allophones in complementary distribution, as shown in (5) and restated procedurally in the sketch below. The phoneme /r/ in Persian does not have non-rhotic allophones.

(5) Distribution of rhotics in Persian (Majidi 1986: 63–64)
    a. flap [7] intervocalically: /tare/ ta'[7]e 'chive'
    b. voiced fricative [p] in word-initial position: /ruz/ '[p]uz 'day'
    c. partially or completely devoiced trills [ã] adjacent to voiceless consonants and word-finally:7 /babr/ 'bab[ã] 'tiger'; /xCrkan/ xC[ã]'kan 'collector of blackthorn'
    d. voiced trill [r] elsewhere: /arzCn/ a[r]'zCn 'cheap'

7 Majidi (1986: 64) sees a tendency to distinguish the devoiced trills in terms of either partial or complete lack of voicing.
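Read procedurally, the complementary distribution in (5) is a set of mutually exclusive contexts. The sketch below restates it in Python; the function name, the boolean encoding of the contexts, and the allophone labels are illustrative stand-ins, not Majidi's formulation.

```python
# A minimal sketch of the complementary distribution in (5): given the
# context of Persian /r/, return the expected allophone. The context
# encoding and the labels are illustrative, not Majidi's notation.

def persian_r_allophone(word_initial: bool, word_final: bool,
                        next_to_voiceless: bool, intervocalic: bool) -> str:
    if word_initial:
        return "voiced fricative"   # (5b), e.g. /ruz/ 'day'
    if next_to_voiceless or word_final:
        return "devoiced trill"     # (5c), e.g. /babr/ 'tiger'
    if intervocalic:
        return "flap"               # (5a), e.g. /tare/ 'chive'
    return "voiced trill"           # (5d), elsewhere, e.g. /arzan/ 'cheap'

# /tare/: intervocalic r is a flap
assert persian_r_allophone(False, False, False, True) == "flap"
# /babr/: word-final r is a devoiced trill
assert persian_r_allophone(False, True, False, False) == "devoiced trill"
```

Since every context yields some rhotic, the sketch also makes the point of (d) concrete: the allophones differ in manner, but none falls outside the rhotic class.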


Walsh Dickey (1997: 94) concludes from her survey of alternations involving rhotics that "rhotics were overwhelmingly associated with other rhotics in alternations." In addition, rhotics of all types can be involved.

Looking beyond single variants of a language, languages with a sufficiently large number of variants/dialects may display almost all types of rhotics in the various dialects. This is true for English, for example: Ladefoged and Maddieson (1996: 235) point out that for English "we can exemplify nearly all the different forms of rhotics we have been discussing simply by reference to this one language." The same is also true for German, as the discussion in §2.3 will demonstrate. Not surprisingly, the same massive variation is found in language change. Historical change from one type of /r/ to a different one is widespread. But in striking contrast to the variability noted here and further exemplified in the next section, rhotics are quite invariable in terms of their phonotactic patterning: they are, as discussed above, the consonants occupying the position closest to vowels (where vowels include glides).

(e) Examples of restricted patterning among rhotic phonemes have been given above for the rhotic phonemes of Spanish and Catalan, for which there is partial neutralization between the two rhotics. A further example is provided by Irish Gaelic, which has /r/ and palatalized /rʲ/ as rhotic phonemes, but with the restriction of the palatalized version to word-internal contexts (Ní Chiosáin 1999: 553). In simple word-initial onsets, the distinction between the two rhotic phonemes is neutralized to /r/.

2.3 Variability of rhotics

In the fields of dialectology and sociolinguistics, many studies of sound change and variation consider the behavior of rhotics to be important linguistic variables. Most prominently in the history of sociolinguistics, Labov's studies of sound change in American English demonstrate the absence or presence of postvocalic /r/ as an important sociolinguistic variable in the East Coast varieties of American English (Labov 1966). In general, the enormous variability of rhotics makes this set of sounds highly suitable as a linguistic marker of regional, social, or age distinctions; see Scobbie (2006) for comment and more examples.

The present section is meant to demonstrate the extent of the variability of rhotic sounds and the possible speed of change for these sounds. Examples here are taken from German, but similar points could be made for Dutch, French, English, and other well-documented languages (see the contributions on various European languages in Van de Velde and van Hout 2001).

In the field of German dialectology, Göschel (1971) provides a survey of variants of prevocalic /r/ across local dialects in Germany of the 1930s. As shown in the map in Figure 30.1, /r/ across German dialects could be alveolar, retroflex, or uvular. It could also be either tap/flap, trill, approximant, or fricative. Thus, nearly all variants of /r/ listed in (1) above can be found in the dialects of a single language at one particular point in time. Focusing on Austrian German and Swiss German variants, Ulbrich and Ulbrich (2007) found a total of 12 different phonetic variants of the phoneme /r/, plus a zero realization. The unity of rhotics in this case consists in the fact that all of these varieties have exactly one rhotic phoneme, usually in corresponding positions in cognate words, but its phonetic realization varies widely across the varieties of German, with no obvious limitations.

Figure 30.1 Variants of prevocalic /r/ across local dialects in Germany in the 1930s (from Göschel 1971: 74). [Map of the articulation of r in German: alveolar [r], [7]; uvular [r]; retroflex [È], also [P]; voiced uvular or velar fricative sounds [ú], [:]. From tape-recordings of 1200 speakers in 300 locations, from the "Sound memorial of German dialects in 1936"; transcribers: G. Heike, F. Schindler, J. Göschel.]

In the same study, Göschel (1971: 98) also demonstrates that the languages of Europe display the complete range of r-sounds distinguishable by articulatory parameters. Nearly all of these languages have exactly one rhotic phoneme, confirming the point that there is a large difference between phonetic variability on the one hand and phonological uniformity on the other (see chapter 89: gradience and categoricality in phonological theory).8

Wiese (2003) analyzes the changes of rhotics found for the speakers of the dialect in southwest Germany, west of the river Rhine, as described in the dialect atlas of Bellmann et al. (1999). This dialect atlas compares dialectal features across two generations of (male) dialect speakers, those from an older generation, around 70 years old, and those from a younger generation, around 40 years old, and presents separate maps for the two generations. The comparisons presented in (6) give the results of counting all the r-related changes documented in three different lexical contexts in Bellmann et al. (1999).9

8 I am not aware of any European language without a single rhotic phoneme. Some Romance languages have two rhotic phonemes, although the distinction between them is partially neutralized (see discussions of Spanish and Irish Gaelic above).
9 A small number of points on the maps are rendered as "hard to segment." These were not counted.

9

These counts demonstrate that, from one generation of dialect speakers to the next, up to 60 percent of rhotics are changed in their phonetic realization. What remains is a single rhotic phoneme in the respective word. (6)

Intergenerational changes a.

b.

c.

Berg ‘mountain’ change in type of /r/ no change Rose ‘rose’ change in type of /r/ no change fahren ‘go, drive’ change in type of /r/ no change

194/327 ≈ 60% 133/327 ≈ 40% 93/296 ≈ 31% 203/296 ≈ 69% 106/313 ≈ 34% 207/313 ≈ 66%

Word-initial or syllable-initial rhotics change in about a third of the tokens, while rhotics in the coda, as in the lemma Berg, showed a large number of r-realizations for both generations of speakers. Furthermore, the most r-realizations changed across the generations. There is no reason to think that this phenomenon is restricted to this particular dialect. An even more striking case of such a change in rhotic realization is reported by Enderlin (1911: 168). In his study of the Alemannic (Swiss German) dialect of Kesswil, he notes that at the local school all 1st grade students had uvular [ö], while all 9th grade students had alveolar [r]. He also reports a 50 percent realization of both forms for 4th and 5th grade students. In general, postvocalic, rhymal rhotics seem to be subject to more variation than rhotics in onset positions. In Table 30.2, these changes are classified according to the type of change found in the same dialect map, that for Berg ‘mountain’ in Bellmann et al. (1999: 463). The r-sounds listed next to “older generation” are those found for the older speakers; while the first row lists rhotics found for the younger generation. The check marks denote intergenerational changes for particular pairs of rhotics (i.e. in some particular location as selected in the dialect atlas) through the comparison of the two maps.

Older generation

Table 30.2 r-conversions found for one lemma (Berg ‘mountain’)

r

ö

Younger generation > > H

7















ö















>















>















H















C















Ø















C

Ø


Although r-pronunciation may vary considerably as a result of dialectal change, the two related varieties studied display a large range of rhotics. In other words, dialectal change does not lead to a neutralization between different realizations of /r/ or to a proliferation of rhotic phonemes.

The continuous and omnipresent change from one type of rhotic to another is also demonstrated on a different level in the acoustic measurements presented by Ladefoged and Maddieson (1996: 219f.) for individual tokens: their spectrograms of r-sounds from a speaker of Standard Italian show that these sounds, usually classified as alveolar trills, may display an approximant, possibly vowel-like, phase, preceding and/or following the opening/closing phase characteristic of trills.

Many other languages display a range of phonetic variants of /r/; in §2.2 above, Persian was shown to have a trill /r/ phoneme, with a whole range of allophonic variants, all of which are rhotics. For more examples of this type see Lindau (1985: 158–159). Walsh Dickey (1997: 96–97) reports that different dialects of Portuguese show a similar and very wide range of variants of the (two) rhotic phonemes. Here and in the case of the southwest German dialect discussed above, even individual realizations of the r-phoneme are not confined to particular manners or places of articulation.

3 The featural basis of rhotics

(1) demonstrated the articulatory variability of rhotics. There is simply no articulatory feature there which is shared by all rhotics, and it is hard to see what other, possibly more general, articulatory feature might do the job. Major class features as used in theories of phonology since the proposals advanced by Chomsky and Halle (1968) are of no help either (see chapter 13: the stricture features; chapter 17: distinctive features). In order to see this, consider the values for the classificatory features [continuant], [sonorant], and [consonantal], taken here to be binary features. Rhotics are not uniform with respect to any of these features: taps and flaps are usually classified as [−continuant], while approximants and trills are [+continuant]; fricatives are [−sonorant], while all other rhotics are [+sonorant]. As for [consonantal], rhotics are [+consonantal], but this class is far too comprehensive in order to be of help in the classificatory definition. Furthermore, rhotic vowels might well be included in the class of rhotics; see discussion in §4. (For further discussion of these features for rhotics, see Hall 1997: 124–128.)

Phonetic studies have therefore explored the possibility that a description of rhotics might be given in acoustic rather than articulatory terms. Lindau (1978), in particular, explores the acoustics of rhotics, finding that many r-sounds across languages are characterized by a lowered third formant. However, for other rhotics, again cross-linguistically, the third formant was actually found to be very high (for summaries, see Lindau 1985 and Ladefoged and Maddieson 1996: 244–245).

The conclusion drawn by Lindau (1985) and Ladefoged and Maddieson (1996) is that there is no possibility of a description establishing the unity of all rhotics as a class by means of phonetically based features, either articulatory or acoustic. Lindau further concludes that there are family resemblances between neighboring subclasses of rhotics, but that there are no overall characteristics common to all the subclasses. Similarly, Kohler concludes that a positive characterization of the phoneme /r/ in German, encompassing all its allophonic variations, is not possible, even for a single speaker: "only a negative characterization is possible" (Kohler 1995: 156). Ladefoged and Maddieson (1996: 245) summarize on an even more pessimistic note:

This conclusion would deny the existence of rhotics as a phonetically defined class, and is rather pessimistic on the possibility of providing any coherent descriptions in phonetic terms. It falls back on the conventions of alphabetic writing, while these are themselves obviously in need of explanation. Conceivably, the spelling of some (class of) sounds by means of the letter exerts some influence on the paths of historical change of these sounds. But to assume that this spelling has a pervasive cross-linguistic influence and thereby constitutes the sole basis of the development of a class of rhotic sounds worldwide does not seem to be well founded. Other well-documented sound changes such as spirantizations, consonant losses, or vowel shifts do not seem to be restricted by the spelling systems. For example, the Second Germanic Consonant Shift, changing /p/, /t/, and /k/ to fricatives or affricates (Iverson and Salmons 2006), was not prevented by the fact that the spelling of affected words was changed. Both Lindau (1985) and Ladefoged and Maddieson (1996), while noting the lack of a convincing segmental definition, emphasize the role of rhotics as a phonologically relevant class, along the lines discussed in §2 above. In order to express this unity, Hall (1997) proposes to use a feature [±rhotic] as a classificatory feature for the rhotic/non-rhotic distinction. However, a substantive definition of this abstract feature does not seem to be available. In yet another attempt, Walsh Dickey (1997) proposes to define rhotics by means of the feature Laminal, the use of the tongue blade as opposed to the tongue tip, as an articulator subordinated in the feature hierarchy to the articulator Coronal, as presented in (7). (7)

Feature structure of rhotics (Walsh Dickey 1997: 106) [liquid] Coronal Apical Laminal

However, it is questionable whether all rhotics make use of this feature structure. It is certainly not the case for uvular rhotics. Furthermore, it is unclear why such a marked segment class (expressed here by a deep hierarchical stacking of several

12

Richard Wiese

place features, as opposed to an underspecified representation indicating an unmarked class; see chapter 27: the organization of features; chapter 7: feature specification and underspecification) should be present in the majority of languages. The question of manner features for rhotics is even more pressing: if the degree of opening for rhotics may range from vocalic to fricative, as the survey given above seems to suggest, it is unclear how the class of rhotics may be characterized as a whole. Furthermore, this raises the puzzle how segments at the extreme ends of this dimension are to be classified: for example, when is a voiced uvular fricative [ú] a rhotic, and when is it not? The following section will propose an approach from a different angle. The failure to find a common denominator for rhotics in terms of acoustic or articulatory features does not preclude the possibility that it can be found eventually, but chances seem to be slight.

4

Alternative proposals

Featural descriptions proposed for rhotics, as discussed in the preceding section, have in common that they attempt to characterize rhotics in purely segmental terms. But we have already seen above that rhotics are tightly connected to their positions within larger phonotactic patterns, at least with respect to the conditions for their allophonic variants or for patterns of complementary distributions. This observation raises the question whether rhotics should in fact be described in terms of purely segmental categories, or in segmental terms at all. An alternative view would be to capitalize on the observation that rhotics appear in a particular well-defined syllabic position, namely the one immediately adjacent to the vowel. This view, proposed by Wiese (2001), and previously by Selkirk (1984), relies primarily on the apparently uniform behavior of all types of rhotics in terms of their syllabic constraints, and suggests that this is in fact the defining and constant property of the class of rhotics. On this view, the search for constant segmental properties is futile, because it starts from an incorrect presupposition, i.e. that classificatory features are by necessity segmental features. In contrast, it seems that rhotics are very stable with respect to their phonotactic behavior. In particular, their slot in the structure of a syllable does not seem to change with a change in their segmental make-up. For example, Hall (1993) discusses a (lower Rhine) variety of German in which the rhotic phoneme, in coda position, varies between a vowel and a voiceless fricative (Tor [tos] ‘gate’ vs. Sport [œpDøt] ‘sport’). In other words, a fricative rhotic obeys the same constraints on syllabic placement as a trill or approximant rhotic, or even a rhotacized vowel. The proposal then is that rhotics are defined as a particular relative point on the sonority scale, the point between vowels and laterals. Another pattern discussed in Wiese (2003) casts doubts on the segmental approach to rhotics as a class. (8) presents those onset clusters of present-day Standard German which consist of any stop followed by a sonorant (the velar nasal /I/ is excluded from such clusters on principled grounds). Examples given in the cells of the table include rather marginal clusters (such as /pn/ or /tm/ occurring in a few rare words only), but in all of these rare cases, there is no tendency to replace the clusters in question by a more natural cluster or to break it up by a process of epenthesis.

The Representation of Rhotics (8)

13

Initial clusters of stop + sonorant in Standard German /p/ /b/ /t/ /d/ /k/ /g/

/r/ Preis braun Traum drei Kreis grau

/l/ Platz blau — — klug Glaube

/n/ Pneu — — — Knie Gnade

/m/ — — Tmesis — Khmer Gmünd

Not all possible stop–sonorant clusters are attested. As shown in (8), /r/ can be combined with any of the existing stops, while all other sonorant consonants are restricted in some way or other with respect to the preceding stops. Basically, all homorganic clusters, such as /tl/ or /pm/, are ill-formed in (8), but there might be additional restrictions ruling out /bn/ and /dm/. For discussion of these, see Hall (1992: 65–80) and Wiese (2000: 261–269). But crucially, in contrast to the nonrhotic sonorants, no restrictions at all hold for stop–sonorant clusters involving /r/. As the examples in (8) demonstrate, this phoneme productively combines with any preceding stop, and none of the clusters involving /r/ is marginal. While it would be possible to simply make /r/ exempt from the ban on homorganic clusters, such a move would simply beg the question why this is the case. It seems more plausible to say that place features of /r/ simply do not count, or, in an underspecification approach to features, are not present (see chapter 22: consonantal place of articulation; chapter 7: feature specification and underspecification). The homorganicity ban in place for other sonorants cannot apply, then, leading to the complete set of stop–/r/ clusters shown in (8). Similar patterns can also be demonstrated for other clusters in German (such as fricative–stop clusters), and for similar clusters in other languages (e.g. English, Italian, Basque, Lithuanian). This line of argumentation, arguing that rhotics are defined as those sounds which bear a sonority value between that of vowels (including glides) and the next lower sonority class, is supported by the fact that the freedom of rhotics to combine with a preceding stop is independent of the particular type of the r-phoneme present in the respective variety of German – place features as well as manner features of the rhotic phoneme are always irrelevant. Finally, the proposal gives an answer to a puzzle noted above, namely that one and the same segment may sometimes be classified as a rhotic and sometimes not. This is particularly obvious in the case of voiced uvular fricatives: for Standard French, [ú] is generally seen as a rhotic; for Classical Arabic, what is apparently the same segment is not (Watson 2002: 13). But if rhotics are defined in terms of their phonotactic behavior, this is less mysterious: in French, the segment in question appears between obstruents and vowels, as in the initial cluster in frais [fúe] ‘fresh’ or in the final cluster in carte [kaút] ‘card’, while in Classical Arabic the respective segment patterns with other fricatives in terms of phonotactics, even after the rhotic [r] as in farı [farú] ‘width’. In addition, Arabic has a tap or trill /r/ which is analyzed as a rhotic, because it behaves accordingly in terms of phonotactic patterning: e.g. San’ani Arabic sirt ‘I/you (masc sg) went’; hribt ‘I fled’ (Watson 2002: 67, 73). Dutch is another language which shows both a phoneme /r/ (with a great deal of variation) and a voiced velar or uvular fricative /:~ú/ (see Booij 1995: 7–8).

Richard Wiese

14

In summary, there is some evidence for a description of the rhotic class in terms of sonority (chapter 49: sonority). First, this hypothesis explains the constant phonotactic position of rhotics in the syllable, and second, it explains the similarity of rhotics to vowels across languages, namely the phenomena of r-vocalization and of rhotic–vowel interaction. This view also abstracts away from all types of segmental realizations for rhotics, a move justified by their non-uniform behavior with respect to segmental variability, and is able to accommodate the fact that a segment sometimes acts as a rhotic and sometimes does not. The proposal does not mean that specific rhotics are necessarily unspecified for place and/or manner; obviously such a specification is needed at least for languages with more than one rhotic. Rhotacized vowels, which have largely been ignored so far, can also be brought under the sonority-based definition of rhotics. Consider the final segments in the words in (9), from American English and German. These are, arguably, vocalized variants of /r/ in the two languages. In other words, rhotacized vowels are rhotics functioning as syllabic nuclei or as glides in postvocalic position. (9)

Rhotacized vowels a.

American English ladder [lædÌ] beer [b>r]

b. German Leiter [la>tô] Bier [bis]

‘ladder’ ‘beer’

In terms of segmental features, it would be difficult, if not impossible, to give a definition of rhotics which includes these or similar vowels. In terms of a sonoritybased definition, these vowels naturally fit into the class to which consonantal rhotics belong as well. This is because all sonorant consonants (/r l n m/) in English and German can be realized as syllabic nuclei. The only difference between /r/ and the remaining sonorants is that the latter are not vocalized.

5 5.1

Related phenomena Rhotacism

Changes of some sounds into an /r/ are repeatedly found in languages, and have been given a name of their own, rhotacisms (Barry 1997; Catford 2001).10 Rhotacisms may lead to systematic alternations, such as those in (10) for Latin and Germanic. (10)

Rhotacisms a.

10

Latin nom sg ius os opus

gen sg iuris oris opera

‘law’ ‘mouth’ ‘work’

Speech disturbances involving r-sounds, especially in language acquisition, are also called rhotacisms. These will not be discussed here. It seems that the alveolar trill is particularly difficult to acquire, again raising the question of why its overall frequency is so dominant (see §2.1).

The Representation of Rhotics b.

Germanic maiza huzd dius (gen diuzis) marzjan

West Germanic mero hord, hort deor, tior meriian, merren

15

‘more’ ‘treasure’ ‘animal’ ‘disturb’

Latin rhotacism (10a) is analyzed by Walsh Dickey (1997: 82) as intervocalic voicing, on the basis of the assumption that there are no alveolar continuant consonants in the Latin phonemic system other than /s/ and /r/. The transfer of [+voice] from a neighboring vowel to /s/ would then lead to /r/ if this rule is constrained by structure preservation (because there is no /z/). Germanic rhotacism, exemplified in (10b), seems to be a similar phenomenon, but is not restricted to intervocalic contexts. The sources (e.g. Braune and Reiffenstein 2004: 80) argue that the /z/ undergoing rhotacism was a voiced sound already, itself derived from Germanic voiceless /s/ through Verner’s Law. This /z/ would spontaneously undergo rhotacism in West Germanic and North Germanic (but not in Gothic). The sound change from /s/ (perhaps via /z/) to /r/ seems to be the most common type of rhotacism, as in the examples of (10), but other changes, such as /n/ to /r/, are found as well (see Catford 2001).

5.2

r-epenthesis and metathesis

Rhotics are also found quite often as epenthetic consonants. Given that epenthetic consonants are usually those regarded as unmarked (glides, /?/, and /t/ being the most obvious examples), this is somewhat surprising. On the other hand, the phonetic variability of rhotics, ranging from a fricative to a glide, may make an r-sound particularly suitable as an epenthetic segment. Varieties of English can be sorted into rhotic dialects (roughly those with a rhotic in the syllable coda) and non-rhotic dialects (those without a rhotic in the coda). The latter dialects, in turn, are often classified into those displaying so-called “linking r” and those displaying “intrusive r.” In the “linking r” varieties, /r/ is absent in, e.g. near ([n>H]) and car ([kA(]), but surfaces in near him ([n>P>m]) and car is ([kAP>z]). The “intrusive r” type of varieties goes beyond linking r, and provides a well-known example of r-epenthesis in additional contexts, as in India and [>nd>HrHnd]. Uffmann (2007) discusses the phenomenon of r-epenthesis in these English dialects, and argues that the choice of /r/ instead of some other unmarked consonant is by no means arbitrary. Rather, /r/ provides the optimal consonant in a non-margin position because of its value on the sonority scale (see also Ortmann (1998: 58): “r is the default approximant”). A margin position is one at a domain edge, such as a foot or word boundary, while all other positions are non-margin positions. Thus /r/ is the consonant to be chosen in the internal position of the English example above, while glottal stop or some other unmarked stop is typically inserted in a margin position. This analysis again relates rhotics to a particular place on the sonority hierarchy, without reference to their segmental features. Rhotics may also be involved in metathesis between vowels and consonants (chapter 59: metathesis); see Blevins and Garrett (1998: 516ff.) for a discussion of rhotic metathesis in the Le Havre dialect of French. Given that rhotics occur adjacent to vowels, as noted above, and that metathesis reverses the order of adjacent segments, this is not surprising. Blevins and Garrett’s analysis involves

16

Richard Wiese

the assumption that rhotics involve a lowering of the third formant in r-sounds, an assumption, however, which does not seem to be valid for rhotics in general (see discussion in §3 above).

5.3

Liquids

Consonants of the class comprising both rhotics and laterals, the latter most commonly represented by the alveolar approximant /l/ (chapter 31: lateral consonants), are often called “liquids” (from Latin liquida ‘fluid (adj; neut pl)’, but the concept is utilized in Greek metrics to refer to the larger class of rhotics, laterals, and nasals). Greek and Latin grammarians noted at an early stage that such liquids share some properties, and form a unified class. In particular, Latin clusters consisting of a stop followed by either /l/ or /r/, the so-called muta cum liquida, behave like single consonants with respect to Latin syllabification. Furthermore, the muta cum liquida are the only true consonantal clusters allowed as complex onsets, along with clusters with initial fricatives, as in frater ‘brother’ and flumen ‘river’ (Marotta 1999: 298). In other languages as well, rhotics and laterals form a single phonological class, thus reinforcing the unity of liquids as a class. In a number of languages, e.g. Korean, Maori, and Japanese, there is widespread variation between a rhotic (flap) [7] and lateral [l] as realizations of a single phoneme. This phoneme may therefore most adequately be interpreted as an otherwise unspecified liquid consonant. The variation may be one either of free variation or of complementary distribution (see Rice 2005 for more examples). Another property shared by liquids is their similarity in terms of the sonority hierarchy: if rhotic and lateral liquids are different at all with respect to their sonority value taking the same position within the syllable, they occupy adjacent positions on this hierarchy, with laterals occupying the less sonorous position. For Germanic languages, there is evidence that the latter is true: rhotics are more sonorous than laterals and thus occupy the position closer to the vocalic nucleus: e.g. curl in rhotic dialects of English or German Kerl ‘guy’.

6 Final discussion

The present chapter has provided rich and varied evidence for the phonetic variability of rhotics. This variability, between and within languages, dialects, and even idiolects, stands in stark contrast to the phonological unity and consistency of rhotics. It is this unity which justifies a unified representation of rhotics. Rhotics therefore turn out to be a significant test case for the study of the relation between phonetics and phonology. On the one hand, no analysis has been successful in demonstrating that rhotics form a class in phonetic terms, and the variability of rhotics within individual languages and across languages is impressive. On the other hand, there are various strands of evidence showing that rhotics do form a single class in terms of their phonological behavior. This uniform behavior may best be described as a syllable-related patterning. One conclusion, therefore, is that the phonological level is not necessarily a segment-by-segment abstraction from the phonetic one. If this picture comes close to the truth, it also provides evidence for a view of the relation between phonetics and phonology according to which there can be a considerable difference between these two levels. In some recent theories of phonology, emphasis has been placed on the seamless integration of phonetic and phonological representations. In the model of Articulatory Phonology, for example, all representations consist of articulatory gestures, i.e. movements in the vocal tract, which constitute the domain for phonological as well as phonetic representations (for a survey see Browman and Goldstein 1992; chapter 5: the atoms of phonological representation). In a treatment of English /r/ and its alternations with "zero," McMahon et al. (1994: 303) argue that /r/ consists of two articulatory gestures, a palatal constriction and a pharyngeal constriction. It remains to be seen to what extent this approach is adequate for other rhotics, in particular the prototypical trill and the uvular varieties. The discussion of rhotics in this chapter raises the question of whether a gestural – or any other – representation in terms of uniform and rather concrete units is possible for the characterization of rhotics.

REFERENCES

Bammesberger, Alfred. 1982. Essentials of Modern Irish. Heidelberg: Carl Winter Verlag.
Barry, William J. 1997. Another R-tickle. Journal of the International Phonetic Association 27. 34–45.
Bellmann, Günter, Joachim Herrgen & Jürgen Erich Schmidt. 1999. Mittelrheinischer Sprachatlas, vol. 4: Konsonantismus. Tübingen: Niemeyer.
Birk, David B. W. 1975. The phonology of MalakMalak. In Sharpe et al. (1975), 59–78.
Blevins, Juliette & Andrew Garrett. 1998. The origins of consonant–vowel metathesis. Language 74. 508–556.
Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press.
Braune, Wilhelm & Ingo Reiffenstein. 2004. Althochdeutsche Grammatik, vol. 1: Laut- und Formenlehre. Tübingen: Niemeyer.
Browman, Catherine P. & Louis Goldstein. 1992. Articulatory phonology: An overview. Phonetica 49. 155–180.
Catford, J. C. 2001. On Rs, rhotacism and paleophony. Journal of the International Phonetic Association 31. 171–185.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Enderlin, Fritz. 1911. Die Mundart von Kesswil im Oberthurgau: Mit einem Beitrage zur Frage des Sprachlebens. Frauenfeld: Huber & Co.
Foulkes, Paul & Gerard J. Docherty. 2000. Another chapter in the story of /r/: "Labiodental" variants in British English. Journal of Sociolinguistics 4. 213–222.
Göschel, Joachim. 1971. Artikulation und Distribution der sogenannten Liquida r in den europäischen Sprachen. Indogermanische Forschungen 76. 84–126.
Hall, T. A. 1992. Syllable structure and syllable-related processes in German. Tübingen: Niemeyer.
Hall, T. A. 1993. The phonology of German /r/. Phonology 10. 83–105.
Hall, T. A. 1997. The phonology of coronals. Amsterdam & Philadelphia: John Benjamins.
Hall, T. A. 2000. Typological generalizations concerning secondary palatalization. Lingua 110. 1–25.
Harris, James W. 1969. Spanish phonology. Cambridge, MA: MIT Press.
Haspelmath, Martin, Matthew S. Dryer, David Gil, Bernard Comrie & Hans-Jörg Bibiko (eds.) 2005. The world atlas of language structures. Oxford: Oxford University Press.
Hulst, Harry van der & Nancy Ritter (eds.) 1999. The syllable: Views and facts. Berlin & New York: Mouton de Gruyter.
IPA. 2007. Handbook of the International Phonetic Association: A guide to the use of the International Phonetic Alphabet. Cambridge: Cambridge University Press.
Iverson, Gregory K. & Joseph C. Salmons. 2006. Fundamental regularities in the Second Consonant Shift. Journal of Germanic Linguistics 18. 45–70.
Jagst, Lothar. 1975. Ngardilpa (Warlpiri) phonology (language of the Warnayaka tribe, a subtribe of the Warlpiri tribe). In Sharpe et al. (1975), 21–57.
Kohler, Klaus J. 1995. Einführung in die Phonetik des Deutschen. 2nd edn. Berlin: Schmidt.
Labov, William. 1966. The social stratification of English in New York City. Washington, DC: Center for Applied Linguistics.
Ladefoged, Peter & Ian Maddieson. 1996. The sounds of the world's languages. Oxford & Malden, MA: Blackwell.
Lindau, Mona. 1978. Vowel features. Language 54. 541–563.
Lindau, Mona. 1985. The story of /r/. In Victoria A. Fromkin (ed.) Phonetic linguistics: Essays in honor of Peter Ladefoged, 157–168. Orlando: Academic Press.
Lipski, John M. 1990. Spanish taps and trills: Phonological structure of an isolated opposition. Folia Linguistica 24. 153–174.
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University Press.
Majidi, Mohammad-Reza. 1986. Strukturelle Grammatik des Neupersischen (Farsi), vol. 1: Phonologie: Eine paradigmatisch-syntagmatische Darstellung. Hamburg: Buske.
Majidi, Mohammad-Reza. 2000. Laut- und Schriftsystem des Neupersischen. Hamburg: Buske.
Marotta, Giovanna. 1999. The Latin syllable. In van der Hulst & Ritter (1999), 285–310.
McMahon, April, Paul Foulkes & Laura Tollfree. 1994. Gestural representation and Lexical Phonology. Phonology 11. 277–316.
Nash, David. 1980. Topics in Warlpiri grammar. Ph.D. dissertation, MIT.
Ní Chiosáin, Máire. 1994. Irish palatalisation and the representation of place features. Phonology 11. 89–106.
Ní Chiosáin, Máire. 1999. Syllables and phonotactics in Irish. In van der Hulst & Ritter (1999), 551–575.
Odden, David. 2006. Kimatuumbi: Phonology. In Keith Brown (ed.) Encyclopedia of language and linguistics. 2nd edn, vol. 6, 198–206. Oxford: Elsevier.
Ortmann, Albert. 1998. Consonant epenthesis: Its distribution and phonological specification. In Wolfgang Kehrein & Richard Wiese (eds.) Phonology and morphology of the Germanic languages, 51–76. Tübingen: Niemeyer.
Padgett, Jaye. 2003. Systemic contrast and Catalan rhotics. Unpublished ms., University of California, Santa Cruz.
Rice, Keren. 2005. Liquid relationships. Toronto Working Papers in Linguistics 24. 31–44.
Schiller, Niels O. 1999. The phonetic variation of German /r/. In Matthias Butt & Nanna Fuhrhop (eds.) Variation und Stabilität in der Wortstruktur: Untersuchungen zu Entwicklung, Erwerb und Varietäten des Deutschen und anderer Sprachen, 261–287. Hildesheim: Olms Verlag.
Scobbie, James M. 2006. (R) as a variable. In Keith Brown (ed.) Encyclopedia of language and linguistics. 2nd edn, vol. 10, 337–344. Oxford: Elsevier.
Selkirk, Elisabeth. 1984. On the major class features and syllable theory. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 107–136. Cambridge, MA: MIT Press.
Sharpe, Margaret C., Lothar Jagst & David B. W. Birk (eds.) 1975. Papers in Australian linguistics 8. Canberra: Australian National University.
Uffmann, Christian. 2007. Intrusive [r] and optimal epenthetic consonants. Language Sciences 29. 451–476.
Ulbrich, Christiane & Horst Ulbrich. 2007. The realisation of /r/ in Swiss German and Austrian German. In Jürgen Trouvain & William J. Barry (eds.) Proceedings of the 16th International Congress of Phonetic Sciences, 1761–1764. Saarbrücken: Saarland University.
Van de Velde, Hans & Roeland van Hout (eds.) 2001. 'r-atics: Sociolinguistic, phonetic and phonological characteristics of /r/. Brussels: Université Libre de Bruxelles.
Walsh Dickey, Laura. 1997. The phonology of liquids. Ph.D. dissertation, University of Massachusetts, Amherst.
Watson, Janet C. E. 2002. The phonology and morphology of Arabic. Oxford: Oxford University Press.
Wheeler, Max W. 2005. The phonology of Catalan. Oxford: Oxford University Press.
Whitley, M. Stanley. 2003. Rhotic representation: Problems and proposals. Journal of the International Phonetic Association 33. 81–86.
Wiese, Richard. 2000. The phonology of German. Oxford: Oxford University Press.
Wiese, Richard. 2001. The phonology of /r/. In T. A. Hall (ed.) Distinctive feature theory, 335–368. Berlin & New York: Mouton de Gruyter.
Wiese, Richard. 2003. The unity and variation of (German) r. Zeitschrift für Dialektologie und Linguistik 70. 25–43.

31 Lateral Consonants

Moira Yip

Laterals are extremely common, and yet they are something of a phonological puzzle. For most consonants, there is fairly general agreement on which natural classes they belong to, and clear expectations of how they pattern in phonological processes. So, for example, [m] is Labial and [+nasal], and as such it patterns consistently with the other labials [p b] and the other nasals [n ŋ]. But for laterals this is not the case, as we shall see. Their behavior is highly variable across languages. This chapter gives an overview of their somewhat perplexing behavior, summarizes extant proposals for their phonological representation, and ends by advocating a proposal using violable constraints that allows for this variability. I begin in §1 with some background on their articulation and acoustics, and how they are acquired by the child. §2 summarizes the types of laterals found in natural language. §3 surveys their roles in syllable structure. §4 discusses their sonorancy, voicing, and continuancy characteristics. §5 looks at alternations between laterals and other sounds. §6 asks whether there is a feature [lateral]. §7 looks at the positioning of [lateral] in a theory of feature geometry. I show that there are serious problems associated with any single choice of superordinate node, and instead in §8 I propose an approach based on violable feature co-occurrence constraints. §9 concludes.

1 Phonetics, perception, and acquisition

1.1 Types of laterals and their frequency in the world's languages

Over 80 percent of languages have one or more lateral consonants (Maddieson 1984). Laterals are defined by Ladefoged and Maddieson (1996: 183) as "sounds in which the tongue is constricted in such a way as to narrow its profile from side to side so that a greater volume of air flows around one or both sides than over the center of the tongue." The palatograms in Figure 31.1 show clearly that the tip of the tongue makes contact with the roof of the mouth, but the sides do not.

Figure 31.1 Linguo-palatal contact profiles of dark and light laterals (panels "Dark /l/" and "Light /l/") for four speakers of American English (AK, PK, MI, SC), showing mid-sagittal contact in the alveolar and pre-palatal regions but no or limited contact at the sides. From Narayanan et al. (1997)

The approximant versions have some turbulence at the (incomplete) stricture; resonant versions, like Standard German coda /l/, do not. Since the vocal tract is not fully obstructed in the typical approximant /l/, laterals are among the most sonorous of consonants (see chapter 49: sonority). They frequently contrast with other sonorants, especially rhotics, and the class of laterals and rhotics is referred to as the liquids (see chapter 8: sonorants). In other languages (such as Japanese), there is only a single liquid, and it may vary between a lateral [l] and a more rhotic tap or flap [ɾ]. The variation may depend on context (such as syllable position), or the two may be in free variation. Speakers of such languages famously have trouble perceiving and producing the difference between [l] and [r] when learning languages like English (see Iverson et al. 2003). There are also lateral obstruents, which will be discussed below.

1.2 Articulation, acoustics, and acquisition of laterals

Most laterals are dental or alveolar in articulation, but the tongue body is also frequently implicated, as shown for English by Sproat and Fujimura (1993). Gick et al. (2006) studied the articulation of laterals in six languages: Western Canadian English, Quebec French, Serbo-Croatian, Korean, Beijing Mandarin, and Squamish Salish. They found that all their laterals had an anterior tongue gesture in all syllable positions. More interestingly, in coda position all also have a dorsal gesture, which starts slightly earlier than the anterior gesture. Some, but not all, have a dorsal gesture in onset position too, in which case it is roughly simultaneous with the anterior gesture. The dorsal gesture in coda position may result from biomechanical causes, such as "active lateral compression of the tongue [. . .], which could result in a consistent but small non-phonological tongue dorsum backing, simply as the result of volume displacement" (Gick et al. 2006: 69). In onset position, however, the dorsal gesture may be phonological. The amount of dorso-palatal contact may vary from language to language, and may also depend on the vocalic context and the syllable position. In some languages there may be a continuum from the more anterior "clear" l to the more posterior "dark" l; in others there may be a categorical distinction. See Recasens and Espinosa (2005) for a useful overview. Taking these articulatory facts seriously, Walsh Dickey (1994, 1997) opts for a representation of laterals that explicitly labels them as featurally both Coronal and Dorsal. See §6 for further discussion.

Figure 31.2 Spectrograms (0–4000 Hz) of the English words led [l e d] and yell [j e l], adapted from Ladefoged (2006)

Acoustically, laterals have strong formant structure with a low F1 and a high F3. F2 is variable depending on place of articulation, and whether /l/ is clear or dark. The spectrogram in Figure 31.2 shows English /l/ in initial and final position. The raised F3 is detectable not only in the immediate vicinity of laterals, but up to five syllables preceding the lateral itself (Heid and Hawkins 2000). The low F1 groups them with the high vowels, and when they vocalize (synchronically, historically, or in acquisition), it is usually to [w/u], possibly because the acoustic spectra for [l] and for [w] are quite similar. See Ohala (1974). There are also possible articulatory reasons for this outcome: see §5.4 for an account along these lines. Laterals are often acquired late, and frequently children go through a phase in which they replace laterals by glides, as in Inkelas and Rose's (2008) study of E, who replaces onset /l/ with [j] and coda /l/ with [w] (see chapter 101: the interpretation of phonological patterns in first language acquisition). It is likely that this is the result of both the motoric difficulty of phasing the complex tongue tip and body gestures required for /l/, and the variable nature of /l/ in English adult speech. The choice of high glides matches their acoustic and articulatory characteristics. See Johnson and Britain (2007) on the difference between the acquisition of clear l (often early) and dark l (often late, which is unexpected if its articulation is biomechanically unavoidable).

2 Types of laterals

A suitable theory must account for the full range of lateral sounds, as summarized in this section, which is based heavily on Ladefoged and Maddieson (1996). Most laterals are coronal, and they may have any of eight coronal places of articulation: dental, alveolar, postalveolar, and palatal, each either apical or laminal (see chapter 12: coronals). All of these can contrast, though the largest set of contrasts within a single language is four, as in some Australian languages, such as Kaititj. Several others have a three-way contrast. Two-way contrasts are quite common. (1) shows the most common voiced lateral approximants by place of articulation; further distinctions are shown by the use of additional diacritics when necessary.

(1) Most common lateral places of articulation
    dental                    l̪
    alveolar                  l
    postalveolar (retroflex)  ɭ
    palatal                   ʎ
    velar                     ʟ

There are no known labial laterals, not even linguo-labial ones formed by the tongue and upper lip, even though they occur in child language, and even though in adult language linguo-labial stops and fricatives (but not laterals) are found in languages of Vanuatu (Maddieson 1987). For some English speakers, [f] and [v] have central closure only, and could thus be called labio-dental laterals, but they do not pattern with the true laterals phonologically. Phonetically, velar laterals are found, and their articulation is clearly velar, not coronal. The back of the tongue makes contact with the velum, and the sides of the tongue are lowered so air can escape past the molars. Their velarity is further attested in some cases by their ability to spread their velarity to adjacent segments. For example, in Zulu, nasals before the velar lateral ejective affricate are [ŋ], not [n]. The examples below come from the Papua New Guinea language Mid-Waghi, which contrasts the two coronals, dental [l̪] and alveolar [l], with the velar [ʟ]:

(2) al̪al̪a  'again and again'
    aʟaʟe  'speak incorrectly'
    alala  'dizzy'

Nonetheless, Blevins (1994) argues that velar laterals always have the phonological feature Coronal. The Papua New Guinean language Yagaria shows a coalescence of a lateral and a glottal stop. The result is a voiceless coronal stop. The lateral in question is phonetically a velar lateral, which Blevins argues to be phonologically Coronal, because the output of the coalescence is [t]. In (3a) the process changes the initial /s/ of the suffix to [p] after /ʔ/, and in (3b) the initial velar lateral /ʟ/ of the suffix changes to [t] after /ʔ/:

(3) a. /igopa-siʔ/   igopasiʔ   'into the land'
       /joʔ-siʔ/    jopiʔ      'into the house'
    b. /igopa-ʟoʔ/  igopaʟoʔ   'on the ground'
       /gipaʔ-ʟoʔ/  gipatoʔ    'at the door'

In Blevins's feature-geometric analysis, phonetically velar /ʟ/ is phonologically Coronal. A default rule adds secondary Dorsality. Most laterals are voiced approximants, but 9.5 percent of languages (Maddieson 2008) have lateral obstruents, such as the voiced fricative [ɮ], and some languages, such as Zulu, contrast the two. Not all laterals are voiced: there are truly voiceless lateral approximants, such as Toda [l̥] or the laterals following voiceless stops in English words like please, and there are many voiceless lateral fricatives, like the Welsh and Zulu [ɬ].

(4)        Zulu               Toda          Welsh
           lálà 'lie down'    kal 'bead'    loːn 'road'
           ɮálà 'play'        kal̥ 'study'   ɬon 'glad'
           ɬàɬá 'cut off'

Ladefoged and Maddieson (1996: 199) point out that it is not always easy to tell voiceless lateral fricatives from voiceless lateral approximants. Acoustically, the approximants have "a lower amplitude of noise, a greater tendency to anticipate the voicing of a following vowel," and lower frequency energy than the fricatives. Lateral affricates [tɬ] and [dɮ] are quite common, as are clicks with a lateral release, shown by ǁ (chapter 18: the representation of clicks):

(5)        Tlingit                        Zulu
           ɬaa 'melt'                     kǁókǁà 'narrate'
           tɬaa 'be big'                  gǁálà 'stride'
           dɮaa 'settle (of sediment)'

Like other clicks, lateral clicks behave phonologically like obstruents, and there do not seem to be any cases where they pattern with lateral approximants. Finally, pre-stopped laterals and pre-lateralized stops are reported in Icelandic and West Norwegian, and also among Australian languages: see Evans (1995) for details. In sum, at least the following possibilities must be accounted for, with sample languages included:

(6)              voiced          voiceless
    approximant  l  (Zulu)       l̥  (Toda)
    fricative    ɮ  (Zulu)       ɬ  (Zulu)
    affricate    dɮ (Tlingit)    tɬ (Tlingit)
    click        gǁ (Zulu)       kǁ (Zulu)

I shall assume that the approximants are sonorants (even if voiceless), and all the others are obstruents. This distinction is based on their acoustics – lateral approximants frequently interact with adjacent vowels, taking on the coloring of both following and preceding vowels – and their phonotactic behavior (see below), particularly their ability to become syllabic and occupy nuclear position. The affricates and clicks I take to be lateral stops, and I shall not have room to discuss how they are distinguished from each other featurally.

3 Phonotactics

Lateral approximants can occur in almost any syllable position. In English, for example, they can be onsets, as in lick, the second part of onset clusters, as in click, codas, as in kill, and even nuclei, as in the last syllable of tickle. They may show allophonic variants depending on syllable position, as in US English light [l] in leaf and dark [ɫ] in feel. It is not entirely clear whether this is categorical, as Scobbie and Wrench (2003) argue, or whether it involves two ends of a continuum, as Sproat and Fujimura (1993) propose. In some languages, flapped versions of laterals are found intervocalically, and in these very short closures it is not always clear whether the release is or is not still lateral. For example, some Japanese speakers use approximant [l] word-initially but flapped [ɾ] intervocalically (Ladefoged and Maddieson 1996). The phonotactic behavior of laterals shows clearly that along with the rhotics they are the most sonorous type of consonant, closest to the vowels. Not only may they act as nuclei in many languages, but they are typically the innermost member of a consonant cluster, so that we find onsets like [gl], but not *[lg]. Many languages require members of a cluster to maintain a minimum sonority distance from each other. Based on this fact, laterals seem to be more sonorous than nasals, since a language may allow Cl onsets but not Cn (e.g. Spanish), but the reverse is not true. They appear to be less sonorous than rhotics, although the evidence here is not quite so striking. Rhotic dialects of English, for example, allow [rl] in codas in words like Carl, but not *[lr]. See Botma et al. (2008) for an interesting discussion of coda liquids within Government Phonology. One particular aspect of their clustering behavior has received much attention. In many languages, [bl] and [gl] are found, but [dl] is not permitted. Flemming (2002) and Bradley (2006) point out that the transitions for the [d] release before [l] are acoustically shifted closer to those for the [g] release, explaining why the contrast is not very robust. Indeed, children find [dl] and [gl] hard to distinguish (Macken 1980; Hale and Reiss 2008; Smith 2010), and as a result may produce puddle as [pʌgəl] but pickle as [pɪtəl]. This line of argument explains why [dl] and [gl] are not in contrast, but not why [gl] is preferred cross-linguistically to [dl]. A common suggestion is that [dl] is banned by the OCP (Obligatory Contour Principle), since both /d/ and /l/ are coronal. However, Bradley (2006) claims that there are languages, such as the Phúhòa dialect of Katu, which allow [dl] but not [gl].
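To make the sonority-distance reasoning above concrete, here is a minimal sketch in Python (my illustration, not part of the chapter); the numeric sonority values, the toy segment classification, and the distance threshold are all illustrative assumptions rather than claims from the literature.

# A toy sonority scale; the numeric values are illustrative assumptions.
SONORITY = {
    "stop": 1, "fricative": 2, "nasal": 3,
    "lateral": 4, "rhotic": 5, "glide": 6, "vowel": 7,
}

# Toy classification of a few segments (assumed, not from the chapter).
SEGMENT_CLASS = {
    "p": "stop", "t": "stop", "k": "stop", "b": "stop", "g": "stop",
    "s": "fricative", "m": "nasal", "n": "nasal",
    "l": "lateral", "r": "rhotic", "j": "glide", "w": "glide",
}

def good_onset(c1, c2, min_distance=3):
    """A two-consonant onset is licensed if sonority rises from C1 to C2
    by at least min_distance steps on the scale."""
    s1 = SONORITY[SEGMENT_CLASS[c1]]
    s2 = SONORITY[SEGMENT_CLASS[c2]]
    return s2 - s1 >= min_distance

print(good_onset("g", "l"))  # True:  stop + lateral, distance 3, cf. [gl]
print(good_onset("l", "g"))  # False: sonority falls, cf. *[lg]
print(good_onset("g", "n"))  # False: distance only 2, the Spanish-type ban on Cn

Lowering min_distance to 2 would license Cn onsets as well, mimicking a language that tolerates smaller sonority rises.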

4 Phonological behavior

We now come to the most controversial aspect of laterals: how are they represented phonologically, or more precisely what distinctive features do they carry? Almost every aspect of their featural representation is under dispute, and here I shall try to summarize some of the major areas of disagreement. For reasons of space I shall limit myself to feature theories that are, in some broad sense, descendants of the Chomsky and Halle SPE system. On the representation of laterals in element/dependency theory, see Anderson and Ewen (1987) and Botma (2004: ch. 2). See also chapter 17: distinctive features and chapter 27: the organization of features for more general discussion of feature theory.


4.1 Sonorancy and voicing

There is general agreement that the vast majority of laterals are sonorants. But how sonorant are they? Based on widely accepted phonological evidence, Parker (2008) posits the following sonority hierarchy for sonorant consonants (see also chapter 49: sonority):

(7) Relative sonority of sonorant consonants
    glides > flaps > laterals > trills > nasals

Using data from Quechua (Peruvian) Spanish and US English, he then sets out to show that this ranking has a measurable phonetic correlate, namely acoustic intensity. Although he finds a very good statistical match for his overall notion, [l] in onset position frequently comes out as louder (and thus more sonorous) than the glides, contra expectations. Parker suggests that this is because the glides may become more "obstruent-like" in onset position, especially in Spanish. According to Harris and Kaisse (1999), this is rather pervasive. In Argentinian Spanish it is carried to an extreme, as shown by alternations such as le[j] 'law' vs. le.[ʒ]es 'laws'. But even in Castilian, the onset glide [j] frequently hardens to the non-strident palatal fricative [ʝ] or even to its non-continuant counterpart. Once we look outside the prototypical approximant laterals, we quickly encounter laterals that are not sonorant at all. Examples of obstruent laterals include not only the obvious fricatives, affricates, and clicks, but also languages in which [l] patterns with the voiced obstruents, such as Southern Min, which has [l] instead of [d]. Modern Southern Min completely lacks [d]. Not only is [l] the modern reflex of historical *d, but underlying /p t k/ voice to [b l g] foot-internally (Hsu 1996), and /b l g/ nasalize to [m n ŋ] before nasal vowels. In some Bantu languages, like Ikalanga, historical *d has become /l/, and synchronically under velarization /l/ becomes the stop [gw], suggesting that it may still be an obstruent. (A reviewer correctly notes that in both these cases there could be intermediate steps involved, and /l/ may not be underlyingly obstruent.) There are even a few languages in which all the laterals appear to be obstruents: Tlingit, for example, has two lateral fricatives and three lateral affricates, but no lateral approximants. Finally, turning to voicing, like other sonorants laterals are typically voiced, but voiceless versions are found, for example in Toda. The theoretical implications of the fact that most (but not all) laterals are voiced sonorants will be discussed in §7.

4.2 Continuants or not?

There has been little agreement as to whether ordinary approximant laterals behave like stops or like continuants, and thus whether they are [±continuant]. Alternations with stops are common, but alternations with fricatives do not seem to be found. Nonetheless, Holt (2002) argues that they are both [+continuant] and [−continuant], and Mielke's (2005) survey, as his title "Ambivalence and ambiguity in laterals and nasals" makes clear, finds an almost even 50/50 patterning as [±continuant]. He attributes this to their ambiguous phonetic cues, and suggests that it is precisely phonetically ambiguous segments that are likely to behave ambivalently in the phonology. I shall return to this issue in §9.


Mielke gives these examples of each pattern. In Basque, laterals (but not rhotics) pattern with nasals in assimilating in place to a following consonant. The class of segments that undergoes the rule is [+sonorant, −continuant]. Mielke points out that there is no way to define this class other than by using [−continuant] to include /l/ and the nasals, but not /r/.

(8) /l/ patterning with non-continuants in Basque (Hualde 1991: 96)
    egu[m] berri   'new day'
    egu[n̪] denak   'every day'
    egu[ɲ] ttiki   'small day'
    egu[ŋ] gorri   'red day'
    ata[l̪] denak   'every section'
    ata[ʎ] ttiki   'small section'

(I will suggest an alternative explanation for the Basque case in §8.3.) Conversely, in Finnish, stems may end in the five coronals /t s n r l/. Before /n/-initial suffixes, [−continuant] /t/ undergoes total assimilation, but [+continuant] /s r l/ trigger total assimilation.

(9) /l/ patterning with continuants in Finnish (Sulkala and Karjalainen 1992: 387–388)
                 active participle   active potential
    a. /avat/    [avannut]           [avannee]          'open'
    b. /pur/     [purrut]            [purree]           'bite'
       /nous/    [noussut]           [noussee]          'rise'
       /tul/     [tullut]            [tullee]           'come'

Mielke quotes Kaisse (2000) on the issue. She points out that the status of laterals hinges on whether [−continuant] is defined in terms of complete occlusion in the oral tract ("vocal tract" in SPE; Chomsky and Halle 1968: 318) or complete occlusion in the mid-sagittal region of the oral tract (see also chapter 13: the stricture features). Laterals have complete occlusion only in the mid-sagittal region, not elsewhere, so they qualify as [−continuant] only under the latter definition. Mielke suggests that this makes laterals phonetically ambiguous, and that the boundary for the natural classes of [+continuant] and [−continuant] may vary cross-linguistically, placing laterals in different classes in different languages. The implication is that feature values may not be universally fixed for such segments, but may "emerge" on the basis of observable phonetic properties. One might also note that other rationales for cross-linguistic differences in feature specifications have been advanced, particularly the system of contrasts in the language in question. See Morén (2006) for an account on these lines for Serbian laterals.

5 Alternations involving laterals

When laterals alternate with other segments, it is usually with ones that are minimally different either articulatorily, acoustically, or both. Since most laterals are coronals, a small change in the type of closure so that it is complete will produce a coronal stop. If the lateral finds itself subject to nasal spreading, so that the velum is lowered, a nasal sonorant is the likely result. If the tongue shape is inverted, so that the closure is made with the sides but not the midline of the tongue, then a rhotic is formed. And if the closure is eliminated, but the tongue body remains high, a high vocoid results. Some of these changes also produce a sound that is acoustically still quite similar to the lateral, meaning that the change may have originally been driven in whole or in part by a misperception of the signal. All of these are common changes, and are discussed below.

5.1 Stops

In some languages, /l/ alternates with stops; more specifically, it appears to replace the voiced coronal stop /d/ in some contexts. In Palenquero Spanish (Piñeros 2003), /d/ is in free variation with [l] in some words:

(10) /dedo/ → [ˈle.lo] ~ [ˈde.ðo] 'finger'

In some Bantu languages, like Ikalanga, historical *d has become /l/, but under velarization /l/ becomes the stop [gw], suggesting that it may still be an obstruent. Conversely, in Southern Min (which has /l/ instead of /d/ in its phoneme inventory), if syllable-final /t/ ends up intervocalically (especially before an unstressed vowel), it voices and becomes not [d] but [l] (although descriptions vary, and as it is a brief flap or tap, it is not entirely clear how lateral its articulation is; see Hsu 1996).

5.2 Nasals

Historically, Cantonese had two distinct phonemes /l/ and /n/. Both could occur syllable-initially, as in the contrast between [lei] 'reason' and [nei] 'you'. Only /n/ could occur syllable-finally. However, in the last 50 years a gradual sound change has been taking place, and is now nearly complete for younger speakers (Bauer and Benedict 1997). Initial /n/ is being replaced by /l/, so that 'you' and 'reason' are now both pronounced as [lei]. As a result, [l] and [n] can be treated as allophones of a single phoneme, with [l] as the syllable-initial variant and [n] as the syllable-final one. However, actual alternations do not exist, because the language has essentially no resyllabification. In the Min dialects of Chinese (including Southern Min, mentioned earlier), voiced stops and nasals are in complementary distribution, with [b g] occurring only before oral vowels and [m ŋ] only before nasal vowels. These data are from the Chaoyang dialect:

(11) [biːʔ11] 'hide'    [mẽː53] 'fast'

The language has no alveolar voiced [d], but the reflex of historical *d is [l] before oral vowels and [n] before nasal vowels. As a result [l] and [n] are in complementary distribution, and this is productive. In Chaoyang onomatopoeia, words are reduplicated and one onset is replaced by [l]. However, if the vowel is nasal it is replaced instead by [n]. See Yip (2001) for details.

(12) tsi tsiau liau  'sound of a crowd talking'
     jãh njãh        'sound of door opening'

5.3 Rhotics

Laterals and rhotics have a very close affinity to each other and, as we mentioned earlier, some languages have a single liquid that varies between the two. (On which feature distinguishes the two, see §4.2 on [continuant] and §6 on [lateral].) In Korean, [l] occurs only in codas, and [r] only in onsets. Most authors assume they are reflexes of a single phoneme, here given as /L/. These examples are taken from Iverson and Sohn (1994).

(13) a. /paL-cən/  [palc'ən]  'development'
        /mi-taL/   [midal]    'shortage'
     b. /saLam/    [saram]    'person'
        /Ladio/    [radio]    'radio'

A more unusual alternation is found in Yanggu Chinese (Yip 1992; Yu 2007). Yanggu has a diminutive suffix which usually surfaces as the rhotic [r], as in (14a), matching its historical source as a rhotic suffix. However, if the word begins with a dental or alveolar, it also adds an infix to form an onset cluster, and this infix surfaces as a retroflex lateral [ɭ], as in (14b). The intervening vowels are also rhotacized (not shown).

(14)    root   diminutive
    a.  pu     pur      'cloth'
        ke     ker      'cover'
    b.  tao    tɭaor    'knife'

Onset clusters are not found elsewhere in Yanggu: the lateral is found only as part of the diminutive formation process. Yu argues that it arises from a misperception of the drastic transition from an anterior consonant to a retroflex vowel: see Yu (2007: 146) for details.

5.4 Glides or vowels

One of the most widespread of alternations is l-vocalization. It is found in many dialects of English, as when milk becomes [miwk] or table [teːbu], and alternations are present in words like feel [fiw] vs. feeling [fiːlɪŋ]. Although commonly transcribed as [w] in coda position and [u] in nucleus position, lip rounding is often absent. Scobbie and Wrench (2003) show clearly that l-vocalization is categorical for many English speakers. Johnson and Britain (2007) draw attention to the articulatory basis of this change, and propose a markedness account in OT. Vocalization happens in coda position, which means its source is always dark l (not clear l). They extensively document the historical and geographical distribution of dark l in English, and show that l-vocalization correlates with the presence of dark l. Recall that dark l has two gestures, with Dorsal coming first in coda position. They suggest that vocalization is driven by markedness reduction, leaving only the more vocalic Dorsal gesture (see chapter 75: consonant–vowel place feature interactions). A rather different but equally well-known case comes from the Cibaeño dialect of Spanish, in which both /l/ and /r/ become the palatal glide [j] in coda position (Guitart 1985; Harris 1985; Alba 1988).

(15) celda  [sejda]  'cell'
     cerda  [sejda]  'bristle'

Since the gesture that is preserved here is the Coronal one, Johnson and Britain's approach cannot deal with these facts unmodified. However, there is evidence that, unlike English coda /l/, Spanish coda [l] is clear, not dark, with a smaller Dorsal gesture. This might then explain why /l/ vocalizes as [j], not [w], although the picture is less clear if one studies a range of Spanish and Portuguese dialects. See Quilis et al. (1979) and Recasens and Espinosa (2005) for details. Many dialects of Spanish, especially in Latin America, have replaced the palatal lateral [ʎ] by the glide [j]. This sound change is known as yeísmo. (In Buenos Aires Porteño Spanish, this change has gone one step further in a change called zheísmo, with [j] spirantizing to [ʒ]. See Harris and Kaisse (1999) for details.)

(16) llorar  [ʎ]orar  [j]orar  'to cry'
     ella    e[ʎ]a    e[j]a    'she'

In Serbian, rather unusually, /l/ vocalizes to [o]. See Morén (2006) for an interesting account within a Parallel Structures Model. Historically, vocalization of laterals is also common. In Germanic, compare English old to Dutch oud. In Polish, dark [ɫ] changed to [w] everywhere, even in onsets. In Romance, Latin *l has developed variously to [w/u], e.g. in French (caldus to chaud), or to [j/i] (in both onset and coda), as in Italian fiore (from Latin flos) and Portuguese muito (from multus). The inverse of vocalization of laterals, the lateralization of glides, seems to be much rarer (chapter 15: glides). Li (1974) documents a case in which *j > [l] in some Formosan languages, and a voiced coronal fricative [z ð] in others. Interestingly, in no case does *w > [l], even though in l-vocalization /l/ becomes [w] more often than [j]. This suggests that there must be a coronal gesture already present (as there is for [j]) for the creation of a novel lateral.

6 The feature [lateral]: Is it necessary?

Phonologists routinely use a feature [lateral] to distinguish /l/ from /r/, but some linguists (Spencer 1984; Brown 1995; and most recently Walsh Dickey 1997) have argued that it can be dispensed with. If a language has [l] but no [r], one might define [l] by the features [+consonant, +sonorant, −nasal], and [lateral] would be redundant. However, if [l] contrasts with [r], as it does in many languages, this will not suffice. They could perhaps be distinguished by the feature [continuant], with /r/ as [+continuant] and /l/ as [−continuant], but this is not without problems (see §4.2, and also van der Weijer 1995). Walsh Dickey (1997: 55) distinguishes them by means of Place features, with /l/ having complex Corono-Dorsal place and /r/ non-primary Laminal. Dissimilation of /l/ to [r] is loss of a secondary Dorsal articulation. The lateral articulation of [l] is, for her, "a necessary phonetic consequence of a phonological Corono-Dorsal complex place structure." It is worth having a slightly closer look at her arguments and her proposal. Walsh Dickey points out, correctly, that the strongest arguments for a feature come from its role in defining natural classes. The many languages with only one lateral can of course never offer this type of evidence. She lists three types of potentially significant evidence from languages with more than one lateral: (i) co-occurrence restrictions on different types of laterals; (ii) positional restrictions which cover all types of laterals in a language; and (iii) phonological processes which need to refer to all laterals, both sonorant and obstruent. She concludes that no such evidence exists, and I have also not encountered any convincing cases. However, she pays a high price for the absence of [lateral]. In particular, she has to greatly complicate the internal structure of the Coronal node by including Laminal (which in turn may or may not be [dental]), Apical (which in turn may or may not be [back]), and secondary Dorsal (which may have yet another Dorsal specification below it, to account for velarized "dark" laterals). Finally, any other sound which might have been thought to involve secondary Dorsal articulations, such as velarized coronal consonants, would require some extra specification if it did not have a lateral release. I therefore tentatively conclude that the feature [lateral] is still useful, and probably necessary. Positive evidence for the feature [lateral] comes from its active role in the phonology of many languages, despite Walsh Dickey's claims to the contrary. In Eastern Catalan (and Sanskrit), for example, [lateral] spreads onto nasals to create a lateral nasal: /nl/ → [l̃l], as in /son les tres/ → [sol̃les tres] (Mascaró 1976). There are well-known phonological processes that involve only [l] and [r], in which they either dissimilate, as in Latin, where the suffix /-alis/ surfaces as [-aris] after a lateral root: nav-alis vs. milit-aris (Steriade 1987), or assimilate, as in Sundanese, where the infix /-ar-/ surfaces as [-al-] after a preceding /l/: [k-ar-usut] vs. [l-al-əga] (see Cohn 1992 for details). Several of these processes are long distance, and can cross over other Coronals, making a Place feature account tricky. I conclude that the feature [lateral] cannot be dispensed with. I should note that for the remainder of this chapter I shall treat [lateral] as a privative feature, but the results would not be materially affected if it were to turn out to be binary, as Steriade (1987) argues.

7 Feature geometry and the feature [lateral]: Two competing models

It has been proposed that distinctive features are related to each other by a hierarchical feature geometry (Sagey 1986; Clements and Hume 1995; and many others; see also chapter 27: the organization of features). For example, the features Labial, Coronal, and Dorsal are dominated by a Place node. In such a model, we must then ask where the feature [lateral] is located. Early proposals spent little time worrying about the placement of [lateral], and tended to put it directly under the root node. Clements and Hume (1995: 293) opt for this, but admit that its position is open to dispute. Subsequently, two competing detailed proposals for the placement of [lateral] have been put forward, and are shown below: it might be under the Place feature Coronal, or under a node related to its sonorant nature, called Sonorant Voicing (SV), which is argued to be responsible for voicing in sonorants but not (most) obstruents.

(17) a. Under Coronal (Blevins 1994)

              Place
            /   |   \
       Labial Coronal Dorsal
                |
            [lateral]

     b. SV model (Rice and Avery 1991)

         Sonorant Voicing
            /        \
        [nasal]   [lateral]

These proposals were motivated by two observations about laterals: they are normally Coronal (hence the Coronal proposal), and they are normally voiced sonorants (hence the SV proposal). But problems arise because, as we have seen, these are tendencies rather than universals. The problems become even more vexing when we look at other predictions of these two proposals. In particular, [lateral] requires the presence of its superordinate node, and anything which affects that node (such as spreading it, delinking it, or deleting it) will also affect [lateral]. The trouble is, the evidence is contradictory.
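The dependency logic at stake here can be made concrete with a small data-structure sketch in Python (my illustration, not the chapter's; the node names and the toy segment are assumptions). Delinking a superordinate node removes all of its dependents, which is exactly why the SV model predicts that devoicing a sonorant should take [lateral] with it:

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def delink(self, child_name):
        # Remove a child node; all of its dependents go with it.
        self.children = [c for c in self.children if c.name != child_name]

    def features(self):
        # Collect every feature dominated by this node.
        out = [self.name]
        for c in self.children:
            out.extend(c.features())
        return out

# A toy /l/ under the SV model: [lateral] depends on Sonorant Voicing.
root = Node("root", [
    Node("Place", [Node("Coronal")]),
    Node("Sonorant Voicing", [Node("[lateral]")]),
])

print(root.features())
# ['root', 'Place', 'Coronal', 'Sonorant Voicing', '[lateral]']

# Devoicing modeled as delinking SV predicts loss of laterality:
root.delink("Sonorant Voicing")
print(root.features())  # ['root', 'Place', 'Coronal'] -- [lateral] is gone

The Koyukon facts in §7.2.2, where devoiced laterals keep their laterality, run counter to precisely this kind of prediction.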

7.1 Coronal node model

Let us look at these predictions in more detail, starting with the predictions of the Coronal node theory (see Table 31.1).

Table 31.1 Predictions of the Coronal node theory ([lateral] under Coronal?)

Prediction | FOR | AGAINST
All laterals should be Coronal | Laterals are usually Coronal: many languages | Placeless laterals: Cambodian; velar laterals: Yagaria
Place spreading should spread [lateral] | Yes: Selayarese | No: Chukchee
When [lateral] spreads, it should seek Coronal targets | Yes: Teralfene Flemish, Yanggu Chinese | Laterals skipped by harmony that targets Coronals: Tahltan
Place spreading onto laterals should remove [lateral] | Yes: Moroccan Arabic, Cuban Spanish | No: English, Basque
Loss of Place contrasts should eliminate [lateral] | No cases known | No: Caribbean Spanish

7.1.1 Supporting evidence for the Coronal model

The evidence in support of this theory is well known. Additional data for this and subsequent sections can be found in Yip (2004, 2005). First, as we have already

seen, most laterals are coronal. Secondly, place spreading in languages like Selayarese also spreads laterality, as can be seen below:

(18) Selayarese place assimilation (Mithun and Basri 1986)
     anna[ŋ]         'six'
     anna[mp]oke     'six spears'
     anna[ɲj]arang   'six horses'
     anna[nt]au      'six persons'
     anna[nr]upa     'six kinds'
     anna[ll]oka     'six bananas'

Thirdly, in some languages when [lateral] spreads it seeks out specifically Coronal targets. Yanggu Chinese has a very unusual affix (Yip 1992). It may be rhotic or lateral, or both, surfacing variously as [r l ɻ ɭ], where the last two are retroflexed. All that concerns us here is that the lateral variants only surface in words with surface coronals, and they are attracted to the rightmost coronal. This mobility suggests that [lateral] is a floating feature that docks onto coronals:

(19) a. No coronals:
        xou      xour     'monkey'
        pe       per      'card, board'
     b. Initial coronal:
        tu       tlur     'rabbit'
        na       nlar     'to press'
     c. Initial and final coronals:
        tsʰuən   tsʰuəɭ   'village'

Fourthly, if laterals are the targets of rules that spread the Place node, their laterality should be lost, as is the case in educated Havana Spanish (Guitart 1976; Harris 1985; Padgett 1991: 228). Liquids assimilate in Place, Manner, and nasality to the following consonant. Before stops, they always remain voiced, but before voiceless fricatives they devoice. In all cases they lose their laterality:

(20) albañil    a[bb]añil   'mason'
     tal droga  ta[dd]roga  'such a drug'
     pulga      pu[gg]a     'flea'
     el pobre   e[bp]obre   'the poor man'
     el tres    e[dt]res    'the three'
     tal mata   ta[mm]ata   'such a shrub'
     el fino    e[ff]ino    'the refined one'

7.1.2 Counterevidence to the Coronal model

It is also clear that there is considerable counterevidence to this proposal from cases like those in the right-hand column of Table 31.1. Firstly, not all laterals are Coronal. We have already seen that velar laterals exist. An example of phonologically placeless laterals comes from Cambodian (Nacaskul 1978). There are co-occurrence restrictions on identical Place features, but they do not treat /l r/ as Coronal, even though Place restrictions cross-cut obstruents and sonorants, stops and fricatives, nasals and glides. Instead, /l r/ behave like [h ʔ] in co-occurring freely with all other sounds. Somewhat similar facts hold in Javanese, where /l r/ also fail to trigger nasal assimilation or coalescence, unlike all other obstruents, nasals, and glides. See Yip (1989) for details. As a final note, these laterals are of course articulated with the tip or blade of the tongue, so this would have to be viewed as the phonetic implementation of a phonologically placeless lateral. In a model committed to full output specification, like classic OT, this poses a challenge. Secondly, there are languages like Chukchee where laterals as triggers of Place assimilation spread only Coronal, not [lateral], as can be seen in the last example below:

(21) Chukchee (Clements and Hume 1995: 270)
     təŋ-əɫʔ-ən (/teŋ-/)     'good'
     tan-tsai                'good tea'
     tam-pera-k              'to look good'
     tan-ran                 'good house'
     tam-vairgin             'good being'
     ten-jəɫqət-ək           'to sleep well'
     tam-waːərː-ən           'good life'
     but ten-leut, *tel-leut 'good head'

Thirdly, there are languages where harmonies that target all other Coronals skip the lateral series, as if they did not have Coronal nodes at all. Tahltan (Shaw 1991) has five series of coronals, shown here by their voiced affricate members: /d dl dð dz dž/. At each place of articulation, there are voiced, voiceless, and glottalized members, as in [dz ts ts']. At the last four places, there are also voiced and voiceless fricatives, as in [z s]. Within a word, the last three series harmonize with the rightmost participant. In the following data, the first dual subject prefix /θi(d)/ stays [θ] before non-coronals or the /d/ and /dð/ series, but becomes [s] before the /dz/ series, and [š] before the /dž/ series:

(22) θiːtθædi     'we ate it'
     nisiːt'aːts  'we got up'
     ušidže       'we are called'

The lateral /dl/ series are not triggers or targets, and are transparent: /naθibaːtɬ/ 'we hung it'. Fourthly, in Basque, as we have already seen in §4.2, there is general Place spreading onto sonorants, but it does not remove laterality, as shown in (23) (some of these data appeared earlier in (8)). Nasals assimilate to all places of articulation, as can be seen in the left-hand column below. Laterals also assimilate before coronals, but are unchanged before other places (Hualde 1991).

(23) egu[n]a        'the day'      ata[l]a        'the section'
     egu[m] berri   'new day'      ata[l] berri   'new section'
     egu[ɱ] fresku  'cool day'     ata[l] fresku  'cool section'
     egu[n̪ d̪]enak   'every day'    ata[l̪ d̪]enak   'every section'
     egu[ɲ] ttiki   'small day'    ata[ʎ] ttiki   'small section'
     egu[ŋ] gorri   'red day'      ata[l] gorri   'red section'

Similar facts hold in Tamil (Beckman 1998), and in English: we[l̪θ] wealth, but whe[lk] whelk, although the English case may be purely phonetic, since it is not structure-preserving. The interesting fact is that in Basque and all these other languages laterals do not lose their laterality as the targets of Place assimilation, contra the predictions of the Coronal model. Finally, there are languages in which Place contrasts are lost – a phenomenon described by Trigo (1988) and others as the delinking of the Place node – but laterality survives. In Caribbean Spanish (Trigo 1988: 71) place features are neutralized in codas: /d/ deletes, /s/ becomes [h], and all nasals become velar; /r/ and /l/ are unchanged.

(24) a. /berdad/  → [berða]   'truth'
     b. /ines/    → [ineh]    'Ines'
     c. /album/   → [albuŋ]   'album'
        /tren/    → [treŋ]    'train'
        /desdeɲ/  → [desdeŋ]  'disdain' (optional)
     d. /tonel/   → [tonel]   'barrel'
        /par/     → [par]     'pair'

The data in this section are problematic for the Coronal model.

7.2 The sonorant voicing (SV) model

What about the sonorant voicing (SV) theory? Rice and Avery (1991) base their claims on three main arguments. Firstly, the SV node is the target node when [lateral] spreads to other sonorants. Secondly, a process of de-sonorantization can be seen as delinking of the SV node. Thirdly, rules which spread both nasality and laterality can be viewed as spreading the SV node (see Table 31.2).

Table 31.2 Predictions of the SV model ([lateral] under Sonorant Voicing?)

Prediction | FOR | AGAINST
All laterals should be voiced sonorants | Laterals are usually voiced sonorants: many languages | Voiceless laterals: Tahltan; obstruent laterals: Min, Bantu; affricate laterals: Tahltan, Zulu
SV spreading should spread [lateral] | Yes: Sanskrit | No: Polish
When [lateral] spreads, it should seek SV targets | Yes: Toba Batak | Laterals skipped by harmony that targets sonorants: no cases known
SV spreading onto laterals should remove [lateral] | Yes: Itsekiri | No: English
Loss of SV removes [lateral] | Yes: Yagaria | No: Koyukon, Angas


7.2.1 Supporting evidence for the SV model

Apart from the observation that most laterals are voiced sonorants, there are four other types of evidence in support of the proposal. Firstly, SV spreading sometimes spreads [lateral], as in Sanskrit. Before laterals, all obstruents voice, and if they are coronal they also lateralize (Whitney 1889: 54; no examples of non-coronals are given):

(25) tat labhate  →  tal labhate
     trin lokan   →  tril̃ lokan

Secondly, [lateral] spreading sometimes seeks SV targets to attach to, as in Toba Batak (Hayes 1986). Coronal sonorants assimilate to a following liquid:

(26) Spreading      No change
     nr → rr        ln  rn  lr  mr  ml  ŋl  ŋr  rr  ll  nn
     nl → ll
     rl → ll

If what spreads is the SV node, and if [lateral] is its dependent, then laterality will be carried along too. Thirdly, when SV spreads onto a lateral target, replacing the original SV specification, the original lateral specification may be lost. The following facts from Itsekiri (Nigeria; Piggott 1991, cited in Brown 1995: 64) are often cited, and very similar facts hold in Southern Min Chinese and in Yoruba.

(27) lã → nã  'ask the price of'

This type of nasal harmony can be seen as spreading the superordinate SV node, carrying nasality with it, and removing the original SV node, laterality and all. Fourthly, if an SV node delinks, [lateral] may be removed too, as in Yagaria. Here a coalescence process removes voicing and converts a sonorant [l] to an obstruent [t], and in the process laterality also goes.

(28) gipaʔ-ʟoʔ → gipatoʔ  'at the door'

For further data on Yagaria, see §2, where these data are discussed in a different context.

7.2.2 Counterevidence to the SV model

There are various types of counterevidence to the SV model, as listed in the right-hand column of Table 31.2. The first problem is the existence of laterals that are not voiced sonorants, being either voiceless, obstruent, or both. But there are other problems. Firstly, SV spreading does not always spread [lateral]. The following data show a post-lexical process of voicing assimilation in Kraków and Poznań Polish (Dorota Głowacka, personal communication; Madelska and Witaszek-Samborska 1998):

(29) brat → bra[d] Doroty/Natalji/Iwony/Luizy  'brother of X'
     syn  → sy[n] Luizy                        'son of Luiza'

Similar facts hold in all dialects between verbal prefixes and roots:

(30) s-kɔɲt͡ʂɨt͡ɕ 'to end'    z-bit͡ɕ 'to break'    z-lit͡ʂɨt͡ɕ 'to count'

In the SV model, voicing in sonorants is represented by the presence of an SV node, and so this must be the active node that spreads. If [lateral] is under this node, it too should spread, but it does not, contra the predictions of the SV model. Secondly, SV spreading into lateral targets sometimes leaves the laterality intact. This is the case in English, where liquids after voiceless aspirated stops become voiceless:

(31) [bl]eak   [br]eam   [gl]eam   [gr]een
     [pl̥]ease  [pr̥]een   [kl̥]ean   [kr̥]eam

If sonorant voicing is denoted by the presence of an SV node, the devoicing would presumably mean that the SV node had been delinked, and one would then expect loss of [lateral] as well, but no such thing happens. Thirdly, if the SV node of a lateral delinks for other reasons, rendering it voiceless, laterality should also be lost, but this is not the case in the Athapaskan language Koyukon (Rice 1994), which devoices syllable-final sonorants and continuants, including /l/. For the lateral, the result is a voiceless lateral fricative [ɬ]. Final stops are plain voiceless unaspirated.

(32) [nəːælə]   'your (sg) trap'       [ɬæɬ]    'trap'
     [səʔɔːəʔ]  'my snowshoes'         [ʔɔɬ]    'snowshoes'
     [nizuni]   'that which is good'   [nizun]  'it is good'

Under the SV hypothesis, where [lateral] is under SV and devoicing of sonorants means removal of the SV node, laterality should also disappear, but it does not, contra the predictions of the SV model. In the case of [lateral], then, the need created by theories of universal feature geometry to commit to a single location for the feature in all languages creates problems. Luckily, there are alternatives to these two proposals. Hegarty (1989) and Bao (1992) argue that [lateral] is simply a dependent of the Root node. Yip (2004: 5) goes further, and agrees with Padgett (1995, 2002) that (at least with respect to the behavior of [lateral]) features can be treated as an unstructured set of which [lateral] is a member, and that feature geometry as such is redundant. The next section lays out this proposal.

8 A feature co-occurrence constraint approach

8.1 Inventories

Suppose we capture the fact that the least marked lateral is a coronal sonorant by means of two universally fixed hierarchies of feature co-occurrence markedness constraints.

(33) a. *LateralObstruent >> *LateralSonorant
     b. *LateralLabial >> *LateralDorsal >> *LateralCoronal

The idea is that these constraints are violated if a segment bears both features, at least if both are primary articulations. For example, the intention is that *LateralLabial bans bilabial or labio-dental segments with a lateral release. If the labiality is secondary, as in [lʷ], it may be acceptable. Both fixed rankings are phonetically grounded. Lateral release means the airflow is not easily obstructed, so lateral obstruents are more marked than lateral sonorants (33a), and lateral release is easiest if only the tip of the tongue is used to make closure, so coronals are the least marked laterals (33b). Placing the faithfulness constraints at different points in these hierarchies gives us a typology of segmental inventories like those below:

(34) Typology of lateral place of articulation
     a. *LatLab >> *LatDors >> *LatCor >> Faith
        Either no laterals (Maori), or placeless ones (Cambodian)
     b. *LatLab >> *LatDors >> Faith >> *LatCor
        Common type, with Coronal laterals only (English)
     c. *LatLab >> Faith >> *LatDors >> *LatCor
        New Guinea type, with velar and coronal laterals (Mid-Waghi), or perhaps palatal laterals
     d. Faith >> *LatLab >> *LatDors >> *LatCor
        Laterals at all POAs (unattested)

(35) Typology of lateral sonorants and obstruents
     a. *LatObs >> *LatSon >> Faith
        Languages with no laterals
     b. *LatObs >> Faith >> *LatSon
        Common language type, with sonorant laterals
     c. Faith >> *LatObs >> *LatSon
        Languages with both obstruent and sonorant laterals

In this way, we can correctly describe the inventories of laterals found in the world’s languages, with the possible exception of Tlingit, which is reported to have lateral obstruents but no lateral sonorants. The complete absence of labial laterals also remains unexplained.
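The logic of these factorial typologies lends itself to a quick computational check. The following sketch is my own illustration, not part of the original chapter, and all names in it are invented for the example: it slides Faith through the fixed place hierarchy of (33b) and prints the lateral inventory each ranking permits, reproducing (34a–d).

```python
# Minimal sketch of the factorial typology in (34): Faith is placed at each
# point in the universally fixed hierarchy *LatLab >> *LatDors >> *LatCor.
# A lateral at place P can surface only if Faith dominates *Lat{P}.

FIXED = ["*LatLab", "*LatDors", "*LatCor"]   # fixed ranking from (33b)
PLACE = {"*LatLab": "labial", "*LatDors": "dorsal", "*LatCor": "coronal"}

def surviving_laterals(faith_pos):
    """Laterals whose markedness constraint ranks below Faith survive."""
    return [PLACE[c] for c in FIXED[faith_pos:]]

for pos in range(len(FIXED), -1, -1):        # runs through (34a) ... (34d)
    ranking = FIXED[:pos] + ["Faith"] + FIXED[pos:]
    inventory = surviving_laterals(pos) or ["no (placed) laterals"]
    print(" >> ".join(ranking), "->", ", ".join(inventory))
```

The same few lines of logic, run over {*LatObs, *LatSon}, yield the three grammars in (35).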

8.2 The spreading of [lateral]

How can these mini-grammars characterize the behavior of laterals shown in §7? First, let us consider the behavior of [lateral] in spreading processes, such as in Place assimilation or perhaps the spreading of SV under pressure from the Syllable Contact Law. Let us assume that assimilation involves a violation of the Ident family of faithfulness constraints, such as Ident-Place and Ident-Son,


under pressure from higher-ranked constraints such as Share-F and Syllable Contact. I start with cases in which [lateral] is (potentially) spread by the process in question. Any assimilation process that creates the ordinary sonorant Coronal lateral [l] from an underlying non-coronal or non-sonorant will violate at least one of Ident-Place and Ident-Son (and of course also Ident-Lat). The ranking of these constraints with respect to the constraints causing assimilation, here abbreviated as Share-F, will determine which segments may undergo the process. If Ident-Son >> Share-F, targets must be already sonorant. If Ident-Place >> Share-F, targets must be already Coronal. If the output is always Coronal and sonorant, *LatObs and *LatDors are always high-ranked, and *LatCor and *LatSon are always low-ranked. The following typology results:

(36) a.  Target must be sonorant:
         *LatObs, Ident-Son >> Share-F >> *LatSon
     a’. Target need not be sonorant, but output will be:
         *LatObs >> Share-F >> Ident-Son, *LatSon
     b.  Target must be Coronal:
         *LatDors, Ident-Place >> Share-F >> *LatCor
     b’. Target need not be Coronal, but output will be:
         *LatDors >> Share-F >> Ident-Place, *LatCor

By combining one of the sonorancy rankings with one of the Place rankings, we get the following mini-grammars:

(37) a & b.   Target must be sonorant and Coronal: Toba Batak
              *LatObs, *LatDors, Ident-Place, Ident-Son >> Share-F
     a & b’.  Target must be sonorant, but need not be Coronal: Selayarese
              *LatObs, *LatDors, Ident-Son >> Share-F >> Ident-Place
     a’ & b.  Target must be Coronal, but need not be sonorant: Sanskrit, Yanggu
              *LatObs, *LatDors, Ident-Place >> Share-F >> Ident-Son
     a’ & b’. Target need not be Coronal or sonorant, but output will be both: ?
              *LatObs, *LatDors, Share-F >> Ident-Son, Ident-Place

In all these cases, *LatCor and *LatSon are low-ranked, since these languages have underlying laterals, and also allow the creation of new ones from some subset of targets. The more interesting cases are those where underlying laterals survive, but new ones are not created, as in Chukchee and Polish. The central idea is that this will arise in any ranking where *LatSon and *LatCor (and of course also *LatObs and *LatDors) are ranked above Share-F, blocking laterality from surfacing at all on the target. To be more precise, the difference between the grammars of Selayarese (in which new laterals are created by Place assimilation) and Chukchee (in which they are not) lies in the relative ranking of *LatSon and Share-F, where Share-F requires adjacent consonants to have identical values for as many features as possible. It will be held in check by markedness constraints such as *LatSon, and faithfulness constraints such as Ident-Voice (in a language without voicing assimilation). In Selayarese, Share-F is ranked above *LatSon, and so it can create new lateral sonorants. In Chukchee, Share-F is ranked below *LatSon, so all the other Place


features are shared, but not laterality, and the segment remains a nasal. For full details, see Yip (2003, 2005). We see then that variability in the spreading behavior of laterals in assimilation is handled without great difficulty by these minimally different OT grammars.
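To see the Selayarese/Chukchee difference mechanically, here is a toy evaluation (my own sketch with invented candidates and violation counts, not an analysis drawn from Yip 2003, 2005): standard OT candidate filtering over a nasal-plus-lateral cluster under the two rankings.

```python
# Toy OT evaluation for a /n+l/ cluster: does Place assimilation also share
# laterality? Violation profiles are illustrative; the other Place features
# are assumed to be shared by both candidates.

CANDIDATES = {
    "[ll] (new lateral created)": {"Share-F": 0, "*LatSon": 1},
    "[nl] (laterality blocked)":  {"Share-F": 1, "*LatSon": 0},
}

def evaluate(ranking):
    """Filter candidates constraint by constraint, highest-ranked first."""
    pool = dict(CANDIDATES)
    for constraint in ranking:
        best = min(profile[constraint] for profile in pool.values())
        pool = {c: p for c, p in pool.items() if p[constraint] == best}
        if len(pool) == 1:
            break
    return next(iter(pool))

print("Selayarese (Share-F >> *LatSon):", evaluate(["Share-F", "*LatSon"]))
print("Chukchee (*LatSon >> Share-F):  ", evaluate(["*LatSon", "Share-F"]))
```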

8.3 Laterals as the targets of spreading

I now turn to cases where [lateral] is (potentially) lost on the target of assimilation, or in neutralization. Whether or not it is lost depends on the relative ranking of Share-F and Ident-Lat, as shown below:

(38) a. Loss of [lateral] under (Place) assimilation: Educated Havana Spanish
        Share-F >> Ident-Lat, Ident-Place
     b. Retention of [lateral] under (Place) assimilation: Basque
        Ident-Lat >> Share-F >> Ident-Place

A parallel analysis can be constructed for the cases in which SV spreading does or does not obliterate laterality. The Basque case deserves a little more attention, because the Basque facts were used by Mielke to argue that laterals behave as [−continuant], since like nasals they undergo place assimilation, whereas [+continuant] [r] does not. Under the account offered in this section, however, there is another explanation available, as suggested by a reviewer. Nasals assimilate to all places, but laterals assimilate only to coronals, since *LatLab and *LatDors are high-ranked. Suppose there is also a set of constraints regulating the co-occurrence of a feature [rhotic], such that only a single rhotic exists. If these constraints and Ident-[rhotic] all outrank Share-F, the rhotic will fail to assimilate, and continuancy need not be invoked. Finally, laterality may or may not be lost when Place or SV is neutralized, for example in codas. Here the grammar is modeled on the ones in (38), but with *Coda-F instead of Share-F.

I have now outlined analyses that explain all the variation in the behavior of laterality, without reference to feature geometry, and using only rather simple markedness constraints on the co-occurrence of [lateral] with the various Place features, and with the feature [sonorant]. These interact with familiar faithfulness constraints and with markedness constraints that create pressure for change, such as Share-F, Syllable Contact, or *Coda-F. The clear preferences for coronal and sonorant laterals are captured by positing universal rankings of the relevant feature co-occurrence constraints. As always in OT, these constraints are violable, and this correctly predicts the observed cross-linguistic variation in lateral behavior that is such a problem for a fixed universal feature geometry.

9 Conclusion

The topic of this chapter has been laterals. As I hope is clear, they do not fit tidily into current phonological theories, especially when it comes to their distinctive feature make-up. Considering how common they are in language, this is something of an embarrassment. After all, they are neither unusual, prone to disappear over time, nor hard to acquire (that is, even if the child acquires them late, they get


them sooner or later, which is more than can be said for [s] and [P] for some speakers of British English). It is therefore incumbent on a good theory to accommodate them. We have seen that their behavior is more variable across languages than that of most other sounds. So, for example, [t] always behaves like a voiceless coronal stop, and its distinctive features and their placement in the feature geometry are rarely if ever in dispute. So why should laterals be different, and are there other classes of sounds that exhibit comparable variability? Variable behavior might be seen whenever the features are most readily produced and perceived on a certain type of segment, but can with some effort also be produced and perceived on sounds of other types too. For example, [strident] sounds, in which the turbulence produced at the point of constriction is sufficiently strong, and/or where the ensuing airstream then hits a sharp obstacle like the teeth, are easy to produce with the tip or blade of the tongue, but hard to produce elsewhere. We derive from this a constraint hierarchy *[Labial, strident] >> *[Coronal, strident]. Languages which contrast [ɸ] and [f], like Ewe, arguably violate the former as well as the latter. Turbulent airflow also requires a period of incomplete closure, or continuancy, so we also derive *[−cont, strident] >> *[+cont, strident]. Languages that violate the former have strident affricates, which have often been argued to be strident stops. In principle, then, the interactions of these constraints might also produce variation comparable to that we have seen with laterals. For other features, no such variation is to be expected. [anterior] and [distributed] refine the type of contact the tip or blade of the tongue makes with the roof of the mouth. As such they can only be present in Coronals, and a sound that is [Dorsal, +anterior] is phonetically uninterpretable.

Mielke, by contrast, takes the variability in behavior of “ambivalent segments” like laterals to be an argument against universally defined distinctive features. Instead, he argues for “emergent distinctive features” based on phonetic similarity. Laterals, for example, may pattern with either continuants (16 languages) or noncontinuants (61 languages) because, like continuants, they do not have totally blocked airflow, but like non-continuants they do have “a blockage of airflow past the primary structure.” It is not clear how his proposals bear on those cases in §7.1 where the variability of laterals concerns their coronality rather than their continuancy, and thus where no appeal to variation in continuancy would seem to explain their behavior. It might be possible, however, to develop an extension of his approach from the starting observation that laterals are produced with both coronal and dorsal tongue gestures. Languages might therefore differ in which gesture they choose to interpret as a distinctive place feature for laterals. I leave the fleshing out of this idea for future research.

A substantial part of this chapter has focused on where the feature [lateral] sits in the feature geometry, but the proposal outlined in §7 and §8 makes no use of feature geometry at all. In this respect it is entirely compatible with the proposals of Padgett (1995, 2002), in which features form classes, but in which these classes are not embodied as superordinate nodes (with the exception of a root node). Instead of asking which node dominates [lateral], one would ask which class(es) of features [lateral] belongs to.
There is then no reason why it might not belong to more than one class, for example Place and SV (or Manner), or to none. The feature co-occurrence proposal goes further, however, in that as far as [lateral]


is concerned it does not even make use of feature classes. For die-hard proponents of feature geometry, I should note that it would be possible to combine the feature co-occurrence proposal with feature-geometric representations if other phenomena make this desirable, but only if [lateral] were directly under the root node.

REFERENCES Alba, Orlando. 1988. Estudio sociolingüístico de la variación de las líquidas finales de palabra en el español cibaeño. In Robert M. Hammond & Melvyn C. Resnick (eds.) Studies in Caribbean Spanish dialectology, 1–12. Washington, DC: Georgetown University. Anderson, John M. & Colin J. Ewen. 1987. Principles of dependency phonology. Cambridge: Cambridge University Press. Bao, Zhiming. 1992. A note on [Lateral]. Unpublished ms., Ohio State University. Bauer, Robert S. & Paul K. Benedict. 1997. Modern Cantonese phonology. Berlin & New York: Mouton de Gruyter. Beckman, Jill N. 1998. Positional faithfulness. Ph.D. dissertation, University of Massachusetts, Amherst. Published 1999, New York: Garland. Blevins, Juliette. 1994. A place for lateral in the feature geometry. Journal of Linguistics 30. 301–348. Botma, Bert. 2004. Phonological aspects of nasality: An element-based dependency approach. Ph.D. dissertation, University of Amsterdam. Botma, Bert, Colin J. Ewen & Erik Jan van der Torre. 2008. The syllabic affiliation of postvocalic liquids: An onset-specifier approach. Lingua 118. 1250–1270. Bradley, Travis G. 2006. Contrast and markedness in complex onset phonotactics. Southwestern Journal of Linguistics 25. 29 –58. Brown, Cindy. 1995. The feature geometry of lateral approximants and lateral fricatives. In Harry van der Hulst & Jeroen van der Weijer (eds.) Leiden in last, 41–88. The Hague: Holland Academic Graphics. Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row. Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In Goldsmith (1995), 245 –306. Cohn, Abigail C. 1992. The consequences of dissimilation in Sundanese. Phonology 9. 199–220. Evans, Nicholas D. 1995. Current issues in the phonology of Australian languages. In Goldsmith (1995), 723–761. Flemming, Edward. 2002. Auditory representations in phonology. London & New York: Routledge. Gick, Bryan, Fiona Campbell, Sunyoung Oh & Linda Tamburri-Watt. 2006. Toward universals in the gestural organization of syllables: A cross-linguistic study of liquids. Journal of Phonetics 34. 49–72. Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell. Guitart, Jorge M. 1976. Markedness and a Cuban dialect of Spanish. Washington, DC: Georgetown University Press. Guitart, Jorge M. 1985. Variable rules in Caribbean Spanish and the organization of phonology. In Frank H. Nuessel (ed.) Current issues in Hispanic phonology and morphology, 28–33. Bloomington: Indiana University Linguistics Club. Hale, Mark & Charles Reiss. 2008. The phonological enterprise. Oxford: Oxford University Press. Harris, James W. 1985. Autosegmental phonology and liquid assimilation in Havana Spanish. In Larry D. King & Catherine A. Maley (eds.) Selected papers from the 13th Linguistic Symposium on Romance Languages, 127–148. Amsterdam: John Benjamins.


Harris, James W. & Ellen M. Kaisse. 1999. Palatal vowels, glides and obstruents in Argentinian Spanish. Phonology 16. 117–190. Hayes, Bruce. 1986. Assimilation as spreading in Toba Batak. Linguistic Inquiry 17. 467–499. Hegarty, Michael. 1989. An investigation of laterals and continuancy. Unpublished ms., MIT. Heid, Sebastian & Sarah Hawkins. 2000. An acoustical study of long-domain /r/ and /l/ coarticulation. Proceedings of the 5th Seminar on Speech Production: Models and Data, 77–80. Munich: Institut für Phonetik und Sprachliche Kommunikation, Ludwig-Maximilians-Universität. Holt, D. Eric. 2002. The articulator group and liquid geometry: Implications for Spanish phonology present and past. In Caroline R. Wiltshire & Joaquim Camps (eds.) Romance phonology and variation, 85–99. Amsterdam & Philadelphia: John Benjamins. Hsu, Chai-Shune. 1996. Voicing underspecification in Taiwanese word-final consonants. UCLA Working Papers in Linguistics 2: Papers in Phonology 3. 1–24. Hualde, José Ignacio. 1991. Basque phonology. London & New York: Routledge. Inkelas, Sharon & Yvan Rose. 2008. Positional neutralization: A case study from child language. Language 83. 707–736. Iverson, Gregory K. & Hyang-Sook Sohn. 1994. Liquid representation in Korean. In Young-Key Kim-Renaud (ed.) Theoretical issues in Korean linguistics, 1–19. Stanford: CSLI. Iverson, Paul, Patricia K. Kuhl, Reiko Akahane-Yamada, Eugen Diesch, Yohich Tohkura, Andreas Kettermann & Claudia Siebert. 2003. A perceptual interference account of acquisition difficulties for non-native phonemes. Cognition 87. B47–B57. Johnson, Wyn & David Britain. 2007. L-vocalisation as a natural phenomenon: Explorations in sociophonology. Language Sciences 29. 294–315. Kaisse, Ellen M. 2000. Laterals are [–continuant]. Unpublished ms., University of Washington. Ladefoged, Peter. 2006. A course in phonetics. 5th edn. Boston: Thomas Wadsworth. Ladefoged, Peter & Ian Maddieson. 1996. The sounds of the world’s languages. Oxford & Malden, MA: Blackwell. Li, Paul Jen-Kuei. 1974. Alternations between semi-consonants and fricatives or liquids. Oceanic Linguistics 13. 163–186. Macken, Marlys A. 1980. The child’s lexical representation: The ‘puzzle-puddle-pickle’ evidence. Journal of Linguistics 16. 1–17. Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University Press. Maddieson, Ian. 1987. Linguolabials. Journal of the Acoustical Society of America 81. S65. Maddieson, Ian. 2008. Lateral consonants. In Martin Haspelmath, Matthew S. Dryer, David Gil & Bernard Comrie (eds.) World atlas of language structures online, ch. 8. Munich: Max Planck Digital Library 2008. http://wals.info/feature/description/8. Madelska, L. & M. Witaszek-Samborska. 1998. Zapis fonetyczny. Poznań: Wydawnictwo Naukowe UAM. Mascaró, Joan. 1976. Catalan phonology and the phonological cycle. Ph.D. dissertation, MIT. Mielke, Jeff. 2005. Ambivalence and ambiguity in laterals and nasals. Phonology 22. 169–203. Mithun, Marianne & Hasan Basri. 1986. The phonology of Selayarese. Oceanic Linguistics 25. 210–254. Morén, Bruce. 2006. Consonant–vowel interactions in Serbian: Features, representations and constraint interactions. Lingua 116. 1198–1244. Nacaskul, Karnchana. 1978. The syllabic and morphological structure of Cambodian words. In Philip N. Jenner (ed.) Mon-Khmer studies VII, 183–200. Honolulu: University Press of Hawaii. Narayanan, Shrikanth S., Abeer A. Alwan & Katherine Haker. 1997. Toward articulatory-acoustic models for liquid approximants based on MRI and EPG data, part I: The laterals. Journal of the Acoustical Society of America 101. 1064–1077. Ohala, John J. 1974. Phonetic explanation in phonology. Papers from the Annual Regional Meeting, Chicago Linguistic Society: Parasession on natural phonology. 251–274.


Padgett, Jaye. 1991. Stricture in feature geometry. Ph.D. dissertation, University of Massachusetts, Amherst. Padgett, Jaye. 1995. Feature classes. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 385 –420. Amherst, MA: GLSA. Padgett, Jaye. 2002. Feature classes in phonology. Language 78. 81–110. Paradis, Carole & Jean-François Prunet (eds.) 1991. The special status of coronals: Internal and external evidence. San Diego: Academic Press. Parker, Steve. 2008. Sound level protrusions as physical correlates of sonority. Journal of Phonetics 36. 55 –90. Piggott, Glyne L. 1991. The geometry of sonorant features. Unpublished ms., McGill University. Piñeros, Carlos. 2003. Accounting for the instability of Palenquero voiced stops. Lingua 113. 1185 –1222. Quilis, A., M. Esgueva, M. L. Gutiérrez Araus & M. Canterero. 1979. Características acústicas de las consonantes laterales españolas. Lingüística Española Actual 1. 233–343. Recasens, Daniel & Aina Espinosa. 2005. Articulatory, positional and coarticulatory characteristics for clear /l/ and dark /l/: Evidence from two Catalan dialects. Journal of the International Phonetic Association 35. 1–25. Rice, Keren. 1994. Laryngeal features in Athapaskan languages. Phonology 11. 107–147. Rice, Keren & Peter Avery. 1991. On the relationship between laterality and coronality. In Paradis & Prunet (1991), 101–124. Sagey, Elizabeth. 1986. The representation of features and relations in nonlinear phonology. Ph.D. dissertation, MIT. Scobbie, James M. & Alan Wrench. 2003. An articulatory investigation of word final /l/ and /l/ sandhi in three dialects of English. In M. J. Solé, D. Recasens & J. Romero (eds.) Proceedings of the 15th International Congress of Phonetic Sciences, 1871–1874. Barcelona: Causal Productions. Shaw, Patricia A. 1991. Consonant harmony systems: The special status of coronal harmony. In Paradis & Prunet (1991), 125–157. Smith, Neil V. 2010. Acquiring phonology: A cross-generational case-study. Cambridge: Cambridge University Press. Spencer, Andrew. 1984. Eliminating the feature [lateral]. Journal of Linguistics 20. 23–43. Sproat, Richard & Osamu Fujimura. 1993. Allophonic variation in English /l/ and its implications for phonetic implementation. Journal of Phonetics 21. 291–311. Steriade, Donca. 1987. Redundant values. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23(2). 339–362. Sulkala, Helena & Merja Karjalainen. 1992. Finnish. London & New York: Routledge. Trigo, Loren. 1988. On the phonological behavior and derivation of nasal glides. Ph.D. dissertation, MIT. Walsh Dickey, Laura. 1994. Representing laterals. Papers from the Annual Meeting of the North East Linguistic Society 25. 535 –550. Walsh Dickey, Laura. 1997. The phonology of liquids. Ph.D. dissertation, University of Massachusetts, Amherst. Weijer, Jeroen van de. 1995. Continuancy in liquids and in obstruents. Lingua 96. 45 –61. Whitney, William Dwight. 1889. Sanskrit grammar, including both the classical language, and the older dialects of Veda and Brahmana. 2nd edn. Cambridge, MA: Harvard University Press. Yip, Moira. 1989. Feature geometry and cooccurrence restrictions. Phonology 6. 349 –374. Yip, Moira. 1992. Prosodic morphology in four Chinese dialects. Journal of East Asian Linguistics 1. 1–35. Yip, Moira. 2001. Segmental unmarkedness versus input preservation in reduplication. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory, 206 –228. Cambridge: Cambridge University Press.


Yip, Moira. 2003. Some real and not-so-real consequences of comparative markedness. Theoretical Linguistics 29. 53–64. Yip, Moira. 2004. Lateral survival: An OT account. International Journal of English Studies 4(2). 25–51. Yip, Moira. 2005. Variability in feature affiliations through violable constraints: The case of [lateral]. In Marc van Oostendorp & Jeroen van de Weijer (eds.) The internal organization of phonological segments, 63–91. Berlin & New York: Mouton de Gruyter. Yu, Alan C. L. 2007. A natural history of infixation. Oxford: Oxford University Press.

32 The Representation of Intonation

Amalia Arvaniti

1 Introduction

It is a well-known truism that no utterance is ever produced in a strict monotone; all utterances, in all languages, show some pitch modulation. Such changes in pitch – impressionistically described as rises and falls – are due to changes in fundamental frequency or F0, the physical property of the speech signal that is determined by the basic rate of vibration of the vocal folds and gives rise to the percept of pitch. Although pitch modulations exist in all languages, their origin and function differ, in that pitch patterns may be specified either at both the lexical and phrasal levels or only at the phrasal level, resulting in more or less dense tonal specifications, respectively (Gooden et al. 2009). The term intonation is used to refer to phrasal tonal patterns, while the terms pitch accent and tone are traditionally used to refer to lexical tonal specifications (chapter 45: the representation of tone). Simplifying somewhat, in languages like English, Italian, Greek, and many other European languages the entire F0 contour is specified at the phrasal level by means of a complex interplay between metrical structure, prosodic phrasing, syntax, and pragmatics; these factors determine where pitch movements will occur and of what type they will be. In languages referred to as tone languages – such as Mandarin, Thai, and Igbo – most syllables are lexically specified for tone and tonal changes affect lexical meaning; in languages often referred to as pitch accent languages – such as Japanese, Swedish, and Serbian – tone operates in a similar fashion, except that at most one syllable in each word is lexically specified for tone. In both tone and pitch accent languages additional tonal patterns are specified at the phrasal level. Here the focus is on languages without lexical tonal specifications, since it is the intonation of these languages that has been mostly examined. Determining the structure of pitch modulation and the primitives that make up pitch contours in languages without lexical tone is challenging, since F0 changes are not as discrete and easily identifiable as in “tonal” languages, their connections to segmental material are less easy to determine, and associated meanings are harder to pinpoint since they deal with information structure and pragmatic interpretation rather than lexical semantics. The following examples illustrate these points.
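(A brief methodological aside, mine rather than the chapter's: F0 contours like those plotted in the figures below are obtained by estimating the rate of vocal-fold vibration frame by frame, typically with dedicated software such as Praat. The core idea can be sketched with a simple autocorrelation estimator; all names and parameter values here are illustrative only.)

```python
# Illustrative sketch: estimate F0 (Hz) for one voiced frame of speech by
# finding the lag at which the signal best correlates with itself.
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Return an F0 estimate for a windowed frame, or None if the frame
    shows too little periodicity to count as voiced."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # plausible period range
    lag = lo + int(np.argmax(ac[lo:hi]))      # strongest periodicity
    if ac[lag] < 0.3 * ac[0]:                 # crude voicing check
        return None
    return sr / lag                           # F0 = 1 / period

# Toy check: a harmonic signal with a 150 Hz fundamental.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
frame = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 6))
print(estimate_f0(frame, sr))  # approximately 150 Hz
```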


Figure 32.1 Waveforms and F0 contours of two English utterances illustrating the incredulity contour (Hirschberg and Ward 1992); on the left, Me?!; on the right, A ballgown designer?!. Vertical lines indicate word boundaries

In Figure 32.1, two utterances are shown, Me?! and A ballgown designer?!, both using the rise-fall-rise melody that implies incredulity (Ward and Hirschberg 1985; Hirschberg and Ward 1992). They are plausible responses to a career advisor’s pronouncement that, according to test results, designing ballgowns is the recommended career choice for the speaker, who has all along dreamed of becoming an aerospace engineer (for similar examples, see Ladd 2008: 45–46). Although the short contour can be informally described as rise–fall–rise, the longer contour cannot be described in a similar fashion, as it shows a long low-level stretch between a rise–fall and a final rise. In Figure 32.2, Greek contours very similar to the English ones in Figure 32.1 are shown, though in the case of Greek these contours are used for wh-questions (Arvaniti and Baltazani 2005; Arvaniti and Ladd 2009). As can be seen, the same issue with overall shape arises here as well. Further, as Arvaniti and Baltazani (2005) note, the Greek melody in Figure 32.2 can also be used for polite requests employing an imperative; e.g. [ˈðose sti maˈria ˈliɣo neˈraci] ‘give Maria some water’ (lit. give to Maria a-little water-dim). Finally, Figure 32.3 illustrates two instances of another English melody: unlike the contours in Figures 32.1 and 32.2, which look different from each other but convey the same meaning in each case, the contours of Figure 32.3 are realizations of the same melody but convey different meanings, depending on the utterance.

Figure 32.2 Waveforms, transcriptions, and F0 contours of two Greek wh-questions; on the left, [ˈpça] ‘which (fem)’; on the right, [ˈpça maˈma tileˈfonise sti nosoˈkoma] ‘which mom called the nurse?’ Vertical lines indicate word boundaries

Figure 32.3 Waveforms and F0 contours of two English utterances; on the left, That’s really awesome; on the right, That’s twenty dollars. Vertical lines indicate word boundaries

The melody used is the default for That’s twenty dollars, but sounds blasé or sarcastic when used with That’s really awesome. Note that this is not because the melody is wrong for That’s really awesome: the use is legitimate and meaningful (if the speaker wishes to be sarcastic or convey her indifference).

These examples illustrate three main points about intonation. First, they show that the shape of intonational contours with a given pragmatic interpretation can vary substantially, depending on the segmental material with which they are uttered. Such differences are not random, but related to the overall prosodic structure of the utterance with which the contours are associated, including the number of syllables and the position of stresses (where applicable). Second, the examples show that contours do not have a constant meaning, either within or across languages; within a language, their interpretation may well depend on lexical and other choices that accompany the use of the melody; across languages, differences can be arbitrary. A successful theory of intonation should be able to capture these properties: it should be able to explain the connection between intonation and meaning and make generalizations from surface F0 data with sufficient predictive power to generate new contours of the same basic melody to “fit” new utterances of varying lengths and structures.

Although the above observations are by and large shared by most intonational models, the ways in which they treat these basic properties show substantial differences. As discussed in more detail in §2.1, many researchers have treated F0 contours as gestalts or configurations, that is, as holistic pitch movements that encompass entire utterances and have a uniform meaning. In other models, melodies are seen as being composed of primitives of some sort. These primitives are considered to be either local configurations (or dynamic tones), such as local rises and falls, or level tones, such as high, mid, and low. Here I review both the controversy between advocates of gestalt approaches to intonation and those who proposed analyses based on the decomposition of melodies into smaller elements, and the disagreement between researchers who use dynamic tones (that is, local configurations) as the primitives of intonational structure and those who advocate the use of level tones instead. As I show, however, focusing only on the form of intonational primitives avoids an even more fundamental question, namely which parts of a contour should be


represented at all. This question is addressed in more detail in §3.2, where the main argument is advanced that an inordinate attention to phonetic form and a reluctance to accept that intonation is part of a language’s phonology have hampered research and have led to analyses that by and large aim at reproducing entire melodies, but have little predictive power and cannot successfully generalize beyond the F0 contours they reproduce. As argued in §3.2 and §3.3, sparse representations that aim at capturing only the linguistically significant aspects of each contour can better handle both intonational form and intonational meaning.

2 Configurational models

2.1 Melodies as gestalts

As mentioned above, many researchers have treated intonation contours as gestalts, such as Bolinger (1951), Jones (1972), Cooper and Sorensen (1981), Hirst and Di Cristo (1998), Grabe et al. (2003), and Xu (2005). In these works, it is most often the case that intonational contours are seen as holistically and directly reflecting certain functional or structural aspects of speech, such as the distinction between questions and statements or that between levels of phrasing. According to Jones (1972: 279) – who in the last edition of An outline of English phonetics followed several earlier intonational analyses, notably those of Armstrong and Ward (1926) and Kingdon (1958) – English has two basic tunes, Tune 1 and Tune 2. These are a fall and rise respectively, which “may be spread over a large number of syllables, or [. . .] be compressed into smaller spaces.” Bolinger (1951: 208) also concludes his critique of level-based analyses (see §3.1) by saying that “intonation could not be a more appropriate illustration of the Gestalt.” More recently, Cooper and Sorensen (1981) presented a series of experiments in which contours are treated as undivided wholes and peak height is taken to directly reflect levels of phrasing. Modern versions of the gestalt approach include INTSINT (International Transcription System for Intonation; e.g. Hirst and Di Cristo 1998; Hirst et al. 2000), OXIGEN (Oxford Intonation Generator; Grabe et al. 2003), and PENTA (Parallel Encoding and Pitch Target Approximation; e.g. Xu 2005). In INTSINT, entire intonation contours are transcribed, using symbols that represent pitch movements. The movements, however, are not seen as primitives but rather as a means to transcribing the course of F0 over an entire utterance (hence their descriptive labels Higher, Lower, Upstepping, Downstepping, Same, Top, and Bottom, which express the course of F0 relative to preceding points and the overall range of the speaker). In OXIGEN, polynomials are used to model overall contour shape differences between statements and questions in British English. Finally, in PENTA, each syllable in a contour has its own pitch specification, while global aspects of the overall melody are directly linked to functional effects (a feature shared with OXIGEN); e.g. the use of an utterance as statement or question is said to lead to changes in overall pitch shape from fall to rise (Xu 2005). Configurational approaches have been quite popular for several reasons. First, their appeal is intuitive: F0 contours are more or less continuous, so, as Bolinger (1951: 206) put it, intonation can be seen as “a pattern [. . .] in the fundamental, down-to-earth sense of a continuous line that can be traced on a piece of paper”


(though, as noted in Arvaniti 2007, the fact that F0 looks continuous on paper or a computer monitor does not necessarily mean that it is perceived in this fashion). Second, the relationship between shape and function seems sufficiently natural in many cases that it has been argued to derive from the biological underpinnings of pitch production (e.g. Ohala 1983; for a thorough discussion of the biological code, see Gussenhoven 2004: ch. 5, who, however, does not adopt a configurationalist approach to intonation). This view is supported by certain general trends, such as the use of rising F0 for questions and falling F0 for statements for which it is possible to find empirical evidence in several languages (e.g. Grabe et al. 2003). Despite their popularity, configurational approaches face a major problem when it comes to accounting for intonational form. Specifically, if melodies were undivided wholes, they should simply shrink and stretch to “fit” the segmental duration of the utterance they co-occur with. There is plenty of evidence, however, that when tunes are matched with utterances of varying lengths and different metrical structures they are not realized in this simple manner. This was observed by ’t Hart et al. (1990: ch. 4), among the first researchers to use instrumental rather than impressionistic data for intonation research (e.g. Cohen and ’t Hart 1968; ’t Hart and Cohen 1973; ’t Hart and Collier 1975). They noticed that in their Dutch corpora sequences of pitch movements would appear on a single syllable in some instances, but would be separated by several syllables in others (cf. Figures 32.1 and 32.2). Importantly, ’t Hart et al. found that this elasticity, as they called it, did not affect the melodic identity of the contour (determined by means of perceptual experiments; see §2.3), even though it radically altered the overall contour shape (thus, the concept of elasticity can be juxtaposed with the compression envisaged by Jones 1972, which implies greater uniformity in the squeezing and stretching of contours). Results from several later studies support the original observations of ’t Hart et al. that certain aspects of the contour are important for listeners, while overall contour shape is not. Pierrehumbert and Steele (1989) varied in steps the position of the pitch peak in English melodies that can be holistically described as rise–fall–rise, and found that listeners imitating these stimuli produced not a continuum but two different contours, one with an early and one with a late peak. Similar results have also been presented by Redi (2003), following Pierrehumbert and Steele’s imitation protocol (argued by Gussenhoven 1999 to be the best way to examine categoricality in intonation). Similarly categorical responses to intonational continua that would be holistically seen as instances of the same contour have also been obtained by Rietveld and Gussenhoven (1995) for Dutch, D’Imperio and House (1997) for Neapolitan Italian, and Botinis (1989) for Greek, inter alia. The contours in Figures 32.1 and 32.2 also illustrate this general point. In the monosyllabic utterances, the rise–fall–rise stretches over the entire syllable. In the longer utterances, however, the rise co-occurs with the first stressed syllable (with some peak delay) and the final rise is realized on the last syllable, while the fall and subsequent low-level stretch vary depending on the language and length of the utterance. 
As a result, the contours of the longer utterances are not stretched-out copies of the shorter contours, nor are the shorter contours compressed versions of the longer contours; these differences, extensively discussed in Arvaniti and Ladd (2009), are illustrated in Figure 32.4, using the Greek contours of Figure 32.2.


Figure 32.4 F0 contours of the Greek wh-questions shown in Figure 32.2; the solid gray lines are compressed and stretched-out copies of the long and short contour respectively
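The mismatch in Figure 32.4 can be made concrete with a small numerical sketch (my own, with toy turning-point values, not data from the chapter): uniformly stretching a short contour displaces its peak, whereas the attested strategy keeps the peak anchored near the first stressed syllable and the final rise on the last syllable, interpolating across the middle.

```python
# Sketch: uniform stretching of a short rise-fall-rise vs. anchoring its
# turning points to prosodic landmarks. Times in seconds, F0 in Hz (toy).
import numpy as np

short_t = np.array([0.0, 0.15, 0.30, 0.45])
short_f0 = np.array([140.0, 250.0, 130.0, 220.0])

def uniform_stretch(t, f0, new_dur):
    """Gestalt prediction: scale every turning point by the same factor."""
    return t * (new_dur / t[-1]), f0

def anchored(f0, stress_end, last_syll, new_dur):
    """Attested pattern: peak near the first stress, final rise on the last
    syllable, low stretch interpolated in between (cf. Figure 32.4)."""
    t = np.array([0.0, stress_end, stress_end + 0.15, last_syll, new_dur])
    return t, np.array([f0[0], f0[1], f0[2], f0[2], f0[3]])

t1, f1 = uniform_stretch(short_t, short_f0, new_dur=2.0)
t2, f2 = anchored(short_f0, stress_end=0.2, last_syll=1.8, new_dur=2.0)
print("uniform: ", list(zip(t1.round(2), f1)))   # peak drifts to 0.67 s
print("anchored:", list(zip(t2.round(2), f2)))   # peak stays near the stress
```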

Differences in contour shape may also relate to the number and position of stressed syllables in an utterance and the location of the word in focus. This is experimentally shown in Arvaniti et al. (2006a), who studied the default melody of Greek polar questions in which the position of focus can vary. They show that the shape of the contour is strongly affected by the position of the stressed syllables and of the word in focus (see also Arvaniti and Baltazani 2005; Arvaniti 2007). The focus effect in particular is illustrated in Figure 32.5, which shows the two contours that can be used with the sentence [ˈpinun lemoˈnaða] ‘they drink lemonade’ when it is uttered as a question with focus on the verb (dotted line) or the noun (solid line). As can be seen, no description in terms of overall shape can possibly cover both contours; at best, the late focus question would be characterized as rise–fall–rise–fall and the early focus question as rise–fall, but this ignores the location of the rise–fall part that the two contours share and the significance of this location both for understanding the pragmatics of the two questions and for their naturalness (for details, see Arvaniti et al. 2006a).

Holistic approaches, then, suffer from two problems. First, they cannot represent in the same manner contours that superficially look different, like the contours in Figure 32.5. At the same time, holistic approaches cannot account for systematic differences between realizations: e.g. they cannot account in a principled manner for the fact that the syllable [nun] in Figure 32.5 is low in the dotted, early focus contour, but rising in the solid, late focus contour. In short, configurational approaches cannot represent different instantiations of the same melody in a way that can either capture their similarity or predict their differences, thereby failing one of the main criteria for an adequate theory of intonational phonology mentioned in §1.

2.2 Intonational gestalts and meaning

In addition to issues with intonational form, gestalt approaches encounter problems with intonational meaning. In gestalt models, overall contour shape is said to associate with differences in meaning. Yet, as illustrated in §1, the relationship between melody and meaning is not one-to-one: the same melody can lend different pragmatic nuances to different utterances, while the same meaning can be expressed by superficially different-looking contours.


Figure 32.5 Waveform, transcription, and F0 contours for the phrase [ˈpinun lemoˈnaða] ‘they drink lemonade’, uttered as a question with focus on [ˈpinun] ‘they drink’ (dotted contour) or on [lemoˈnaða] ‘lemonade’ (solid contour); the former could be glossed as ‘is lemonade something they would drink?’ and the latter as ‘is it lemonade that they are drinking?’ Vertical lines indicate syllable boundaries

This lack of one-to-one correspondence has been repeatedly noted over the years (among many, Bolinger 1964; Lehiste 1970: 95ff. and references therein; Ladd 1980; Pierrehumbert 1980; ’t Hart et al. 1990: ch. 4), and prompted Pike (1945: 23ff.) to strongly caution against the practice of investigating contour meaning on the basis of grammatical structure (such as pitting statements against questions). Overall, then, melodies do not appear to have specific functions, and indeed attempts to describe the melodies of specific pragmatic nuances, such as irony, have proved unsuccessful (e.g. Bryant and Fox Tree 2005). In addition, cross-linguistic research has shown that functional effects of the sort favored by gestalt approaches are expressed in language-specific ways. Such findings abound and strongly argue against a natural or direct relationship between intonational form and function. For instance, it has been argued that focus is universally expressed as local pitch expansion with a concomitant reduction in pitch range post-focally (e.g. Xu 2005). Yet a review of the literature clearly shows that this is far from a universal mechanism for the prosodic marking of focus. For example, in Greek polar questions the word in focus has the lowest F0 in the entire utterance and pitch expansion is associated with the post-focal region (Arvaniti et al. 2006a; see Figure 32.5 for an illustration). Taiwanese relies on duration to mark focus rather than changes in pitch (Pan 2008), while in many other languages pitch expansion is just one, optional, manifestation of an overall prosodic reorganization under focus (e.g. Chen 2006, 2010 for Mandarin; de Jong 1995 and Harrington et al. 2000 for English; Baltazani 2006 and Arvaniti et al. 2006b for Greek; Jun 2005a for Korean; Venditti et al. 2008 for Japanese). Perhaps the strongest counter-argument against the view that the relationship between focus and pitch range is natural and direct is the fact that not all languages can use intonation to mark focus (e.g. Swerts et al. 2002 on Italian; see Ladd 2008: ch. 6 for a discussion). If the relationship between focus and intonation is natural and direct, there is no good explanation of why some languages do not avail themselves of this option. The presence of an arbitrary relationship between intonation and meaning (as in all aspects of linguistic structure) is also evident in cross-linguistic data from


questions, which, as mentioned, are often assumed to end in rises while statements are said to end in falls (e.g. Ohala 1983; Gussenhoven 2004: ch. 4; Xu 2005). But this idea is not supported cross-linguistically. Low or falling intonation is used with questions in typologically diverse languages, such as Bengali (Hayes and Lahiri 1991), Chickasaw (Gordon 2005), Bininj Gun-wok (Bishop and Fletcher 2005), many Niger-Congo languages (Rialland 2007), Greek (Arvaniti et al. 2006a, 2006b; Arvaniti and Ladd 2009), and Romani (Arvaniti and Adamou, forthcoming). Conversely, it is also the case that statements end in rises in many languages, including Bengali (Hayes and Lahiri 1991), Chickasaw (Gordon 2005), and many varieties of English (e.g. Grabe et al. 2000; Fletcher et al. 2005). Overall then, cross-linguistic research has confirmed the observation made by many scholars over the last century that the relationship between meaning and melody (as in global contour shape) is arbitrary and many-to-many. In turn, these results support the view that no useful generalizations about either intonational meaning or intonational form can be made on the basis of contour shape and its relationship to broad functional effects.

2.3 Intonation as a composite of rises and falls

If it is accepted that melodies are composed of independent elements, the question that arises is the nature of these elements. In this respect, most researchers have favored approaches in which the primitives are dynamic tones or movements, such as rises and falls, though the possibility of level tones (monotones) within such systems is also acknowledged (Bolinger 1964, 1986; Crystal 1972; Ladd 1980; ’t Hart et al. 1990). Perhaps the most thoroughly tested of these models is that of the IPO (Institute for Perception Research, ’t Hart et al. 1990, and references therein). The IPO model was developed on the basis of Dutch, but it has also been used for the description of intonation in other languages, such as English, German, and Russian (see ’t Hart et al. 1990: ch. 4, and references therein). In this system, the main elements are rises and falls, a choice justified on perceptual grounds: the IPO researchers noticed that pitch changes were realized more slowly than possible by laryngeal control (as determined by the studies of Ohala and Ewan 1973 and Sundberg 1979), and concluded that the purpose of this slow execution must be to give listeners the perception of “pitch movement” rather than of a jump in pitch (’t Hart et al. 1990: 71). Rises and falls are composed of four “perceptual features”: pitch direction, timing relative to syllable boundaries, rate of pitch change, and excursion size (see the sketch at the end of this section). In addition, rises and falls combine into larger configurations or contours; e.g. a rise–plateau–fall creates a “hat pattern” while a rise–fall creates a “pointed hat.” This architecture makes the system configurational in two ways, since both the primitives and their combinations are configurations. An important feature of the IPO approach is that meaning does not play a part in establishing either the primitives or the contours on the grounds that “intonation features have no intrinsic meaning” (’t Hart et al. 1990: 110). Instead, decisions as to the number and nature of melodies are based on experimental evidence derived from the close-copy technique, in which listeners are asked whether stylized versions of various contours sound the same or different. By using experiments like these, the IPO researchers determined the limits within which


contours may vary phonetically; in turn, variants that are considered different by listeners are used to establish the elements of the intonational system. The IPO stance toward the role of meaning in the analysis of intonation is the exact opposite of that taken by the British school (among many, Halliday 1967, 1970; Crystal 1972; O’Connor and Arnold 1973), where differences in meaning are crucial for establishing the existence of both primitives and entire tunes. In this system, tone units (or tunes or tone groups) can span entire utterances, but are also decomposed into smaller parts, the pre-head, head, nucleus, and tail (this is the division proposed by Crystal 1972 and O’Connor and Arnold 1973; for a review of additional analyses, see Ladd 1980: 16). The nucleus, defined as the pitch movement on the stressed syllable of the most important word of the utterance, is the only required element of a tune. The F0 of any unstressed syllables following it is the tail, while the F0 stretch covering all syllables from the first stressed syllable to the nucleus is the head; the F0 of any unstressed syllables preceding the head forms the pre-head. As can be surmised, not all melodies include all four primitives, while particular primitives can span arbitrary lengths of an utterance. For instance, the short utterance in Figure 32.1 consists only of a nucleus, while the two utterances in Figure 32.3 include a pre-head (that’s), a high head (twenty/really), a low-falling nucleus (the stressed syllable of dollars/awesome), and a tail (the unstressed syllable of dollars/awesome). These elements do not combine entirely freely. As an example, O’Connor and Arnold (1973) distinguish seven nuclear tones, four types of heads, and two types of pre-heads, but only 20 types of tone groups, instead of the 56 that all possible combinations of primitives would produce (tails follow the movement of the nucleus, so they do not enter into the calculation). The use of primitives of unconstrained length forces analyses of the British school into an artificial distinction between simple and compound rise–falls and fall–rises (e.g. Halliday 1970; O’Connor and Arnold 1973). The compound tunes fit uneasily into the system, since they are said to contain two nuclear tones. Furthermore, most practitioners admit that it is hard to distinguish simple tunes from their compound counterparts on the basis of meaning, a serious drawback for a system in which the role of meaning is central (e.g. Crystal 1972; O’Connor and Arnold 1973: ch. 1). This problem is illustrated by the contours in Figure 32.1: the tune of Me?! is more plausibly analyzed as a simple rise–fall, but that of A ballgown designer?! can only be treated as a compound tune, though their meaning affinity is evident. Finally, like gestalt models, the British school analyses face problems with their treatment of intonational meaning, which is hard to pinpoint, yet must be determined if the elements of the system are to be defined. As a result, the analyses of different authors disagree on the meaning and number of tunes, including the number and shape of nuclei; e.g. Halliday and Crystal recognize one type of fall, while O’Connor and Arnold distinguish between a high fall and a low fall. To the extent that meanings can be determined, they tend to be vague and occasionally contradictory, and to a large extent dependent on grammatical aspects of the utterance. 
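Returning to the IPO primitives: the four perceptual features lend themselves to a direct parameterization. The sketch below is my own construction, not the IPO implementation, and all numbers in it are illustrative; it renders a movement as a straight-line F0 segment in semitones, in the spirit of IPO stylizations.

```python
# Sketch: a pitch movement specified by the four IPO "perceptual features"
# (direction, timing, rate of change, excursion size), rendered as a
# straight-line F0 segment.
from dataclasses import dataclass

@dataclass
class Movement:
    direction: int   # +1 = rise, -1 = fall
    onset: float     # start time relative to the syllable onset (s)
    rate: float      # rate of pitch change (semitones/s)
    size: float      # excursion size (semitones)

    def render(self, t0, f0_start):
        """Return (time, F0) breakpoints for a movement starting at
        absolute time t0 from f0_start Hz."""
        dur = self.size / self.rate
        f0_end = f0_start * 2 ** (self.direction * self.size / 12.0)
        return (t0 + self.onset, f0_start), (t0 + self.onset + dur, f0_end)

# A "pointed hat": a rise and a fall realized on the same accented syllable.
rise = Movement(direction=+1, onset=0.0, rate=50.0, size=6.0)
fall = Movement(direction=-1, onset=0.12, rate=50.0, size=6.0)
print(rise.render(t0=0.5, f0_start=110.0))
print(fall.render(t0=0.5, f0_start=110.0 * 2 ** 0.5))
```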
For example, O’Connor and Arnold (1973: 78–79) describe the “Jackknife” (simple rise–fall) as showing that the speaker is “impressed, perhaps awed,” but they also add that the speaker can use it to sound “complacent, self-satisfied, even smug” or as “shrugging aside any involvement.” These meanings apply if the Jackknife is used with statements, but change if it is used with wh-questions, polar questions, or commands (for discussions of the problems of the British school approach to intonational meaning see Liberman 1975: 132ff. and Pierrehumbert and Hirschberg 1990).

2.4 Superpositional models

In a number of configurational models, contours are said to be composed of two elements, a general trend and local perturbations which “ride” on this overall movement. This conception of intonation was also espoused by Bolinger, who distinguished accentuation from intonation, using accentuation to refer to pitch movements (accents) on stressed syllables, and intonation to refer to the general course of F0, “the rise and fall of pitch as it occurs along the speech chain” (Bolinger 1986: 194). The IPO system is one such superpositional model, in that its primitives and contours are seen as localized movements superposed on a larger declination component, which is taken to be largely automatic and due to the drop in subglottal pressure during the course of an utterance (declination reset and local movements, on the other hand, are seen as actively controlled by the speaker; ’t Hart et al. 1990: ch. 5). The exact role of declination and its physiology are still a matter of debate (see e.g. Pierrehumbert and Beckman 1988: ch. 3; Gussenhoven 2004: ch. 6), though evidence such as that provided for Japanese downtrends by Pierrehumbert and Beckman (1988) does not support the idea of declination playing as important a role as the IPO scholars envisioned. Superpositional models have also been presented by Fujisaki (1983, 2004), Gårding (1983, 1987), and Thorsen (1980, 1985, 1986). Simplifying somewhat, in Fujisaki’s system a phrase command results in the rising–falling course of F0 throughout an utterance (or a part thereof), with accent commands being responsible for more localized perturbations. Gårding (1983, 1987) posits grids (quasi-parallel lines) within which most local F0 minima and maxima can be fitted; the overall range and direction of the grid (rising or falling or a combination thereof) reflect functional differences between utterances, such as the distinction between statements and questions. Similarly, Thorsen (1980: 1022) suggests that the rate of F0 drop in Danish is directly related to utterance function: “falling intonation contours are associated with declarative, intermediate contours with nonfinal, and flat contours with interrogative sentences.” Due to the connection between communicative functions and overall F0 trends, the models of Thorsen and Gårding face similar issues to gestalt models with respect to meaning. On the other hand, Fujisaki’s model, which does not rely on meaning distinctions, must resort to counterintuitive solutions – such as negative accent commands and phrase commands that span linguistically arbitrary stretches – in order to adequately describe the course of F0 in languages other than Japanese (e.g. Fujisaki et al. 1997 on Greek; Fujisaki et al. 2005 on Mandarin; Gu et al. 2007 on Cantonese; for a discussion of these problems, see Ladd 2008: 23ff.; Arvaniti and Ladd 2009).
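The superpositional architecture is easiest to see in equation form. The following sketch is a simplified rendering of Fujisaki-style superposition (the response functions follow the general shape of Fujisaki 1983, but all parameter values and names are my own illustrative choices): log F0 is a baseline plus a slowly decaying phrase component plus local accent components.

```python
# Simplified Fujisaki-style superposition: log F0(t) = baseline
# + phrase component + accent components. Values are illustrative only.
import numpy as np

def phrase(t, t0, ap, alpha=2.0):
    """Phrase command response: a rise-then-decay launched at time t0."""
    x = np.maximum(t - t0, 0.0)
    return ap * alpha**2 * x * np.exp(-alpha * x)

def accent(t, t1, t2, aa, beta=20.0, gamma=0.9):
    """Accent command response: a smoothed, clipped step between t1 and t2."""
    def g(x):
        x = np.maximum(x, 0.0)
        return np.minimum(1.0 - (1.0 + beta * x) * np.exp(-beta * x), gamma)
    return aa * (g(t - t1) - g(t - t2))

t = np.linspace(0.0, 2.0, 201)
log_f0 = (np.log(80.0)                  # speaker baseline (Hz, logged)
          + phrase(t, 0.0, 0.5)         # one phrase command
          + accent(t, 0.4, 0.7, 0.4)    # two local accent commands
          + accent(t, 1.1, 1.4, 0.3))
f0 = np.exp(log_f0)
print(f0[::40].round(1))                # a declining contour with two humps
```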

3 Pitch levels as primitives

3.1 Early level-based models

Descriptions of intonation by means of level tones date from the American structuralists (Pike 1945; Trager and Smith 1951; Hockett 1955; Trager 1961). In these systems, intonation is analyzed by means of four levels, extra-high, high,


mid, and low. The level tones of these analyses are meant to be phonological abstractions equivalent to phonemes (chapter 11: the phoneme); as such, they are said to be defined relative to each other, rather than each representing a specific pitch range. These early analyses were heavily criticized by configurationalists, most notably Bolinger (1951), who questioned the claim that the four levels are relative, pointing out that if this assertion is taken at face value, it is not possible to distinguish combinations such as 123 from 234, although, theoretically, such combinations should be distinct. Thus, Bolinger concluded that level tones cannot be relative but must “rove each in its own bailiwick” (1951: 200), and set out to test this hypothesis by recording utterances in various ways (as Bolinger termed them, i.e. melodies differing in various aspects) and having listeners judge them for similarity or appropriateness for a given purpose, such as appeasing a child. His results showed that contours analyzed as contrastive in the system of Trager and Smith (whom he particularly targeted) are perceived as similar by listeners, while others, analyzed as allophones of the same basic melody, are considered by listeners to be contrastive. Bolinger used his results to argue that a system with four levels can be at the same time too powerful and not powerful enough to capture contrastive and allophonic variations in the intonational system of English. His results led him to reject level tone analyses as untenable and to propose instead that melodies are gestalts. Bolinger’s critique reflects the assumptions of concreteness and bi-uniqueness prevalent at the time. It is clear, for example, that Bolinger expected the different levels to faithfully represent the entire course of an utterance’s F0, and to do so in such a way that the pitch range of each level did not overlap with that of others at any point in the utterance. Further, his comment that F0 forms “a continuous line that can be traced on a piece of paper” (Bolinger 1951: 206), coupled with his distinction between monotones, which he accepts, and level tones, which he does not (e.g. Bolinger 1986: 29), suggest that he expected a level-based representation to be phonetically realized as a series of sustained pitch levels. It is obvious that if these assumptions are adopted, a level-based analysis is unworkable on both phonological and phonetic grounds (on the latter, see Xu and Sun 2002; Dilley and Brown 2007). Although Bolinger’s critique was well accepted, it is fair to note that many of the assumptions he attributes to the structuralists are not found in their works. Pike (1945), Hockett (1955), and Trager (1961) all note that levels represent “only those points in the contour crucial to the establishment of its characteristic rises and falls” (Pike 1945: 26); with the exception of terminal junctures, these points associate with stressed syllables. Similarly, the structuralists noted that absolute pitch levels are not significant as such, and recognized the existence of both level tones and contours (e.g. Trager and Smith’s terminal juncture phonemes). Further, Pike (1945) discusses at length the fact that level tones need not be realized as a series of sustained pitches but can be realized as glides, especially when they are found close to each other, as happens in short utterances, for example. Nevertheless, as a result of configurationalist critiques, level-based analyses were largely abandoned in the following decades. 
Research within the generative framework focused primarily on the description of tone languages, and no strong theoretical position was taken in favor either of levels or of configurations with respect to intonation.


Intonational analyses by means of level tones adopting many of the principles of the early structuralist accounts appear again in the late 1970s, in Liberman (1975), Goldsmith (1976), and Leben (1976). Based on the idea of Leben (1973), who first conceptualized tones as distinct tonal segments rather than features of particular tone-bearing units (typically syllables or moras), Goldsmith represented tones as autosegments residing on a separate tier, and analyzed English intonational melodies as sequences of H and L level tones (chapter 14: autosegments). Liberman (1975), on the other hand, proposed an analysis of English intonation with the traditional four levels represented by means of two features, [±high] and [±low], though, informally, he also used autosegmental representations closer to those of Leben (note that [+high, +low] is possible in Liberman’s system, and represents a high-mid level). In addition to the obvious differences in terms of formalism and the overall conception of phonology, the early autosegmental models depart from those of the early structuralists in that they assume that all syllables in an utterance are associated with some tone (something that may be accomplished, e.g. by tone copying or spreading). However, as discussed in more detail in §3.2, the assumption that contours should be fully specified leads back to the problems that Bolinger (1951) first noted.
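The combinatorics of Liberman’s two-feature system can be sketched as follows (in Python); only the [+high, +low] = high-mid assignment is stated above, so the remaining three level assignments are illustrative assumptions rather than Liberman’s published values:

# A minimal sketch of Liberman's (1975) decomposition of the traditional
# four pitch levels into two binary features. Only the [+high, +low] =
# high-mid value is stated above; the other three assignments are
# illustrative assumptions.
LEVELS = {
    (True, False): "high",      # [+high, -low]
    (True, True): "high-mid",   # [+high, +low], as noted above
    (False, False): "mid",      # [-high, -low] (assumed)
    (False, True): "low",       # [-high, +low] (assumed)
}

def level(high: bool, low: bool) -> str:
    """Map a [high, low] feature bundle to one of the four levels."""
    return LEVELS[(high, low)]

print(level(True, True))   # -> high-mid
print(len(LEVELS))         # two binary features yield exactly four levels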

3.2

The autosegmental-metrical model of intonational phonology

A major breakthrough in our understanding of intonation came from Bruce’s (1977) dissertation, a phonetic investigation of Swedish tonal structure. Bruce showed that the difference between the two lexical pitch accents of Swedish, accent I and accent II, does not lie in the shape of the accent, which is a fall in both cases, but in its timing with respect to the accented syllable: for accent I the fall is timed early, and for accent II it is timed late. A corollary of this difference is that a large part of the fall is truncated when a syllable with accent I is utterance-initial, while it is fully present in words with accent II (giving rise to a peak preceding the fall). Furthermore, Bruce provided an explanation for the second peak of accent II words, the seemingly erratic presence of which had been a longstanding puzzle: he showed that this peak is not part of the lexical accent at all, but part of the utterance’s intonation, which he analyzed as a sequence of a sentence accent and a terminal juncture.

Several key points emerged from Bruce’s work. First, it demonstrated the importance of turning points, F0 minima and maxima which temporally align with particular elements of the segmental string. Bruce showed that these timing relations are regular in production and salient in perception. Crucially, they are also sufficient for modeling a contour without specifying the F0 in between (that is, by interpolating between salient points). Second, Bruce showed that an F0 contour can be composed of elements of different origins within the grammar: in Swedish, some parts of the contour, the pitch accents, are lexically specified, while others, the sentence accents and terminal junctures, are phrasal elements, the result of intonation and phrasing. Similar distinctions were hinted at in other models – e.g. in Trager and Smith’s terminal junctures, and in the prominence-lending vs. non-prominence-lending distinction between pitch movements espoused by the IPO – but they had not been clearly demonstrated before. Finally, Bruce showed that phrasal and lexical tonal elements simply concatenate, rather than forming two distinct systems superimposed on each other, and that this concatenation can result in lawful, context-dependent variation in the realization of tones.

The insights of the early autosegmentalists regarding the representation of tone, together with Bruce’s insights about the structure of tunes and their phonetic realization, were applied to the analysis of English intonation in Pierrehumbert’s dissertation (1980) (see chapter 116: sentential prominence in english for a more extensive discussion of intonational patterns in English). Her model was further developed, particularly in Beckman and Pierrehumbert (1986) and Pierrehumbert and Beckman (1988), into the model currently known as the autosegmental-metrical model of intonational phonology (a term coined by Ladd 1996). The autosegmental-metrical model (henceforth AM) has since been applied, with various modifications, to a series of unrelated languages with diverging prosodic systems (e.g. Jun 2005a; see also Arvaniti, forthcoming and D’Imperio, forthcoming for reviews).

In Pierrehumbert (1980), melodies are represented as strings of H and L tones on an autosegmental tier. Crucially, the purpose of this string of tones is not to trace or transcribe the course of F0, but rather to represent the linguistically significant parts of the melody; that is, intonational representations are underspecified (chapter 7: feature specification and underspecification). The tones associate with the segmental string indirectly, via associations with the metrical tree (in Pierrehumbert and Beckman 1988, this is formalized as association between tones and prosodic trees, specifically between tones on the one hand and feet and phrasal boundaries on the other). Thus, like Bolinger and Bruce, Pierrehumbert adopts the distinction between tones that associate with stressed syllables (i.e. the heads of feet) and tones that associate with the edges of phrasal constituents. The former, following Bolinger, are pitch accents, notated with an asterisk (e.g. H*, a notation first used by Goldsmith 1976); the latter, known as boundary tones, are notated with a percentage sign (e.g. H%). Pierrehumbert also noted that tunes included a pitch movement between the boundary tone and the preceding nuclear accent (by definition, the last pitch accent of an utterance, often referred to in other literature as sentence stress). Pierrehumbert analyzed these pitch movements between the nuclear accent and following boundary tone as floating tones, dubbed phrase accents. In Beckman and Pierrehumbert (1986), phrase accents are instead analyzed as phrasal tones that associate with the right edge of intermediate phrases (ips), a prosodic constituent larger than the prosodic word and smaller than the intonational phrase (IP), the nature of which is formalized in Pierrehumbert and Beckman (1988).

In Pierrehumbert (1980), the H and L tones are said to be realized as tonal targets, typically peaks and dips respectively, although the relationship between phonological tones and phonetic realization is not always transparent. First, phonological tones may not be realized as F0 minima or maxima; e.g. in Pierrehumbert’s original analysis the H*+L accent is not realized as a fall from a high to a low F0 level; rather, the L tone is said to trigger downstep on a following H tone. Realizations are also context dependent; e.g.
after an L- phrase accent, L% is realized as a drop to the bottom of the speaker’s range, but after a downstepped H- phrase accent it is realized as sustained mid-level pitch. The scaling of targets is computed on the fly, and is determined by metrical strength and context (see also Liberman and Pierrehumbert 1984; Pierrehumbert and Beckman 1988; Prieto et al. 1996). Furthermore, following Goldsmith (1976), Pierrehumbert assumed that tones co-occur (align) with the segmental material they are (indirectly) associated with: pitch accents align with associated stressed syllables, and boundary tones with phrase-final syllables; phrase accents, being floating tones, are realized in a less precise manner (for evidence that this view cannot fully account for tonal alignment cross-linguistically, see, inter alia, Arvaniti et al. 2000; Grice et al. 2000; Gussenhoven 2000).

Since Bruce (1977) and Pierrehumbert (1980), several phonetic studies have demonstrated the crucial role of local F0 minima and maxima in the production of intonation. Such tonal targets have been shown to be consistently aligned with the segmental string and to show lawful variation, based on a number of factors, such as speaking rate (Fougeron and Jun 1998; Prieto and Torreira 2007), phonological weight (Ladd et al. 2000), tonal crowding (Silverman and Pierrehumbert 1990; Prieto 2005; Arvaniti and Ladd 2009), and dialectal differences (Arvaniti and Garding 2007; Ladd et al. 2009). Results attesting to the regularity of turning points have been reported for languages with very different prosodic systems, including not only languages without lexical tone but also tone languages, e.g. Mandarin (Xu 1999), Kinyarwanda (Myers 2003), and Thai (Morén and Zsiga 2006), pitch accent languages, e.g. Roermond Dutch (Gussenhoven 2000), Basque (Hualde et al. 2002; Elordieta and Hualde 2003), and Serbian (Smiljanić 2006), and languages with hybrid systems, e.g. Chickasaw (Gordon 2008) – for a comprehensive review, see Arvaniti (forthcoming) and D’Imperio (forthcoming).1

The search for highly localized tonal targets is in part a corollary of one of the most important tenets of the AM model, namely that contours are underspecified not only phonologically, but phonetically as well: while H and L tones are realized as targets with specific scaling and alignment, the F0 between them is determined by interpolation. Underspecification was empirically documented by Pierrehumbert and Beckman (1988: ch. 2), who showed that the course of F0 between the initial phrasal H of unaccented accentual phrases in Japanese and the final L% of the phrase is a fall that varies in steepness depending on the number of moras between the two tones. This type of realization is incompatible with fully specified representations, whether such specifications are present from the beginning or determined at a later stage through tone spreading.2

1 Despite the overwhelming evidence in favor of level tones being realized as highly localized minima and maxima, it is clear that other realizations are also possible. First, in some cases turning points may be just indirect reflexes of tones that are convenient to identify and measure in a signal that presents as a continuous curve; for example, Arvaniti and Ladd (2009) show that the turning points that define the low-level stretch in the contours of Figure 32.2 are not necessarily targets themselves, but quite possibly the easy-to-measure outcome of the tune’s requirement for a low-level stretch, the reflex of the L- phrase accent. Results like these suggest that tones should not be posited as phonological entities simply because a turning point in an F0 contour has been noted. Second, tones may not be realized as instantaneous events (among many, see Pierrehumbert 1980; Silverman 1987; D’Imperio 2000; Arvaniti et al. 2006a; Knight and Nolan 2006; Arvaniti and Garding 2007). Thus, although it is convenient to measure tonal targets as F0 minima and maxima, tones may also have duration, a type of realization that may be perceptually enhancing (e.g. D’Imperio et al. 2000; D’Imperio et al. 2010).

2 Recent results of work by Barnes et al. (2010) indicate that F0 transitions from one tone to another can be perceptually relevant, and aid in the identification of particular pitch accents (at least in experimental settings in which transitions are the only information listeners have for tonal identification).
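The interpolation-based view of phonetic realization described above can be sketched as a small computation. The Hz values and mora-based timing below are invented for illustration (they are not taken from Pierrehumbert and Beckman 1988); the point is only that the same two targets yield falls of different steepness depending on the distance between them:

# A minimal sketch of phonetic underspecification: only the tonal targets
# are specified, and the F0 between them is derived by interpolation.

def f0_at(targets, t):
    """Linearly interpolate F0 at time t from sparse (time, f0) targets."""
    for (t1, f1), (t2, f2) in zip(targets, targets[1:]):
        if t1 <= t <= t2:
            return f1 + (f2 - f1) * (t - t1) / (t2 - t1)
    raise ValueError("t lies outside the specified targets")

# Phrasal H at the start of the phrase, L% at its end, nothing in between:
# the fall is steeper over 3 moras than over 6, with no per-mora tones.
short_phrase = [(0, 200.0), (3, 120.0)]   # H ... L% over 3 moras
long_phrase = [(0, 200.0), (6, 120.0)]    # H ... L% over 6 moras

print(round(f0_at(short_phrase, 1), 1))   # 173.3 Hz one mora in
print(round(f0_at(long_phrase, 1), 1))    # 186.7 Hz: a shallower fall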


The role of underspecification cannot be overstated: it leads to a clear understanding that the number of tones need not match the number of tone-bearing units (TBUs), so that both strings of tonally unspecified TBUs and instances of several tones associating with the same TBU are possible. Thus, in the AM framework, the two English contours in Figure 32.1 are both analyzed as L*+H L- H%. By using the same representation for these two contours, AM captures the fact that they are instantiations of the same melody, thereby generalizing beyond surface form. AM can also account for the systematic differences between contours like those in Figure 32.1, which, as mentioned earlier, were particularly problematic for the British school. In Me?!, all three tonal events must co-occur with the only syllable of this utterance; hence the obvious lengthening of me (720 msec) and the swift movement from one tonal target to the next. In A ballgown designer?!, L*+H is associated with the metrically strongest syllable in the utterance, i.e. ball, and it co-occurs with it (showing the peak delay expected for this accent; e.g. Pierrehumbert and Steele 1989; Arvaniti and Garding 2007). The H% is realized on the last syllable, which is the one showing a rise. The L-, which is associated with the ip boundary, spreads between the L*+H and H%, accounting for the fall and low-level stretch of F0 (for details on the realization of the L- in such contours, see Grice et al. 2000; Barnes et al. 2006). A similar analysis applies to the Greek wh-questions shown in Figure 32.2, analyzed as L*+H L- !H% (where !H refers to a downstepped H tone; Grice et al. 2000; Arvaniti and Baltazani 2005; Arvaniti and Ladd 2009).

Overall, the AM model avoids several pitfalls of previous analyses. First, by formally separating stress from intonation and providing a mechanism for their interaction, the AM model incorporates the insights of Bolinger about pitch accents without requiring distinct accentual and phrasal components to account for pitch contours. In addition, the use of only H and L tones avoids the problems noted by Bolinger (1951) with respect to level tones. At the same time, by treating the issue of pitch range as a matter of phonetic realization, AM avoids the problems that plagued the British analyses due to the confounding of linguistic and paralinguistic aspects of pitch range (cf. the disagreements regarding whether high falls and low falls are distinct entities). Further, by making explicit the separation between the phonetics and phonology of intonation, the AM model provides a principled account of the context-dependent variation of tones, a point that was not explicitly addressed in previous models, which mostly confounded contours and representations. Finally, the use of underspecification provides a parsimonious and elegant way of capturing both the similarities of melodies and the differences in phonetic realization that arise from the properties of the metrical structure with which a melody associates. In this way, the model can account for both local phonetic detail and abstract phonological form, something that configurational and full specification models cannot do (for extensive discussions of this point, see Pierrehumbert and Beckman 1988; Arvaniti et al. 2006a; Arvaniti and Ladd 2009). And since the degree of underspecification can vary, AM can account for languages with dense tonal specifications, such as Mandarin, as well as for languages with sparser specifications, such as English.
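A minimal sketch of this association pattern follows; the list-of-indices encoding and the treatment of L- spreading are illustrative assumptions for the example, not the chapter’s formal representation:

# The same underspecified tune, L*+H L- H%, associated with texts of
# different lengths (cf. "Me?!" vs. "A ballgown designer?!").

def associate(syllables, stressed):
    """Link L*+H to the stressed syllable, H% to the final syllable,
    and let L- fill the stretch from the accent to the end (a rough
    stand-in for the spreading of the L- described above)."""
    final = len(syllables) - 1
    return {
        "L*+H": [stressed],
        "L-": list(range(stressed, final + 1)),
        "H%": [final],
    }

print(associate(["me"], 0))
# {'L*+H': [0], 'L-': [0], 'H%': [0]}  -- three tones crowd one syllable
print(associate(["a", "ball", "gown", "de", "sign", "er"], 1))
# {'L*+H': [1], 'L-': [1, 2, 3, 4, 5], 'H%': [5]}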
In short, then, AM not only provides an answer regarding the nature of intonational primitives, but crucially addresses the even more fundamental question of what should be represented phonologically when it comes to intonation, an issue that most other theories, by focusing exclusively on faithful representations of entire F0 curves, have failed to tackle.


3.3


AM and meaning

AM has not grappled systematically with meaning, though several models of information structure have relied on AM analyses to understand the role of intonation, particularly with respect to focus marking (Steedman 2000; Büring 2007; among others). Although research on intonational meaning is not extensive, it so far suggests that the principles of AM are more likely to lead to an understanding of intonational meaning than configurational approaches, since AM analyses are compositional and thus in principle more flexible than configurational approaches in dealing with the complex relationship between intonational meaning and form both within and across languages.

Perhaps the best-known treatment of meaning within AM is Pierrehumbert and Hirschberg (1990), who developed a theory of intonational meaning specifically for English (the principles of which are, however, applicable to intonational systems at large). According to this model, each pitch accent, phrase accent, and boundary tone is seen as a morpheme that has its own pragmatic meaning; the meaning conveyed by an entire melody is therefore compositional and depends on contributions from all tones. Further, Pierrehumbert and Hirschberg suggest that each tone’s meaning is to be interpreted with respect to the phonological domain with which it is associated: specific lexical items in the case of pitch accents, ips in the case of phrase accents, and IPs in the case of boundary tones. The advantage of a system like this is that it can account for similarities in meaning conveyed by identical components of different melodies, such as the use of the same pitch accent with different phrasal tones, or the use of a given boundary tone with different pitch and phrase accents. In addition, since the system does not rely on a natural or biologically determined relationship between tones and meaning, it is more easily amenable to an understanding of cross-linguistic and cross-dialectal variation in the relationship between meaning and form.

Crucially, Pierrehumbert and Hirschberg take pains to explain that the meaning of a tune is not to be directly interpreted; e.g. H* L- L%, often used with declaratives, is not to be interpreted as “S (the speaker) believes x”; rather, the speaker’s belief in x will be inferred from all the tones that make up a tune and the context in which they are used. This conception of intonational meaning could account for the different interpretation of the same tune in That’s twenty dollars and That’s really awesome shown in Figure 32.3. Simplifying somewhat, the downstepped !H* accent on the second word (dollars, awesome) does not impart salience to the accented item (note that in the analysis of Pierrehumbert 1980, and that of Pierrehumbert and Hirschberg 1990, this accent is analyzed as H*+L); rather, it implies that this item should be inferable by the hearer. This is expected for dollars in a context in which dollars are the only currency in which purchases can be made; but when used with awesome the inferred predictability implies that the speaker is either ritualistically using an expression associated with excitement (and therefore is not excited) or deliberately avoiding the more plausible H* pitch accent in order to convey sarcasm. Despite the existence of this framework, it is fair to say that many aspects of intonational meaning remain unclear.
For example, Pierrehumbert and Hirschberg themselves note that similarities between the meanings of bitonal accents (L*+H and L+H*, and H*+L and H+L*) are evident in English, but not easy to account for in their model. Further, models similar to that developed by Pierrehumbert
and Hirschberg (1990) have not been developed to the same extent for other languages, so the framework has not been extensively tested for cross-linguistic validity. Research has also shown that certain tonal combinations are more frequent than others, both in English (e.g. Dainora 2001, 2006) and in other languages (e.g. Arvaniti and Baltazani 2005 for Greek). Observations of this sort abound and have given rise to hypotheses that perhaps intonational meaning depends on strings of intonational primitives rather than being strictly compositional; in this case, the importance of the melody from the nuclear accent onward (akin to the nucleus of the British school) is emphasized (for a review and discussion, see Gussenhoven 2004: ch. 7). More recently, Calhoun (2010) has provided a metrical explanation for the importance of the nuclear pitch accent, suggesting that the presence of prenuclear accents is metrically motivated and that they do not contribute to information structure, contra Pierrehumbert and Hirschberg’s position that all accents contribute to meaning (for arguments similar to Calhoun’s, see Büring 2007).

As even this brief discussion indicates, much remains to be done before the relationship between melodies and their contribution to information structure is fully understood. It is clear, however, that crude distinctions such as statement vs. question or focus vs. lack thereof are not sufficient to explain intonational meaning, and that a more sophisticated understanding, quite possibly following the main principles of the treatment provided in Pierrehumbert and Hirschberg (1990), is likely to be more successful.
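A minimal sketch of the compositional idea closes this section. The glosses below are loose, illustrative paraphrases (only the !H* gloss is drawn directly from the discussion above; the others are assumptions for the example, and the original proposals are far more nuanced):

# Composing a tune's pragmatic contribution tone by tone, in the spirit of
# the compositional view attributed above to Pierrehumbert and Hirschberg.

GLOSS = {
    "H*": "treat the accented item as new information",
    "!H*": "treat the accented item as inferable by the hearer",
    "L-": "interpret the intermediate phrase as a unit",
    "L%": "no forward reference to a following utterance",
    "H%": "interpret with respect to a following utterance",
}

def tune_meaning(tones):
    """Collect each tone's contribution; the hearer's final interpretation
    is then inferred from these contributions plus the context of use."""
    return [GLOSS.get(tone, "(no gloss assumed)") for tone in tones]

# An illustrative tune with a downstepped accent, as in "That's twenty
# dollars" / "That's really awesome":
for contribution in tune_meaning(["!H*", "L-", "L%"]):
    print(contribution)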

4

The phonetic realization of tones: Level or dynamic tones?

With the noted exceptions of Bolinger and the IPO, the choice between level and dynamic tones was not always explicitly motivated in the various models discussed here. It is of course easy to consider this a mundane issue: after all, as Bolinger (1986: 225) put it, “it makes no difference, in describing a movement, whether one says ‘first you are going to be up and then you are going to be down’ or ‘you are going to go down.’” Choosing the right answer, however, is neither trivial nor a matter of taste, as the answer has empirically testable consequences. This issue was addressed in Pierrehumbert and Beckman (1988: chs 2–4), who discuss extensively how the context-related scaling differences of tones they observed in Japanese cannot be elegantly accounted for in models using dynamic tones as primitives. Arvaniti et al. (1998) specifically compared the predictions of IPO to those of AM, by examining the realization of the rising pitch accents found in prenuclear position in Greek declaratives. They found that the timing, duration, and speed (rate of change) of these rises were not invariable, as advocated by the IPO, but depended on syllable duration. On the other hand, the timing and scaling of the onset and offset of the rise were held constant: the initial dip – interpreted as the reflex of an L tone – coincided with the onset of the accented syllable, while the peak – interpreted as the reflex of an H tone – was reached about 10 msec after the onset of the first post-accentual vowel. At the very minimum, the results of Arvaniti et al. (1998) suggest that the IPO notion of dynamic tones does not apply to all languages. More generally, they pose the question of how dynamic primitives such as rises and falls can be
determined if none of the properties that may define them is stable. Finally, it is not clear how the notion of an indivisible unit can be defended for the Greek accents at all, since the beginning and ending points of the rise do not behave as one. Their relative autonomy is demonstrated by the fact that they align independently of each other and are not similarly affected by tonal crowding, which typically results in the undershooting of the L, while the realization of the H remains largely unaltered (Arvaniti et al. 2000). This pattern is difficult to account for if the rise is a unit, in which case one would more plausibly expect a curtailment of the entire pitch movement.

It is thus clear that dynamic tones cannot account for some of the attested patterns. On the other hand, level tones can be used not only for the representation of loosely defined rises and falls, as in Greek, but also for rises and falls that are more closely knit. Such units have been reported by Frota (2002) for European Portuguese. Specifically, Frota found that in the falling accent indicating broad focus, the H and L are timed with respect to distinct segments (similarly to the Greek case), but the fall of the accent indicating narrow focus shows a constant timing relationship between the H and L tones (similar to that discussed by Pierrehumbert 1980 for L*+H and L+H* in English). This difference between the two accents of European Portuguese can be represented by means of a hierarchical representation of tones, shown in (1), as first proposed by Grice (1995a, 1995b) and adopted by Frota, or it can be treated as an issue of phonetic realization, as argued in Arvaniti et al. (2006b). Either way, it is clear that while level tones can adequately describe all attested tonal patterns, dynamic tones cannot. In short, then, both the empirical evidence and phonological considerations of parsimony and descriptive adequacy make a theory based on level tones preferable.

(1)

         PA                 PA
        /  \               /  \
       w    s             s    w
       |    |             |    |
       H    L*            H*   L

       (H+L*)             (H*+L)
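Before concluding, the instability of dynamic primitives discussed above can be illustrated with a small computation. The sketch below derives the duration and speed of a Greek-style prenuclear rise from fixed L and H anchors (L at the onset of the accented syllable, H roughly 10 msec past it, standing in for the first post-accentual vowel); all durations and Hz values are invented for illustration:

def rise(syllable_ms, l_hz=120.0, h_hz=180.0):
    """Derive duration (ms) and slope (Hz/ms) of a rise from its anchors,
    approximating the H anchor as 10 ms past the accented syllable."""
    duration = syllable_ms + 10.0
    return duration, (h_hz - l_hz) / duration

for syl in (150.0, 250.0):  # a short vs. a long accented syllable
    dur, slope = rise(syl)
    print(f"syllable {syl:.0f} ms -> rise {dur:.0f} ms, {slope:.2f} Hz/ms")
# The excursion and its anchoring stay constant under these assumptions;
# the duration and speed of the movement do not, echoing the findings of
# Arvaniti et al. (1998).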

5

Conclusion

The original debate about levels vs. configurations (Bolinger 1951) focused on the issue of whether melodies are gestalts or should be seen as composites of primitives. Independently of this distinction, Bolinger’s views, espoused by many before him and since, are based on the idea that intonational contours should be represented in their entirety, either as a series of primitives, or as a “line [. . .] on a piece of paper.” Current understanding suggests that couching the problem in these terms is misleading, as neither type of representation is likely to be correct: as shown, gestalt approaches cannot account in a satisfactory manner for either intonational meaning or intonational form; yet representations that fully specify the course of F0, either in terms of dynamic tones or in terms of levels, do not fare much better. Due to the particularities of intonation, especially the fact that its realization depends on the metrical structure of the utterance with
which it co-occurs, significant generalizations about melodies and their phonetic variation are best captured if it is recognized that only certain parts of F0 contours are linguistically relevant and should be represented phonologically and phonetically. Finally, empirical evidence as well as considerations of representational parsimony strongly suggest that these linguistically relevant aspects of F0 contours are best represented as levels rather than movements.

ACKNOWLEDGMENTS

I thank Christina Lee and Tara Boswell for providing the American English examples in Figures 32.1 and 32.3, respectively.

REFERENCES

Armstrong, Lilias E. & Ida Ward. 1926. Handbook of English intonation. Cambridge: Heffer. Arvaniti, Amalia. 2007. On the relationship between phonology and phonetics (or why phonetics is not phonology). In Jürgen Trouvain & William J. Barry (eds.) Proceedings of the 16th International Congress of Phonetic Sciences, 19–24. Saarbrücken: Saarland University. Arvaniti, Amalia. Forthcoming. Prosodic representations. (Part I: Segment-to-tone association). In Cohn et al. (forthcoming). Arvaniti, Amalia & Evangelia Adamou. Forthcoming. Focus expression in Romani. Proceedings of the West Coast Conference on Formal Linguistics 28. Arvaniti, Amalia & Mary Baltazani. 2005. Intonational analysis and prosodic annotation of Greek spoken corpora. In Jun (2005b), 84–117. Arvaniti, Amalia & Gina Garding. 2007. Dialectal variation in the rising accents of American English. In Jennifer Cole & José Ignacio Hualde (eds.) Laboratory phonology 9, 547–576. Berlin & New York: Mouton de Gruyter. Arvaniti, Amalia & D. Robert Ladd. 2009. Greek wh-questions and the phonology of intonation. Phonology 26. 43–74. Arvaniti, Amalia, D. Robert Ladd & Ineke Mennen. 1998. Stability of tonal alignment: The case of Greek prenuclear accents. Journal of Phonetics 26. 3–25. Arvaniti, Amalia, D. Robert Ladd & Ineke Mennen. 2000. What is a starred tone? Evidence from Greek. In Broe & Pierrehumbert (2000), 119–131. Arvaniti, Amalia, D. Robert Ladd & Ineke Mennen. 2006a. Phonetic effects of focus and “tonal crowding” in intonation: Evidence from Greek polar questions. Speech Communication 48. 667–696. Arvaniti, Amalia, D. Robert Ladd & Ineke Mennen. 2006b. Tonal association and tonal alignment: Evidence from Greek polar questions and contrastive statements. Language and Speech 49. 421–450. Baltazani, Mary. 2006. Focusing, prosodic phrasing, and hiatus resolution in Greek. In Goldstein et al. (2006), 473–493. Barnes, Jonathan, Stefanie Shattuck-Hufnagel, Alejna Brugos & Nanette Veilleux. 2006. The domain of realization of the L-phrase tone in American English. In Rüdiger Hoffmann & Hansjörg Mixdorff (eds.) Proceedings of Speech Prosody 2006. Dresden: TUDpress Verlag der Wissenschaften GmbH. Available (August 2010) at http://aune.lpl.univ-aix.fr/~sprosig/sp2006/contents/papers/PS3-11_0163.pdf.


Barnes, Jonathan, Nanette Veilleux, Alejna Brugos & Stefanie Shattuck-Hufnagel. 2010. The effect of global F0 contour shape on the perception of tonal timing contrasts in American English intonation. Proceedings of Speech Prosody 2010. Available (August 2010) at http://speechprosody2010.illinois.edu/papers/100445.pdf. Beckman, Mary E. & Janet B. Pierrehumbert. 1986. Intonational structure in Japanese and English. Phonology Yearbook 3. 255–309. Bishop, Judith & Janet Fletcher. 2005. Intonation in six dialects of Bininj Gun-wok. In Jun (2005b), 331–361. Bolinger, Dwight L. 1951. Intonation: Levels versus configurations. Word 7. 199–210. Bolinger, Dwight L. 1964. Around the edge of language: Intonation. Harvard Educational Review 34. 282–293. Reprinted in Bolinger (1972), 19–29. Bolinger, Dwight L. (ed.) 1972. Intonation: Selected readings. Harmondsworth: Penguin. Bolinger, Dwight L. 1986. Intonation and its parts: Melody in spoken English. London: Edward Arnold. Botinis, Antonis. 1989. Stress and prosodic structure in Greek: A phonological, acoustic, physiological and perceptual study. Lund: Lund University Press. Broe, Michael B. & Janet B. Pierrehumbert (eds.) 2000. Papers in laboratory phonology V: Acquisition and the lexicon. Cambridge: Cambridge University Press. Bruce, Gösta. 1977. Swedish word accents in sentence perspective. Lund: Gleerup. Bryant, Gregory A. & Jean E. Fox Tree. 2005. Is there an ironic tone of voice? Language and Speech 48. 257–277. Büring, Daniel. 2007. Intonation, semantics and information structure. In Gillian Ramchand & Charles Reiss (eds.) The Oxford handbook of linguistic interfaces, 445–473. Oxford: Oxford University Press. Calhoun, Sasha. 2010. The centrality of metrical structure in signaling information structure: A probabilistic perspective. Language 86. 1–42. Chen, Yiya. 2006. Durational adjustment under corrective focus in Standard Chinese. Journal of Phonetics 34. 176–201. Chen, Yiya. 2010. Post-focus F0 compression: Now you see it, now you don’t. Journal of Phonetics 38. 517–525. Cohen, Antonie & Johan ’t Hart. 1968. On the anatomy of intonation. Lingua 19. 177–192. Cohn, Abigail C., Cécile Fougeron & Marie Huffman (eds.) Forthcoming. The Oxford handbook of laboratory phonology. Oxford: Oxford University Press. Cooper, William E. & John Sorensen. 1981. Fundamental frequency in sentence production. Heidelberg: Springer. Crystal, David. 1972. The intonation system of English. In Bolinger (1972), 110–136. Dainora, Audra. 2001. An empirically based probabilistic model of intonation in American English. Ph.D. dissertation, University of Chicago. Dainora, Audra. 2006. Modeling intonation in English: A probabilistic approach to phonological competence. In Goldstein et al. (2006), 107–132. de Jong, Kenneth J. 1995. The supraglottal articulation of prominence in English: Linguistic stress as localized hyperarticulation. Journal of the Acoustical Society of America 97. 491–504. Dilley, Laura C. & Meredith Brown. 2007. Effects of pitch range variation on f0 extrema in an imitation task. Journal of Phonetics 35. 523–551. D’Imperio, Mariapaola. 2000. The role of perception in defining tonal targets and their alignment. Ph.D. dissertation, Ohio State University. D’Imperio, Mariapaola. Forthcoming. Prosodic representations. (Part II: Tonal alignment). In Cohn et al. (forthcoming). D’Imperio, Mariapaola & David House. 1997. Perception of questions and statements in Neapolitan Italian. 
Proceedings of the 5th European Conference on Speech Communication and Technology (Eurospeech 1997), vol. 1, 251–254. Rhodes, Greece.


D’Imperio, Mariapaola, Jacques Terken & Michel Pitermann. 2000. Perceived tone “targets” and pitch accent identification in Italian. In Michael Barlow (ed.) Proceedings of the 8th International Conference on Speech Science and Technology, 206–211. Canberra: Australian Speech Science and Technology Association. D’Imperio, Mariapaola, Barbara Gili Fivela & Oliver Niebuhr. 2010. Alignment perception of high intonational plateaux in Italian and German. Proceedings of Speech Prosody 2010. Available (August 2010) at http://speechprosody2010.illinois.edu/papers/100186.pdf. Elordieta, Gorka & José Ignacio Hualde. 2003. Tonal and durational correlates of accent in contexts of downstep in Lekeitio Basque. Journal of the International Phonetic Association 33. 195–209. Fletcher, Janet, Esther Grabe & Paul Warren. 2005. Intonational variation in four dialects of English: The high rising tune. In Jun (2005b), 390–409. Fougeron, Cécile & Sun-Ah Jun. 1998. Rate effects on French intonation: Prosodic organization and phonetic realization. Journal of Phonetics 26. 45–69. Frota, Sónia. 2002. Tonal association and target alignment in European Portuguese nuclear falls. In Gussenhoven & Warner (2002), 387–418. Fujisaki, Hiroya. 1983. Dynamic characteristics of voice fundamental frequency in speech and singing. In Peter F. MacNeilage (ed.) The production of speech, 39–55. New York: Springer. Fujisaki, Hiroya. 2004. Information, prosody and modeling – with emphasis on tonal features of speech. In Bernard Bel & Isabelle Marlien (eds.) Speech Prosody 2004. Available (August 2010) at www.isca-speech.org/archive/sp2004/sp04_001.pdf. Fujisaki, Hiroya, Sumio Ohno & Takashi Yagi. 1997. Analysis and modelling of fundamental frequency contours of Greek utterances. Proceedings of the 5th European Conference on Speech Communication and Technology (Eurospeech 1997), vol. 2, 465–468. Rhodes, Greece. Fujisaki, Hiroya, Changfu Wang, Sumio Ohno & Wentao Gu. 2005. Analysis and synthesis of fundamental frequency contours of Standard Chinese using the command–response model. Speech Communication 47. 59–70. Gårding, Eva. 1983. A generative model of intonation. In Anne Cutler & D. Robert Ladd (eds.) Prosody: Models and measurements, 11–25. Berlin: Springer. Gårding, Eva. 1987. Speech act and tonal pattern in standard Chinese: Constancy and variation. Phonetica 44. 13–29. Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT. Published 1979, New York: Garland. Goldstein, Louis, Douglas Whalen & Catherine T. Best (eds.) 2006. Laboratory phonology 8. Berlin & New York: Mouton de Gruyter. Gooden, Shelome, Kath-Ann Drayton & Mary Beckman. 2009. Tone inventories and tune-text alignments: Prosodic variation in “hybrid” prosodic systems. Studies in Language 33. 354–394. Gordon, Matthew. 2005. Intonational phonology of Chickasaw. In Jun (2005b), 301–330. Gordon, Matthew. 2008. Pitch accent timing and scaling in Chickasaw. Journal of Phonetics 36. 521–535. Grabe, Esther, Brechtje Post, Francis Nolan & Kimberley Farrar. 2000. Pitch accent realization in four varieties of British English. Journal of Phonetics 28. 161–185. Grabe, Esther, Greg Kochanski & John Coleman. 2003. Quantitative modelling of intonational variation. Proceedings of Speech Analysis and Recognition in Technology, Linguistics and Medicine 2003. Available (August 2010) at www.phon.ox.ac.uk/oxigen/publications.php. Grice, Martine. 1995a. The intonation of interrogation in Palermo Italian: Implications for intonation theory. Tübingen: Niemeyer.
Grice, Martine. 1995b. Leading tones and downstep in English. Phonology 12. 183–233.


Grice, Martine, D. Robert Ladd & Amalia Arvaniti. 2000. On the place of phrase accents in intonational phonology. Phonology 17. 143–185. Gu, Wentao, Keikichi Hirose & Hiroya Fujisaki. 2007. Analysis of tones in Cantonese speech based on the command–response model. Phonetica 64. 29–62. Gussenhoven, Carlos. 1999. Discreteness and gradience in intonational contrasts. Language and Speech 42. 283–305. Gussenhoven, Carlos. 2000. The boundary tones are coming: On the non-peripheral realization of boundary tones. In Broe & Pierrehumbert (2000), 132–151. Gussenhoven, Carlos. 2004. The phonology of tone and intonation. Cambridge: Cambridge University Press. Gussenhoven, Carlos & Natasha Warner (eds.) 2002. Laboratory phonology 7. Berlin & New York: Mouton de Gruyter. Halliday, M. A. K. 1967. Intonation and grammar in British English. The Hague: Mouton. Halliday, M. A. K. 1970. A course in spoken English: Intonation. Oxford: Oxford University Press. Harrington, Jonathan, Janet Fletcher & Mary E. Beckman. 2000. Manner and place conflicts in the articulation of accent in Australian English. In Broe & Pierrehumbert (2000), 40–51. Hart, Johan ’t & Antonie Cohen. 1973. Intonation by rule: A perceptual quest. Journal of Phonetics 1. 309–327. Hart, Johan ’t & René Collier. 1975. Integrating different levels of intonation analysis. Journal of Phonetics 3. 235–255. Hart, Johan ’t, René Collier & Antonie Cohen. 1990. A perceptual study of intonation: An experimental-phonetic approach to speech melody. Cambridge: Cambridge University Press. Hayes, Bruce & Aditi Lahiri. 1991. Bengali intonational phonology. Natural Language and Linguistic Theory 9. 47–96. Hirschberg, Julia & Gregory Ward. 1992. The influence of pitch range, duration, amplitude and spectral features on the interpretation of the rise–fall–rise intonation contour in English. Journal of Phonetics 20. 241–251. Hirst, Daniel & Albert Di Cristo. 1998. A survey of intonation systems. In Daniel Hirst & Albert Di Cristo (eds.) Intonation systems: A survey of twenty languages, 1–44. Cambridge: Cambridge University Press. Hirst, Daniel, Albert Di Cristo & Robert Espesser. 2000. Levels of representation and levels of analysis for the description of intonation systems. In Merle Horne (ed.) Prosody: Theory and experiment, 51–87. Dordrecht: Kluwer. Hockett, Charles F. 1955. A manual of phonology. Baltimore: Waverly Press. Hualde, José Ignacio, Gorka Elordieta, Iñaki Gaminde & Rajka Smiljanić. 2002. From pitch accent to stress accent in Basque. In Gussenhoven & Warner (2002), 547–584. Jones, Daniel. 1972. An outline of English phonetics. 9th edn. Cambridge: Heffer & Sons. Jun, Sun-Ah. 2005a. Korean intonational phonology and prosodic transcription. In Jun (2005b), 201–229. Jun, Sun-Ah (ed.) 2005b. Prosodic typology: The phonology of intonation and phrasing. Oxford: Oxford University Press. Kingdon, Roger. 1958. The groundwork of English intonation. London: Longmans, Green & Co. Knight, Rachael-Anne & Francis Nolan. 2006. The effect of pitch span on intonational plateau. Journal of the International Phonetic Association 36. 21–38. Ladd, D. Robert. 1980. The structure of intonational meaning: Evidence from English. Bloomington: Indiana University Press. Ladd, D. Robert. 1996. Intonational phonology. Cambridge: Cambridge University Press. Ladd, D. Robert. 2008. Intonational phonology. 2nd edn. Cambridge: Cambridge University Press.


Ladd, D. Robert, Ineke Mennen & Astrid Schepman. 2000. Phonological conditioning of peak alignment in rising pitch accents in Dutch. Journal of the Acoustical Society of America 107. 2685–2696. Ladd, D. Robert, Astrid Schepman, Laurence White, Louise May Quarmby & Rebekah Stackhouse. 2009. Structural and dialectal effects on pitch peak alignment in two varieties of British English. Journal of Phonetics 37. 145–161. Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT. Leben, William R. 1976. The tones of English intonation. Linguistic Analysis 2. 69–107. Lehiste, Ilse. 1970. Suprasegmentals. Cambridge, MA: MIT Press. Liberman, Mark. 1975. The intonational system of English. Ph.D. dissertation, MIT. Liberman, Mark & Janet B. Pierrehumbert. 1984. Intonational invariance under changes in pitch range and length. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 157–233. Cambridge, MA: MIT Press. Morén, Bruce & Elizabeth C. Zsiga. 2006. The lexical and post-lexical phonology of Thai tones. Natural Language and Linguistic Theory 24. 113–178. Myers, Scott. 2003. F0 timing in Kinyarwanda. Phonetica 60. 71–97. O’Connor, J. D. & G. F. Arnold. 1973. Intonation of colloquial English. London: Longman. Ohala, John J. 1983. Cross-language view of pitch: An ethological view. Phonetica 40. 1–18. Ohala, John J. & William G. Ewan. 1973. Speed of pitch change. Journal of the Acoustical Society of America 53. 345. Pan, Ho-Hsien. 2008. Focus and Taiwanese unchecked tones. In Chungmin Lee, Matthew Gordon & Daniel Büring (eds.) Topic and focus: Cross-linguistic perspectives on meaning and intonation, 195–213. Dordrecht: Springer. Pierrehumbert, Janet B. 1980. The phonology and phonetics of English intonation. Ph.D. dissertation, MIT. Pierrehumbert, Janet B. & Mary E. Beckman. 1988. Japanese tone structure. Cambridge, MA: MIT Press. Pierrehumbert, Janet B. & Julia Hirschberg. 1990. The meaning of intonational contours in the interpretation of discourse. In Philip R. Cohen, Jerry Morgan & Martha E. Pollack (eds.) Intentions in communication, 271–311. Cambridge, MA: MIT Press. Pierrehumbert, Janet B. & Shirley Steele. 1989. Categories of tonal alignment in English. Phonetica 46. 181–196. Pike, Kenneth L. 1945. The intonation of American English. Ann Arbor: University of Michigan Press. Prieto, Pilar. 2005. Stability effects in tonal clash contexts in Catalan. Journal of Phonetics 33. 215–242. Prieto, Pilar & Francisco Torreira. 2007. The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics 35. 473–500. Prieto, Pilar, Chilin Shih & Holy Nibert. 1996. Pitch downtrend in Spanish. Journal of Phonetics 24. 445–473. Redi, Laura. 2003. Categorical effects in the production of pitch contours in English. In M. J. Solé, D. Recasens & J. Romero (eds.) Proceedings of the 15th International Congress of Phonetic Sciences, 2921–2924. Barcelona: Causal Productions. Rialland, Annie. 2007. Question prosody: An African perspective. In Carlos Gussenhoven & Tomas Riad (eds.) Tones and tunes, vol. 1: Typological studies in word and sentence prosody, 35–62. Berlin & New York: Mouton de Gruyter. Rietveld, Toni & Carlos Gussenhoven. 1995. Aligning pitch targets in speech synthesis: Effects of syllable structure. Journal of Phonetics 23. 375–385. Silverman, Kim E. A. 1987. The structure and processing of fundamental frequency contours. Ph.D. dissertation, Cambridge University.


Silverman, Kim E. A. & Janet B. Pierrehumbert. 1990. The timing of prenuclear high accents in English. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 72–106. Cambridge: Cambridge University Press. Smiljanić, Rajka. 2006. Early vs. late focus: Pitch-peak alignment in two dialects of Serbian and Croatian. In Goldstein et al. (2006), 494–518. Steedman, Mark. 2000. Information structure and the syntax–phonology interface. Linguistic Inquiry 31. 649–689. Sundberg, Johan. 1979. Maximum speed of pitch changes in singers and untrained subjects. Journal of Phonetics 7. 71–79. Swerts, Marc, Emiel Krahmer & Cinzia Avesani. 2002. Prosodic marking of information status in Dutch and Italian: A comparative analysis. Journal of Phonetics 30. 629–654. Thorsen, Nina. 1980. A study of the perception of sentence intonation: Evidence from Danish. Journal of the Acoustical Society of America 67. 1014–1030. Thorsen, Nina. 1985. Intonation and text in Standard Danish. Journal of the Acoustical Society of America 77. 1205–1216. Thorsen, Nina. 1986. Sentence intonation in textual context: Supplementary data. Journal of the Acoustical Society of America 80. 1041–1047. Trager, George L. 1961. The intonation system of American English. In D. Abercrombie, D. B. Fry, P. A. D. MacCarthy, N. C. Scott & J. L. M. Trim (eds.) In honour of Daniel Jones: Papers contributed on the occasion of his eightieth birthday, 266–270. London: Longmans, Green & Co. Reprinted in Bolinger (1972), 83–86. Trager, George L. & Henry L. Smith. 1951. An outline of English structure. Norman, OK: Battenburg Press. Venditti, Jennifer, Kikuo Maekawa & Mary E. Beckman. 2008. Prominence marking in the Japanese intonation system. In Shigeru Miyagawa & Mamoru Saito (eds.) The Oxford handbook of Japanese linguistics, 456–512. Oxford: Oxford University Press. Ward, Gregory & Julia Hirschberg. 1985. Implicating uncertainty: The pragmatics of fall–rise intonation. Language 61. 747–776. Xu, Yi. 1999. Effects of tone and focus on the formation and alignment of F0 contours. Journal of Phonetics 27. 55–105. Xu, Yi. 2005. Speech melody as articulatorily implemented communicative functions. Speech Communication 46. 220–251. Xu, Yi & Xuejing Sun. 2002. Maximum speed of pitch change and how it may relate to speech. Journal of the Acoustical Society of America 111. 1399–1413.

33

Syllable-internal Structure

Anna R. K. Bosch

There is no simple discovery procedure for determining phonological syllable structure (which, like phonological representations in general, may not be in a one-to-one relationship with systematic phonetic syllabification, and which may not necessarily conform to native speaker intuitions about syllable division). The nature of the mechanism which assigns syllabification (defines possible syllables) for a given language is an empirical hypothesis, whose confirmation depends on the extent to which linguistically significant generalizations can be expressed under it (Feinstein 1979: 255).

1

Introduction

Although the syllable has been used by generations of linguists both in language description and in phonological theory, it is still surprising to see the variety of opinions and arguments on the topic. As Einar Haugen expostulated as early as 1956, “everyone talks about syllables, but no one seems to do anything about defining them” (Haugen 1956: 196). Since that time we have had fifty years or more of attempts to outline and define the syllable and its constituents, perhaps without coming much closer to solid agreement. Although the terms syllable, onset, rhyme, nucleus, and coda remain in common usage among phonologists, we cannot yet point to invariant acoustic or articulatory evidence for these constituents. As Haugen continues, “the only real basis for assuming their existence is that speakers of the language can utter them separately, dividing utterances into sequences that seem natural when pronounced alone.” We point to this evidence again and again as certain proof that there is “something” called the syllable; perhaps the everyday linguistic knowledge of the speaker is the single constant throughout phonological research on syllable structure. And yet, as Feinstein emphasizes in the quote appended above, we also draw a certain distinction between the “speaker’s syllable” and the “phonological syllable.” The phonological syllable is defined empirically by “linguistically significant generalizations,” while the speaker’s syllable is defined simply and automatically (when the decision is indeed simple and automatic) in careful speech, or by a number of – also empirical – experimental methods exploring external evidence, such as language games and other speaker behavior (chapter 96: experimental approaches in theoretical phonology).


As Goldsmith (1990) points out, there are at least two competing, or perhaps parallel, views of the syllable that have influenced phonological theory over the past century or more: just as light can behave as both wave and particle, the syllable too has been shown to demonstrate both a wave-like property based on sonority (chapter 49: sonority), and a piece-like, or chunk-like, division into smaller constituents, such as onset and rhyme. While there are still good reasons to hold on to a wave-shaped understanding of the syllable, defining the syllable and its properties with reference to the peaks and valleys of sonority shaping each one, the present chapter will nonetheless focus exclusively on theories of the syllable which specifically address the question of constituent structure. The question of syllable structure can be understood as a question about the nature of linguistic representation (cf. Anderson 1985); as Anderson claims, “most of the history of twentieth-century phonology is the history of theories of representations, devoted to questions such as ‘What is the nature of the phoneme, morphophoneme, morpheme, etc?’ ” (Anderson 1985: 9), to which we might add the question that concerns us here: “What is the nature of the syllable?” This chapter begins with a brief historical sketch tracing the early arguments in favor of different representations of syllable structure, followed by an overview of different models of the internal structure of the syllable. A final section reviews the conclusions of experimental studies as they adduce evidence for or against internal constituents of the syllable. (See also chapter 109: polish syllable structure; chapter 115: chinese syllable structure; chapter 56: sign syllables.)

2

Early twentieth-century discussions of syllable-internal structure

Early twentieth-century linguists asked themselves the same question, “what is the nature of the syllable?” Saussure proposes an impressionistic account of the syllable as composed of a succession of explosive and implosive articulatory movements; while all speech consists of an alternating series of these movements, the syllable boundary itself is marked by “the passage from an implosion to an explosion in a chain of sounds” (Saussure 1922). Arguing for the syllable as a unit of phonology, he reasons that “the regular coincidence of a mechanical principle and a definite acoustical effect assures the implosive-explosive combination of a right to existence in phonology” (1922: 57). Saussure goes so far as to claim that these opening and closing articulatory motions, which he distinguishes from acoustic sonority, are themselves the irreducible units of the syllable; further, a close examination of Saussure’s diagrams demonstrates that he considers the vocalic peak to form a part of the implosion, hinting at something like a rhyme: “Whenever a particular phoneme is more open than the following one, the impression of continuity persists; . . . an implosive link, like an explosive one, obviously can include more than two elements if each has wider aperture than the following one” (Saussure 1922: 56).

Saussure’s footnote demonstrating the syllabification of the word particularly is noteworthy, presenting an early sketch of an onset-rhyme division within the syllable: a vowel and following tautosyllabic consonant are both “implosive,” according to Saussure’s terminology, while the prevocalic consonants are “explosive.” (Superscript arrow-heads indicating explosion [<] and implosion [>] are placed immediately above each alphabetic graph in Saussure’s text.)

(1)
      < > >   < > >   < >   < > >   < >
     [p a r   t i k   i u   l a r   l i]

Kuryłowicz develops a notion of syllable structure which was clearly influenced by Saussure, referring to “the initial (explosive) consonant group and the final (implosive) consonant group as they relate to the vocalic center”1 (Kuryłowicz 1949). Here Kuryłowicz quite explicitly associates the structure of both “semantic” and “phonic” systems, presenting a “table of correspondence,” or equivalence, which presages hierarchical structure within the syllable, as in (2), in parallel with the semantic functions of subject, predicate, etc. (2)

Table of correspondence (Kuryłowicz 1949, reprinted in Hamp et al. 1966: 230, translated A. R. K. Bosch)

    semantic system            phonic system
    proposition                syllable
    predicate                  vowel
    subject                    initial consonant group
    additional information     final consonant group, etc.

That is, just as a proposition consists of subject, predicate, and additional information, a syllable can be seen to consist of initial consonant group, vowel, and final consonant group. Peak and coda are grouped into a constituent in Kuryłowicz (1948), on the basis of co-occurrence restrictions which are found between peak and coda, but not between onset and peak. Although the term “vocalic peak” is already in use by the time of Saussure’s writing, Selkirk (1982) credits Hockett (1955) with the terms “onset” and “coda”; the use of “rhyme” to refer to the conjunction of peak and coda is attributed to Fudge (1969), although of course the informal usage of this term to describe poetic form dates from the seventeenth century.2

1 “le groupe consonantique initial (explosif) et le groupe consonantique final (implosif) par rapport au centre vocalique” [translation A. R. K. Bosch].

2 The Oxford English Dictionary cites Samuel Butler (1663), “For Rhime the Rudder is of Verses, With which like Ships they stear their courses.”

3

Evidence for constituents within the syllable

Reviewing arguments for syllable-internal structure from Saussure onward, we find that evidence for structure within the syllable is typically modeled on evidence for the syllable itself; in a comprehensive overview of syllable theories, Blevins (1995) outlines four traditional arguments in favor of the syllable itself as a constituent. She notes that (a) the syllable has been employed as the domain within which phonological processes or constraints may apply; (b) the syllable edge is identified as the locus for the application of processes or constraints; (c) the syllable itself may be picked out as a “target structure,” e.g. for the application of language games or for the assignment of stress or tone; and finally (d) field linguists recount that native speakers can express intuitions regarding
the number of syllables per word or utterance. So, for example, nasalization may spread within a syllable (a); a syllable-final consonant may be devoiced (b); syllables may be independently manipulated in language games such as the French Verlan (c); finally, field linguists commonly report formal and informal studies of speakers who are easily able to count syllables, or who pause between syllables when exaggerating slow and careful speech (d). All these are common examples of the utility of the constituent “syllable”; however, not all of these arguments provide evidence for sub-syllabic constituents. Upon closer examination, only (a) and (c) usefully apply in evaluating syllable-internal constituents. First, as argued in (a), the constituents onset or rhyme have been argued to serve as phonological domains: Davis (1992) argues from Italian that the choice of the article (il or lo) depends on the constituent structure of the following onset (chapter 55: onsets; chapter 38: the representation of sc clusters). And the constituent structure of the rhyme – short vowel, long vowel, or vowel + consonant – may play a crucial role in stress assignment in quantity-sensitive languages. In addition (as argued in (c)), the separate constituents onset and rhyme may be singled out as “target structures” for the application of language games. In the American children’s game “ubby dubby,” popularized by the 1970s television show Zoom!, the sequence [əb] is inserted between each onset and rhyme: hello becomes [həbeləboʊ]. Numerous language games play on the identification of onset and rhyme as target structures, and studies suggest that speech errors may operate on onset and rhyme sequences as single units (see discussion of experimental evidence, below). Thus evidence for sub-syllabic constituents derives primarily from data suggesting that onset and rhyme function as phonological domains or as target structures for other linguistic behavior.

However, when we return to examine common evidence for the syllable as a constituent, parallel arguments for syllable-internal structure are not as convincing. Evidence such as (b) that refers to syllable edges as targets (e.g. devoicing a syllable-final obstruent; chapter 69: final devoicing and final laryngeal neutralization) in fact says nothing about syllable-internal structure per se: a syllable boundary, without reference to constituent structure, could identify this position (see Steriade 1999, for example). And finally, evidence in (d) from slow or careful speech by native speakers may provide clues as to syllable count or syllable boundaries, but generally provides little insight into sub-syllabic constituents, without additional manipulations such as we find in studies of language games, etc.

Nonetheless, we frequently uncover parallels between discussions of syllable structure and discussions of syllable-internal structure. This is made explicit within the framework of prosodic phonology (Selkirk 1982; Nespor and Vogel 1986): here the internal structure of the syllable is seen as a natural extension of the higher-level prosodic structure, to which the syllable naturally belongs (see chapter 40: the foot; chapter 51: the phonological word; chapter 84: clitics; chapter 50: tonal alignment).
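The onset–rhyme insertion at work in “ubby dubby” can be sketched as a small procedure. The toy syllabification of hello and the first-vowel onset/rhyme split below are assumptions made for the example, not a serious analysis of English phonotactics:

VOWELS = set("aeiouæɑɔəʊ")

def split_onset_rhyme(syllable):
    """Split a syllable at its first vowel into (onset, rhyme)."""
    for i, segment in enumerate(syllable):
        if segment in VOWELS:
            return syllable[:i], syllable[i:]
    return syllable, ""  # no vowel found: treat it all as onset

def ubby_dubby(syllables, infix="əb"):
    """Insert the infix between each syllable's onset and rhyme."""
    return "".join(onset + infix + rhyme
                   for onset, rhyme in map(split_onset_rhyme, syllables))

print(ubby_dubby(["he", "loʊ"]))  # -> "həbeləboʊ"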
After laying out arguments for the syllable as a constituent, Selkirk goes on to conclude that:

The same three reasons leading to the postulation of the syllable can be shown to motivate the existence of privileged groupings of segments within the syllable which must be thought of as constituent-like linguistic units themselves . . . an internally-structured tree quite analogous to a tree representing syntactic structure. (Selkirk 1982: 237)

3.1 “Flat” models of syllable-internal structure

Various models of syllable-internal structure have been proposed over the past century of linguistic study, from an entirely flat structure consisting primarily of syllable boundary markers to a more highly articulated hierarchical structure. Kahn (1980), for example, proposes the simplest two-tier flat structure, consisting of syllable nodes (σ1, σ2, etc.) on one tier, associated directly with the segments of phonetic (or phonological) representation, as in (3).

(3) No internal constituent structure (e.g. Kahn 1980)

         σ1           σ2
        /  \        / | \
       æ    t      l  ə  s

              atlas

For Kahn, the discrete segments are “associated” with the syllable node, and among his syllable-building principles is one akin to the no-crossing constraint of Goldsmith’s (1976) autosegmental phonology; “given the way the term ‘syllable’ is understood, it would seem nonsensical to speak of discontinuous syllables” (Kahn 1980: 36). Kahn explicitly cites Goldsmith manuscripts from 1974 and 1975, and in a footnote outlines his claim that he himself is working in an autosegmental framework, “because all theories of the syllable, including my own, are ‘autosegmental’ in that they involve parallel analyses of phonological material into (traditional) segments and syllables” (Kahn 1980: 61; see also chapter 14: autosegments). A contemporary version of Kahn’s flat structure is echoed in a recent textbook, Hayes (2009). Hayes takes a non-committal position on the internal structure of the syllable: while he prefers constituent (tree) structure to the simple use of boundary symbols to identify syllables, he makes no claim about constituency within the syllable. Introducing the terms onset, coda, and nucleus, he explains that:

In some theories, the onset, nucleus, and coda are described as constituents (they are daughters of the syllable node σ, and dominate segments). This book will use “onset,” “nucleus,” and “coda” merely as useful descriptive terminology. (Hayes 2009: 251)

In diagrams throughout this textbook, as in Kahn (1980), segments are dominated directly by the syllable node itself, without intervening structure. A related flat structure with an intervening CV tier is proposed by Clements and Keyser (1983), as in (4) (see also chapter 54: the skeleton).

(4) Syllable with intervening CV tier (e.g. Clements and Keyser 1983)

            σ
          / | \
         C  V  C
         |  |  |
         k  æ  t

           cat

Among the options that do incorporate some representation of internal structure, however, ternary branching structure represents perhaps the simplest option, incorporating the nodes onset, peak, and coda, as in (5). The primary distinction between the examples in (3) and (5) lies in the use of the constituents onset, peak, and coda to function as hierarchical nodes of the syllable in the latter example; more than one segment may occur within a single constituent in (5), for example.

(5) Ternary branching, with internal structure (e.g. Hockett 1955)

             σ
          /  |  \
      Onset Peak Coda

While Kahn’s flat structure in (3) appears comparable to the ternary branching flat structure in (5), as employed by Hockett (1955) and others, the differences are considerable. Hockett presents an internal structure to the syllable, labeling the constituents onset, peak, and coda (he leaves the door open to other types of internal structure as well). In contrast to Hockett, Kahn simply refers to syllable-initial position, or syllable-final position, when stating phonotactic constraints defined by syllable positions. Thus, for Kahn, the [p] in support [sə.pʰort] is syllable-initial; we know this because the syllable-initial voiceless stop is aspirated [pʰ]. In contrast, the unaspirated [p] in asparagus is not syllable-initial, since the [s] is the first consonant of the syllable in the case of the vegetable: [ə.spæ.rə.gəs] (Kahn 1980: 73). The aspiration of voiceless stops in English is precisely the type of evidence Kahn requires, as the syllable boundary alone can provide the environment for aspiration. An analysis which allows for internal constituent structure, such as that in (5), would place the [p] of asparagus in the syllable onset, of course, even if it is not syllable-initial (see also chapter 38: the representation of sc clusters). While Hockett proposes the syllable structure in (5), he also allows for a wide variety of different syllable models, suggesting in a “survey of syllable types” (Hockett 1955: 51ff.) that languages may employ different syllables and differing syllable “systems” according to the requirements of each system. He cites Bella Coola as an example of a language demonstrating the “onset type,” for example, since “while every (or almost every) syllable has a distinctive onset, many syllables contain no other syllable-element distinctively” (Hockett 1955: 57). As American linguists of the mid-twentieth century developed a taxonomic or distributional approach to language categories, the informal use of the syllable became commonplace as one means by which the distribution of phonemes could be expressed most economically;3 this was and remains today perhaps the primary use of the syllable as a unit in phonological theory and description. The constituents onset and coda played such a role in Pike and Pike’s (1947) analysis of Mazateco, demonstrating what Selkirk calls the “immediate constituent” principle of phonotactics: that the constraints that hold between positions within the syllable are more tightly bound when those positions themselves form a constituent. Thus the constituents onset, peak, and coda – for Pike and Pike – are precisely those constituents within which we can describe such constraints. Similarly, Pike argues here and elsewhere (Pike and Pike 1947; Pike 1975a, 1975b) that phonological processes will refer to syllable constituents, rather than to individual segments; thus pitch and stress in Mazateco are defined with reference to the syllable peak. For example, in Mazateco contrastive tone is “limited exclusively” to the nucleus of the syllable; while tone will spread within the nucleus, it never spreads to the consonants in the “margins” of the syllable. Haugen’s (1956) analysis of the syllable in Kutenai details possible initial and final consonant clusters in that language, and examines whether medial clusters (the “interlude”) can properly be determined on the basis of permitted onset and coda clusters alone. Rand’s (1968) analysis of Alabaman syllables is specific in the disclaimer that “the syllable in Alabaman cannot be identified by any physical boundary feature.” Indeed, making use of both the wave model and hierarchical structure, he first defines the syllable in terms of sonority: there are as many syllables as there are peaks; “the determination of what occurs as peak is based on phonetics.” Later, however, he relies on hierarchical structure: “Stated another way, the syllable is an endocentric construction of two layers of immediate constituents, head (the vowel) and satellite (the consonant)” (1968: 97).

3 Compare Fischer-Jørgensen’s comment: “it may nevertheless be due to Bloomfield’s influence that most American linguists, even in short phonemic descriptions (such as the numerous descriptions of American Indian languages in the International Journal of American Linguistics), give a rather detailed statement of the syllabic structure of the language, and in this way present the material on the basis of which the phoneme categories may be established” (Fischer-Jørgensen 1952, reprinted in Hamp et al. 1966: 300).
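Kahn’s point – that a bare syllable boundary, with no internal constituents, suffices to state the aspiration rule – can be illustrated with a small sketch. The boundary symbol “.”, the segment class, and the toy transcriptions below are assumptions made for illustration only, not Kahn’s own formalism:

```python
# A sketch of Kahn-style aspiration stated over syllable boundaries
# alone, with no internal constituents: a voiceless stop is aspirated
# iff it immediately follows a syllable boundary (marked "." here;
# word-initial position counts as a boundary too).

VOICELESS_STOPS = {"p", "t", "k"}

def aspirate(transcription):
    """Mark voiceless stops as aspirated when syllable-initial."""
    out = []
    prev = "."  # word start behaves like a syllable boundary
    for seg in transcription:
        if seg in VOICELESS_STOPS and prev == ".":
            out.append(seg + "ʰ")
        else:
            out.append(seg)
        prev = seg
    return "".join(out)

print(aspirate("sə.port"))       # -> sə.pʰort    ([p] is syllable-initial)
print(aspirate("ə.spæ.rə.gəs"))  # -> unchanged   ([p] follows [s], no aspiration)
```

Nothing in the rule mentions an onset constituent: the boundary alone does the work, which is exactly Kahn’s argument.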

3.2 Hierarchical ordering of constituents within the syllable

Two different hierarchically structured binary-branching syllable types have been suggested, as in (6) and (7) below; by far the more conventional type is (6), in which the syllable consists of the constituents onset and rhyme, and the rhyme is formed by nucleus and coda. Arguments in favor of (7) have been proposed for languages such as Korean and Mandarin Chinese (McCarthy 1979; Yoon and Derwing 2001; Wang and Cheng 2008); in these examples the syllable is seen to consist of body + coda; the body forms a unit composed of onset and nucleus together.4

(6) Binary branching with rhyme (e.g. from Pike and Pike 1947 onwards)

           Syllable
           /      \
       Onset     Rhyme
                 /    \
            Nucleus   Coda

(7) Binary branching with body (e.g. Yoon and Derwing 2001)

           Syllable
           /      \
        Body      Coda
        /   \
    Onset   Nucleus

4 A reviewer points out that this appears to contradict the traditional description of Chinese syllables in terms of “initials” and “finals” (cf. also Blevins 1995: 212; chapter 115: chinese syllable structure).
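The two constituencies in (6) and (7) amount to different groupings of the same segments, which is easy to make explicit. The sketch below encodes both parses as nested tuples over a CVC syllable; the tuple encoding and the labels are illustrative assumptions, not a claim about either analysis:

```python
# A sketch contrasting the onset-rhyme parse in (6) with the body-coda
# parse in (7) as nested groupings over the same CVC segments.

def onset_rhyme(onset, nucleus, coda):
    # (6): Syllable -> Onset + Rhyme, Rhyme -> Nucleus + Coda
    return ("Syllable",
            ("Onset", onset),
            ("Rhyme", ("Nucleus", nucleus), ("Coda", coda)))

def body_coda(onset, nucleus, coda):
    # (7): Syllable -> Body + Coda, Body -> Onset + Nucleus
    return ("Syllable",
            ("Body", ("Onset", onset), ("Nucleus", nucleus)),
            ("Coda", coda))

print(onset_rhyme("k", "æ", "t"))
# ('Syllable', ('Onset', 'k'), ('Rhyme', ('Nucleus', 'æ'), ('Coda', 't')))
print(body_coda("k", "æ", "t"))
# ('Syllable', ('Body', ('Onset', 'k'), ('Nucleus', 'æ')), ('Coda', 't'))
```

The practical difference shows up in what counts as a cohesive unit: (6) groups [æt] together as a rhyme, while (7) groups [kæ] together as a body – exactly the units that the experimental literature discussed in §4 tries to tease apart.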

Levin (1985) proposed a variation of the branching structure in (6): a metrical theory of syllabic structure in which the syllable is a projection of the category “nucleus,” or N, illustrated in (8). Here the coda is represented as the complement, or right sister, of N, dominated by the first projection N′. The onset is the specifier of the syllable, dominated by the second projection N″.

(8) Metrical theory of syllable structure (Levin 1985)

              N″
            /    \
    Specifier     N′
                /    \
               N      Complement
               |
            Nucleus

Finally, it should be noted that government phonology, as described by Kaye et al. (1990) and employed in Kaye (1990), Harris (1994), Botma et al. (2008), and various chapters in van der Hulst and Ritter (1999), makes use of syllable structure without the syllable, so to speak. The “syllabic constituents” onset (O) and rhyme (R) are not united into a single constituent (the syllable) in this analysis. The nucleus (N) is a constituent of the rhyme, but neither the coda nor the syllable is recognized as a constituent in government phonology. (Some versions of government phonology require CV syllables throughout, and posit an empty V where needed; see e.g. Scheer 2004.)

(9) Syllable structure without the syllable (Kaye et al. 1985)

        O        R
        |        |
        |        N
        |       / \
        x      x   x

3.3 Development of models of internal structure, from Fudge through Selkirk

In his influential article on the syllable, Fudge (1969) argues that the syllable has two primary functions as a linguistic universal: first, the syllable plays a role in


the location of suprasegmental phenomena such as stress and tone; and second, it serves as the most appropriate unit for the formulation of phonotactic constraints. For Fudge, the internal structure of the syllable plays a key role. Fudge presents an analysis of the syllable in RP English, based in part on prior work on Chinese syllable structure (Firth and Rogers 1937; Hockett 1947). According to Fudge, the internal structure of the syllable, along with a detailed set of “collocational restrictions,” “clearly accounts in an appropriate way for the majority of the systematic restrictions on sound-sequences” (1969: 266ff.), or surface phonotactic constraints. The syllable consists of a hierarchical branching structure; below the labeled nodes onset, peak, and coda are numbered positions which play a role in the formulation of co-occurrence restrictions, as in (10). The phoneme inventory is specified according to what is possible, or permitted, for each syllable position. Thus, onset position 1 may include sC clusters as well as any of the non-syllabic phonemes of English; position 2 only allows the sonorants [w l r m n]. In addition to the sub-syllabic constituents onset and rhyme, Fudge argues for a “termination” node, which forms a part of word-final syllables only, and which permits only a small subset of segments, primarily morphemes such as past tense or nominative plural, and [-st] or [θ].5 Later phonologists have also employed some version of a termination, or appendix, at word edges (see various articles in Féry and van de Vijver 2003), as discussed further below.

(10) The English syllable (Fudge 1969: 268)

                  Syllable
              /       |        \
         Onset      Rhyme     (Termination)
         /   \      /    \          |
        1     2   Peak   Coda       6
                   |     /  \
                   3    4    5

Like Kuryłowicz (1948), Fudge argues against a flat syllable “on the basis that there is no means of stating relations between peak and coda, which we wish to do, while there is no such constraint between onset and peak” (1969: 273). So, for example, he details a number of constraints holding between positions 1 and 2 (i.e. within the onset); 4 and 5 (within the coda); and 3 and 4 (holding across peak and coda, within the rhyme). Paraphrased examples of Fudge’s collocational restrictions are given in (11) and (12) (from Fudge 1969).

(11) Examples of phonotactic rules governing onsets
a. Within the onset, if the second position consists of [m] or [n], the first position can consist only of [s].
b. If the second position consists of [l], the first position may not consist of an alveolar (no [tl-, dl-, stl-, θl-]), except [s-].

5 Note that Fudge does not require his collocational restrictions to apply without exception; for a certain rule he also adds a footnote: “exceptions to this rule are not lacking (. . .). This does not detract from the value of stating the rule – even an ‘80% rule’ is well worth stating, provided that the exceptions to it are indicated” (Fudge 1969: 271).

(12) Example of phonotactic rules governing final clusters (paraphrased)
Nasals form final clusters only with plosives and voiceless fricatives.
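Stated this way, Fudge’s numbered positions invite a declarative encoding: pair each position with the conditions it imposes on its neighbors and check candidate clusters against them. The sketch below encodes just the two onset rules in (11); the alveolar class and the function name are illustrative assumptions, not Fudge’s own inventory:

```python
# A sketch encoding only the two onset restrictions paraphrased in
# (11), stated over Fudge's numbered positions 1 and 2.

ALVEOLARS = {"t", "d", "s", "z", "n", "l", "θ"}

def onset_ok(pos1, pos2):
    """Check a two-position onset (pos1 may be None for a bare onset)."""
    if pos2 in {"m", "n"} and pos1 is not None and pos1 != "s":
        return False  # (11a): before a nasal, position 1 can only be [s]
    if pos2 == "l" and pos1 in ALVEOLARS - {"s"}:
        return False  # (11b): no alveolar except [s] before [l]
    return True

print(onset_ok("s", "n"))  # True   (snow)
print(onset_ok("k", "n"))  # False under (11a) (*kn- in present-day English)
print(onset_ok("t", "l"))  # False under (11b) (*tl-)
print(onset_ok("s", "l"))  # True   (slow)
```

The hierarchical point is that both conditions relate positions within the onset; Fudge’s argument is that such tightly bound restrictions do not hold across the onset–peak divide.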

Fudge’s detailed elaboration of the phonotactic constraints of RP, based on syllable constituents, provided a model for later phonologists such as Selkirk (1982) and Goldsmith (1990). Nonetheless, hierarchical structure within the syllable has gone in and out of favor over the past fifty years. A number of articles throughout the 1970s argue for the importance of the syllable as a unit, but are not concerned with internal syllable structure (e.g. Hoard 1971; Hooper 1972; Vennemann 1972; Kahn 1980). Feinstein (1979), presenting a syllable-based analysis of prenasalized consonants in Sinhalese, employs the terms onset and coda, and yet remains explicitly neutral on the subject of syllable-internal structure. Instead, his analysis relies on identifying syllable boundaries only, without any more elaborate constituent structure. He employs “a phonological syllabification mechanism in which syllable structure is defined in terms of a discrete boundary within the linear string of segments,” although he admits that “other structural definitions may be more appropriate” (1979: 246). A range of essays from this period do not generally present specific evidence to support syllable-internal structure, though they make frequent use of the terms onset, rhyme, and coda, at least informally. Any argument which identifies a phonological syllable requires some means of identifying syllable boundaries, however, and in so doing we also confront evidence for permitted onset or coda clusters, almost by necessity. Arguments that attempt to lay out principles of syllable division in a particular language are at least implicitly arguments in favor of a particular syllable shape – VC-CVC vs. V-CCVC, for example. These have been taken to be arguments demonstrating internal syllable constituencies, such as arguments for a particular structure of the rhyme. If a specific intervocalic sequence of consonants is not a permissible onset cluster, one consonant may be forced into the coda of the previous syllable. Syllable structure thus regularly plays a role in analyses of quantity-sensitive metrical structure (chapter 57: quantity-sensitivity). For example, Kenstowicz (1994) points to differences in the syllabification of certain internal consonant clusters to explain certain facts of English stress patterns. Secondary stress falls on the first syllable of the words in (13a), while the words in (13b) show no stress on the initial syllable:

(13) English stress pattern differences (Kenstowicz 1994: 251)

a. Montana    ˌmon.ˈtana        b. America    a.ˈmerica
   arcade     ˌar.ˈcade            Nebraska   ne.ˈbraska
   Atlantic   ˌat.ˈlantic          atrocious  a.ˈtrocious
   arthritic  ˌar.ˈthritic         astronomy  a.ˈstronomy

The difference in patterns of secondary stress between the words in (a) and (b) above depends crucially on the fact that the word-initial syllables in (a) all terminate in a coda consonant, and therefore receive some stress; the word-initial syllables in (b) are open and thus unstressed. The consonant cluster “interludes” in (a) cannot form permissible word-initial clusters, and therefore it is claimed a syllable boundary must divide them. Thus what can count as a “possible onset” is ultimately defined in terms of what can serve as a possible word-initial cluster.
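The procedure implicit here – split an intervocalic cluster so that the second syllable receives the longest sequence that is also an attested word-initial cluster – can be sketched directly. The licit-onset set below is a small illustrative assumption, not an inventory of English:

```python
# A sketch of the "possible word-initial cluster" criterion: split an
# intervocalic cluster so that the second syllable receives the longest
# onset that is also attested word-initially.

LICIT_ONSETS = {"", "t", "n", "k", "tr", "st", "str", "br", "θr"}

def split_cluster(cluster):
    """Return (coda, onset) for an intervocalic consonant cluster."""
    for i in range(len(cluster) + 1):
        if cluster[i:] in LICIT_ONSETS:
            return cluster[:i], cluster[i:]
    return cluster, ""  # fall back: close the first syllable entirely

print(split_cluster("nt"))   # ('n', 't')   Montana   -> mon.tana   (closed first syllable)
print(split_cluster("rk"))   # ('r', 'k')   arcade    -> ar.cade    (closed first syllable)
print(split_cluster("tr"))   # ('', 'tr')   atrocious -> a.trocious (open first syllable)
print(split_cluster("str"))  # ('', 'str')  astronomy -> a.stronomy (open first syllable)
```

On this division, Montana’s first syllable is closed and therefore heavy and stressable, while the first syllables of atrocious and astronomy are open and unstressed, as in (13).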


This is essentially what Vennemann (1972) terms the “Law of Initials”: that “medial syllable-initial clusters should be possible word-initial clusters” (Vennemann 1972: 11); or what Bell (1976) terms “the Kuryłowicz condition,” that “initial and final clusters of medial syllables conform to the same constraints as those in initial and final syllables” (Bell 1976: 255).6 On the other hand, there are languages which place more restrictions on word-internal consonant clusters, and allow extra consonants at word edges, such as Polish (Kenstowicz 1994: 262ff.). A number of chapters in Féry and van de Vijver (2003) tease out just these inconsistencies between word-internal clusters and clusters found in either word-initial or word-final position (Cho and King 2003; Green 2003; Kiparsky 2003; Wiltshire 2003). See also Dixon (1970) and many others, as well as chapter 36: final consonants.

6 Recall that in this article Bell argues against “the distributional syllable.” Nevertheless, he does not entirely conclude that phonology can do without the syllable: “Let us, however, guard against too narrow a view, against confusing a tool with the problem. ‘Defining the syllable’ and ‘proving the existence of the syllable’ are pseudo-problems. Segment organization is the problem. If assumption of a syllabic unit leads to explanation of regularities of segment organization, so much the better. If not, we will be awaiting a more general theory of organization, and the syllable may enter the museum’s Hall of Scientific Constructs, taking its place beside ether, the noble savage, and the like” (Bell 1976: 261).

3.4 Moraic phonology and syllable-internal structure

While the syllable itself has been in wide use throughout the past century or more, clear arguments providing evidence of the internal structure of the syllable are less common than one might expect. Many arguments which state a convincing case for the syllable as a phonological unit in fact fail to motivate syllable-internal structure. The constituent structure of onset, rhyme, nucleus, and coda intersects in complex ways with a moraic theory of syllable organization, as sketched in (14) below. While a moraic analysis often appears to supersede one employing syllable structure, in fact the notion of moraic weight is interwoven with an understanding of syllable structure, in particular the structure of the rhyme.

(14) The mora in the prosodic hierarchy

      Prosodic Word
            |
          Foot
            |
         Syllable
            |
          Mora

The mora provides a useful means of representing syllable weight in quantity-sensitive languages, where this is required. In languages such as English or Latin, a syllable with a short vowel is monomoraic, while syllables with a long vowel (VV), or vowel + coda consonant (VC), are bimoraic. We note, however, that only consonants in the rhyme may be moraic; onset consonants never contribute to syllable weight (chapter 47: initial geminates; chapter 55: onsets). Thus we return to some notion of constituent structure within the syllable, if only to identify the domain in which moras are projected. Furthermore, whether the mora truly serves as a “constituent” is unclear; while the syllabic nucleus is typically affiliated with a mora, the affiliations of onset and non-moraic coda consonants are less clear. Non-moraic elements are sometimes associated with the syllable node directly, or sometimes argued to share the mora with the nuclear vowel. Essays in Ziolkowski et al. (1990) demonstrate the range of arguments regarding moraic structure within the syllable. Hyman (1985) originally suggested that a syllable-initial consonant links to the mora of the following vowel, creating what looks like a “body–coda” structure. More commonly, the syllable-initial consonant is assumed to associate directly to the syllable node, as in Hayes (1989) and many others. For a more recent position favoring the mora, Yip argues explicitly that the evidence in favor of the constituents onset and rhyme “is scanty and inconsistent” (Yip 2003: 779), and relies on a moraic model of the syllable to account for the behavior of pre-nuclear glides in English and Mandarin Chinese. I leave it to other contributors to this Companion to tease out the intricacies of moraic phonology in more detail (see chapter 39: stress: phonotactic and phonetic evidence; chapter 40: the foot; chapter 41: the representation of word stress).
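On these assumptions the weight computation never needs to see the onset – a fact a sketch can make explicit. The doubled-vowel notation for length and the blanket weight-by-position rule below are simplifying assumptions (languages differ on whether coda consonants project a mora):

```python
# A sketch of moraic weight computed from the rhyme alone, on standard
# assumptions for languages like Latin or English: short V = 1 mora,
# long V = 2, and each coda consonant adds one ("weight by position").
# Long vowels are written as doubled symbols here.

def moras(nucleus, coda=""):
    """Weight of a syllable rhyme: one mora per vowel slot, plus one
    mora per coda consonant. The onset plays no role at all."""
    return len(nucleus) + len(coda)

print(moras("a"))        # 1  light:      (C)V
print(moras("aa"))       # 2  heavy:      (C)VV
print(moras("a", "n"))   # 2  heavy:      (C)VC
print(moras("aa", "n"))  # 3  superheavy: (C)VVC, where permitted
```

That the function takes only nucleus and coda as arguments is the point: mora counting presupposes at least a rhyme-like domain within the syllable.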

4 Experimental studies

Experimental studies, both of acoustic properties of speech and of human behavioral responses to syllabification tasks, have also been conducted to explore this question of the nature of the syllable. The great majority of experimental work over the past fifty years involves studies of simple syllabification, with a view to accounting for syllable boundaries. Most of these studies place a particular focus on the syllabification of an intervocalic consonant or consonants; see for example studies on Dutch, Finnish, German, French, Japanese, and English (e.g. Fallows 1981; Gillis and DeSchutter 1996; Schiller et al. 1997; Berg and Niemi 2000; Content et al. 2001; Goslin and Frauenfelder 2001; Ishikawa 2002; Redford and Randall 2005). Still, a number of experimental studies have been adduced to test the validity of the internal constituents of the syllable; the majority of these studies focus on the primary constituents of onset and rhyme. Evidence regarding hierarchical structure within the syllable is mixed, with arguments drawn from language games (both “traditional” and invented/experimental), slips of the tongue, perceptual studies, investigations with children, and other experimental paradigms (chapter 96: experimental approaches in theoretical phonology) to investigate whether onset, rhyme, nucleus, and coda are or are not syllable constituents. A series of experiments by Treiman and co-authors argues in favor of an onset–rhyme structure, based on subjects’ performance on various word games (Treiman 1986; Fowler et al. 1993; Treiman et al. 1994; Treiman et al. 1995). Most recently, Kapatsinski (2009) claims to show from an experimental study that English speakers are able to learn rhyme–affix associations more easily than body–affix associations, basing this argument on the claim that associations should be easier to learn within rather than across constituents, given a hierarchical structure to the syllable. However, Pierrehumbert and Nair (1995) replicate Treiman’s word game paradigm


(Treiman 1983) and conclude that flat models of the syllable are sufficient to account for the results; see Treiman and Kessler (1995) for a response. Although few experimental studies look for evidence for constituent structure within the rhyme, one study by Hindson and Byrne (1997) found that children had less difficulty learning a word game which kept the final consonant cluster intact compared to one which broke it up. They conclude that these results support a model “which attributes internal hierarchical structure to the rime, with the coda as a constituent” (Hindson and Byrne 1997). Also focusing on constituent structure within the rhyme, Hayes et al. (2009) examine the ability of 9-month-old infants to detect change in vowel, in coda, or in both, based on the head-turn procedure. One experiment suggests that infants detect a difference most easily when both vowel and coda are altered – perhaps an unsurprising conclusion. However, the authors argue that pre-verbal 9-month-olds do parse syllables into units smaller than just onset and rhyme, presumably those of nucleus and coda. An additional area of research focuses on the relation between syllables, syllable constituency, and literacy (e.g. Derwing 1992). It is argued variously that syllable constituents may assist in visual word recognition (Álvarez et al. 2004) and the development of literacy; and that literacy itself influences phonological judgments about syllable structure (Treiman et al. 2002). Orthography was found to influence onset–rhyme segmentation in Portuguese, for example, in a CVC blending task: “the C/VC segmentation of pseudo-words and homophones was much more frequent in a context of words spelled CVC than in a context of mute-e words” (Ventura et al. 2001). Orthography was also found to influence Pig Latin production by adult English speakers, when speakers trained on singleton and true cluster onsets extended their production to Cj- and sC(C)- clusters (Barlow 2001). Handwriting production tasks by French schoolchildren also indicated the influence of orthography on syllabification, again in particular where “silent e” is orthographically employed (Kandel et al. 2009). Criticism has occasionally been voiced that some of the experimental work cited above is methodologically flawed. Davis (1989) points out that many of these studies are based on speech errors or word games that involve only monosyllabic words; when polysyllabic words are considered, we find evidence not for an onset–rhyme distinction, but for a distinction between onset and “the remainder of the word,” or onset vs. “everything else.” Davis aims this criticism in particular at studies by Treiman (1983, 1986), who investigates language game productivity, and Fudge (1987), who argues for internal syllable structure primarily drawing on evidence from monosyllabic speech-error blends. Other studies have not found relevant distinctions between onset and rhyme under experimental contexts. Geudens and Sandra (Geudens and Sandra 2003; Geudens et al. 2005) conducted four experiments with Dutch children, both pre-readers and young readers, concluding that their subjects “did not treat onsets and rimes as cohesive units of the syllable.” Recent articulatory studies based on gestural analyses of speech in fact call into question the evidence for syllable constituents based on speech error and slip-of-the-tongue data.
These criticisms target methods of data collection for speech error studies – transcription-based methods that have changed little since studies in the 1970s specifically termed them “field studies” (e.g. Fromkin 1973 and others). Laboratory-based studies, such as the gestural studies of speech errors described in Pouplier (2007, 2008), seem to indicate that many errors are not


merely substitutions of segmental units – errors in “selection” – but instead may be examples of gestural intrusion or mis-timing (Pouplier and Goldstein 2005). Various studies conducted on languages other than English appear to show that if there is an onset–rhyme distinction it may be a language-specific one; experiments with native speakers of Korean indicate that Korean syllables “contain a cohesive CV or body unit, in contrast to the VC or rhyme unit of English” (Yoon and Derwing 2001); one of the five experiments described here studied preliterate children, indicating that literacy could not be a confounding factor. A study involving Chinese–English bilingual children found “a preference for matching body over rime in Chinese, and for matching rime over body in English,” concluding that there must be cross-language differences in processing spoken syllables (Wang and Cheng 2008). Acoustic studies have examined timing relationships within the syllable to identify syllable constituents. Conducting an acoustic study of casual-speech vowel reduction in English disyllables, using triplets of words such as support/sport/s’pport (reduced support), Fokes and Bond (1993) conclude that there were no invariant acoustic cues determining syllabicity. While the authors concede that sport and s’pport may in fact be phonetically distinct, the study found no invariant cues to distinguish them. Certainly, attempts to isolate acoustic or articulatory invariants of the syllable date from as early as Stetson’s (1928) “chest pulse” theory; however, there is no current consensus on either acoustic or articulatory definitions of the syllable, let alone of structure internal to the syllable. While Selkirk (1982: 340) set the stage for a good deal of ensuing research with her note that “other phonological, or shall we say phonetic, phenomena such as duration and closeness of transition between segments might also be taken as revealing of the immediate constituent structure of the syllable,” we still find very little clear evidence of any invariant property pointing to syllable-internal hierarchical structure.

5 Conclusion

Despite the lack of phonetic evidence for invariant acoustic or articulatory measures of syllable structure, research in this area too continues apace. As Ladefoged noted:

There is no single muscular gesture marking each syllable . . . (but) there is evidence . . . that speakers organize the sequences of complex muscular events that make up utterances in terms of a hierarchy of units, one of which is the size of a syllable; and it is certainly true that speakers usually know how many syllables there are in an utterance. We will therefore assume that a neurophysiological definition is possible, even if one cannot at the moment state it in any way. (Ladefoged 1971: 81)

Even those who argue against the use of syllable structure to account for phonotactics acknowledge the usefulness of the terms referring to syllable-internal structure: onset, peak, coda, and even rhyme. “Syllable structure, whether directly perceived or inferred, is an undeniable aspect of phonological representations,” claims Steriade (1999), although she goes on to argue against employing syllable constituents in a phonological analysis, concluding that syllable position “does not condition segment realization.” Steriade argues instead that knowledge of


syllable structure, and syllable edges in particular, derives from or is founded on the speaker’s perception of word-based phonotactic regularities. Her claim is essentially that we have put the cart before the horse in arguing that phonotactic constraints are built upon syllable structure; instead, these phonotactic regularities may be precisely what allow us to identify syllable position. In any case, the labels we use to identify internal constituents of the syllable – onset, coda, and rhyme – remain convenient terminology, and seem likely to remain in common usage. Nevertheless, it also seems clear that a conservative view of linguistic structure – a view shaped by Occam’s razor, perhaps – would concede that these terms, while useful, may not be supported by empirical evidence. Acoustic and experimental studies offer only mixed results, while language-specific phonological studies continue to differ widely in their use of (and claims for) some particular organization of syllable-internal structure. How we use syllable structure to represent the patterns and organization of human language will differ depending on the questions we ask and the problems we confront in the specific languages we investigate. Syllable structure may turn out to be an organizational tool, rather than an object available for independent manipulation.

REFERENCES

Álvarez, Carlos, Manuel Carreiras & Manuel Perea. 2004. Are syllables phonological units in visual word recognition? Language and Cognitive Processes 19. 427–452.
Anderson, Stephen R. 1985. Phonology in the twentieth century. Chicago: University of Chicago Press.
Barlow, Jessica. 2001. Individual differences in the production of initial consonant sequences in Pig Latin. Lingua 111. 667–696.
Bell, Alan. 1976. The distributional syllable. In Alphonse Juilland (ed.) Linguistic studies offered to Joseph Greenberg, vol. 2, 249–262. Saratoga, CA: Anma Libri.
Berg, Thomas & Jussi Niemi. 2000. Syllabification in Finnish and German: Onset filling vs. onset maximization. Journal of Phonetics 28. 187–216.
Blevins, Juliette. 1995. The syllable in phonological theory. In John A. Goldsmith (ed.) The handbook of phonological theory, 206–244. Cambridge, MA & Oxford: Blackwell.
Botma, Bert, Colin J. Ewen & Erik Jan van der Torre. 2008. The syllabic affiliation of postvocalic liquids: An onset-specifier approach. Lingua 118. 1250–1270.
Cho, Young-mee Yu & Tracy Holloway King. 2003. Semisyllables and universal syllabification. In Féry & van de Vijver (2003), 183–212.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Content, Alain, Ruth K. Kearns & Uli H. Frauenfelder. 2001. Boundaries versus onsets in syllabic segmentation. Journal of Memory and Language 45. 177–199.
Davis, Stuart. 1989. On a non-argument for the rhyme. Journal of Linguistics 25. 211–217.
Davis, Stuart. 1992. The onset as a constituent of the syllable: Evidence from Italian. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26. 71–79.
Derwing, Bruce L. 1992. A pause-break task for eliciting syllable boundary judgments from literate and illiterate speakers: Preliminary results for 5 diverse languages. Language and Speech 35. 219–235.
Dixon, R. M. W. 1970. Olgolo syllable structure and what they are doing about it. Linguistic Inquiry 1. 273–276.
Fallows, Deborah. 1981. Experimental evidence for English syllabification and syllable structure. Journal of Linguistics 17. 309–317.


Feinstein, Mark. 1979. Prenasalization and syllable structure. Linguistic Inquiry 10. 245–278.
Féry, Caroline & Ruben van de Vijver (eds.) 2003. The syllable in Optimality Theory. Cambridge: Cambridge University Press.
Firth, J. R. & B. B. Rogers. 1937. The structure of the Chinese monosyllable in a Hunanese dialect. Bulletin of the School of Oriental Studies 8. 1055–1074.
Fischer-Jørgensen, Eli. 1952. On the definition of phoneme categories on a distributional basis. Acta Linguistica 7. 8–39.
Fokes, Joann & Z. S. Bond. 1993. The elusive/illusive syllable. Phonetica 50. 102–123.
Fowler, Carol A., Rebecca Treiman & Jennifer Gross. 1993. The structure of English syllables and polysyllables. Journal of Memory and Language 32. 115–140.
Fromkin, Victoria A. (ed.) 1973. Speech errors as linguistic evidence. The Hague: Mouton.
Fudge, Erik C. 1969. Syllables. Journal of Linguistics 5. 253–286.
Fudge, Erik C. 1987. Branching structure within the syllable. Journal of Linguistics 23. 359–377.
Geudens, Astrid & Dominiek Sandra. 2003. Beyond implicit phonological knowledge: No support for an onset–rime structure in children’s explicit phonological awareness. Journal of Memory and Language 49. 157–182.
Geudens, Astrid, Dominiek Sandra & Heike Martensen. 2005. Rhyming words and onset–rime constituents: An inquiry into structural breaking points and emergent boundaries in the syllable. Journal of Experimental Child Psychology 92. 366–387.
Gillis, Steven & Georges DeSchutter. 1996. Intuitive syllabification: Universals and language specific constraints. Journal of Child Language 23. 487–514.
Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Goslin, Jeremy & Uli H. Frauenfelder. 2001. A comparison of theoretical and human syllabification. Language and Speech 44. 409–436.
Green, Antony Dubach. 2003. Extrasyllabic consonants and onset well-formedness. In Féry & van de Vijver (2003), 238–253.
Hamp, Eric P., Fred W. Householder & Robert Austerlitz. 1966. Readings in Linguistics II. Chicago: University of Chicago Press.
Harris, John. 1994. English sound structure. Oxford: Blackwell.
Haugen, Einar. 1956. Syllabification in Kutenai. International Journal of American Linguistics 22. 196–201.
Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306.
Hayes, Bruce. 2009. Introductory phonology. Malden, MA & Oxford: Wiley-Blackwell.
Hayes, Rachel A., Alan M. Slater & Christopher A. Longmore. 2009. Rhyming abilities in 9-month-olds: The role of the vowel and coda explored. Cognitive Development 24. 106–112.
Hindson, Barbara Anne & Brian Byrne. 1997. The status of final consonant clusters in English syllables: Evidence from children. Journal of Experimental Child Psychology 64. 119–136.
Hoard, James E. 1971. Aspiration, tenseness, and syllabication in English. Language 47. 133–140.
Hockett, Charles F. 1947. Peiping phonology. Journal of the American Oriental Society 67. 253–267.
Hockett, Charles F. 1955. A manual of phonology. Baltimore: Waverly Press.
Hooper, Joan B. 1972. The syllable in phonological theory. Language 48. 525–540.
Hulst, Harry van der & Nancy Ritter (eds.) 1999. The syllable: Views and facts. Berlin & New York: Mouton de Gruyter.
Hyman, Larry M. 1985. A theory of phonological weight. Dordrecht: Foris.
Ishikawa, Keiichi. 2002. Syllabification of intervocalic consonants by English and Japanese speakers. Language and Speech 45. 355–385.
Kahn, Daniel. 1980. Syllable-based generalizations in English phonology. New York: Garland.


Kandel, Sonia, Lucie Hérault, Géraldine Grosjacques, Eric Lambert & Michel Fayol. 2009. Orthographic vs. phonologic syllables in handwriting production. Cognition 110. 440–444.
Kapatsinski, Vsevolod. 2009. Testing theories of linguistic constituency with configural learning: The case of the English syllable. Language 85. 248–277.
Kaye, Jonathan. 1990. “Coda” licensing. Phonology 7. 301–330.
Kaye, Jonathan, Jean Lowenstamm & Jean-Roger Vergnaud. 1985. The internal structure of phonological elements: A theory of charm and government. Phonology Yearbook 2. 305–328.
Kaye, Jonathan, Jean Lowenstamm & Jean-Roger Vergnaud. 1990. Constituent structure and government in phonology. Phonology 7. 193–231.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Kiparsky, Paul. 2003. Syllables and moras in Arabic. In Féry & van de Vijver (2003), 147–182.
Kuryłowicz, Jerzy. 1948. Contribution à la théorie de la syllabe. Bulletin de la Société Polonaise de Linguistique 8. 80–113.
Kuryłowicz, Jerzy. 1949. Linguistique et théorie du signe. Journal de Psychologie 42. 170–180.
Ladefoged, Peter. 1971. Preliminaries to linguistic phonetics. Chicago: Chicago University Press.
Levin, Juliette. 1985. A metrical theory of syllabicity. Ph.D. dissertation, MIT.
McCarthy, John J. 1979. On stress and syllabification. Linguistic Inquiry 10. 443–465.
Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Pierrehumbert, Janet B. & Rami Nair. 1995. Word games and syllable structure. Language and Speech 38. 77–114.
Pike, Kenneth L. 1975a. Suprasegmentals in reference to phonemes of item, of process, and of relation. Bibliotheca Phonetica 11. 45–56.
Pike, Kenneth L. 1975b. Tests for prosodic features of pitch, quantity, stress. Bibliotheca Phonetica 11. 4–5.
Pike, Kenneth L. & Eunice V. Pike. 1947. Immediate constituents of Mazateco syllables. International Journal of American Linguistics 13. 78–91.
Pouplier, Marianne. 2007. Tongue kinematics during utterances elicited with the SLIP technique. Language and Speech 50. 311–341.
Pouplier, Marianne. 2008. The role of a coda consonant as error trigger in repetition tasks. Journal of Phonetics 36. 114–140.
Pouplier, Marianne & Louis Goldstein. 2005. Asymmetries in the perception of speech production errors. Journal of Phonetics 33. 47–75.
Rand, Earl. 1968. The structural phonology of Alabaman, a Muskogean language. International Journal of American Linguistics 34. 94–103.
Redford, Melissa A. & Patrick Randall. 2005. The role of juncture cues and phonological knowledge in English syllabification judgments. Journal of Phonetics 33. 27–46.
Saussure, Ferdinand de. 1922. A course in general linguistics. New York: Philosophical Library.
Scheer, Tobias. 2004. A lateral theory of phonology, vol. 1: What is CVCV, and why should it be? Berlin & New York: Mouton de Gruyter.
Schiller, Niels O., Antje S. Meyer & Willem J. Levelt. 1997. The syllabic structure of spoken words: Evidence from the syllabification of intervocalic consonants. Language and Speech 40. 103–140.
Selkirk, Elisabeth. 1982. The syllable. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 337–383. Dordrecht: Foris.
Steriade, Donca. 1999. Alternatives to syllable-based accounts of consonantal phonotactics. In Osamu Fujimura, Brian D. Joseph & Bohumil Palek (eds.) Item order in language and speech, 205–245. Prague: Karolinum Press.
Stetson, Raymond H. 1928. Motor phonetics. Haarlem: Société Hollandaise de Science.
Treiman, Rebecca. 1983. The structure of spoken syllables: Evidence from novel word games. Cognition 15. 49–74.


Treiman, Rebecca. 1986. The division between onsets and rimes in English syllables. Journal of Memory and Language 25. 476–491.
Treiman, Rebecca & Brett Kessler. 1995. In defense of an onset–rime syllable structure for English. Language and Speech 38. 127–142.
Treiman, Rebecca, Kathleen Straub & Patrick Lavery. 1994. Syllabication of bisyllabic nonwords: Evidence from short-term memory errors. Language and Speech 37. 45–59.
Treiman, Rebecca, Carol A. Fowler, Jennifer Gross, Denise Berch & Sarah Weatherston. 1995. Syllable structure or word structure? Evidence for onset and rime units with disyllabic and trisyllabic stimuli. Journal of Memory and Language 34. 132–155.
Treiman, Rebecca, Judith A. Bowey & Derrick Bourassa. 2002. Segmentation of spoken words into syllables by English-speaking children as compared to adults. Journal of Experimental Child Psychology 83. 213–238.
Vennemann, Theo. 1972. On the theory of syllabic phonology. Linguistische Berichte 18. 1–18.
Ventura, Paulo, Régine Kolinsky, Carlos Brito-Mendes & José Morais. 2001. Mental representations of the syllable internal structure are influenced by orthography. Language and Cognitive Processes 16. 393–418.
Wang, Min & Chenxi Cheng. 2008. Subsyllabic unit preference in young Chinese children. Applied Psycholinguistics 29. 291–314.
Wiltshire, Caroline. 2003. Beyond codas: Word and phrase-final alignment. In Féry & van de Vijver (2003), 254–268.
Yip, Moira. 2003. Casting doubt on the onset-rime distinction. Lingua 113. 779–816.
Yoon, Yeo Bom & Bruce L. Derwing. 2001. A language without a rhyme: Syllable structure experiments in Korean. Canadian Journal of Linguistics 46. 187–237.
Ziolkowski, Michael, Manuela Noske & Karen Deaton (eds.) 1990. Papers from the 26th Annual Regional Meeting, Chicago Linguistic Society: Parasession on the syllable in phonetics and phonology.

34 Precedence Relations in Phonology

Charles Cairns & Eric Raimy

1 Introduction

“Precedence” in phonology refers to the fact that elements occur in ordered sequences. An understanding of precedence relations is key to explicating notions of locality, adjacency, and left–right asymmetries, which have played significant roles in the phonological literature, especially since the advent of autosegmental phonology (Goldsmith 1976; chapter 14: autosegments). McCarthy (1989: 71) writes “nonlinear phonology imposes strict requirements of locality on phonological rules,” and “locality . . . ensures that the elements referred to in phonological transformations and constraints are adjacent at some level of representation.” We must agree on a common set of questions in order to compare and contrast linguists’ notions of precedence relations, a requirement that presupposes a formal framework. Since the earliest days of phonology, the sequential nature of speech and the progression of letters across the printed page have been assumed to be sufficient to understand precedence. It is always salutary to explicate tacit assumptions, so this chapter explores the implications of a formally rigorous understanding of precedence. The formal rigor is supplied by graph theory, a branch of mathematics that we will use to unpack the question of what it means for phonemes to appear in an ordered sequence (Wilson 1996 is one of several good introductions). Phonology is concerned with the characteristics of precedence in human language, so graph theory itself can be no more than a useful tool. But because graph theory is an explicit mathematical model that provides specific and well-understood possible answers to questions of precedence, we explore its implications in this chapter. Once we have introduced the relevant aspects of graph theory, we go on to examine a sample of the claims and assumptions that have been made in the literature on phonological precedence, with varying degrees of explicitness. In particular, we examine the characterizations of various approaches to autosegmental phonology within graph theory. We will further explicate the nature of precedence in phonology by considering the basic operation of deletion. All phonologists must take as bedrock assumptions that phonemes appear in a sequence and that there exist phonological processes with the capacity to delete phonemes from a sequence. One interesting result of this exercise is that theories of phonology commonly assumed to be antagonistic converge on a common model of deletion.


Before proceeding, we first ask if precedence relations are primitives of phonological representations or if they are derived. Van der Hulst (2008), for example, proposes that precedence relations can be derived from underlying syllables. Consider a highly articulated theory of the syllable with labeled nodes, e.g. Fudge (1969, 1987); Cairns (1988); see also chapter 33: syllable-internal structure. The idea is that if at the lexical level featural information such as [back] or [coronal] were stored in syllabic nodes like onset and rhyme, then perhaps the number and order of phonemes on the surface could be predicted from the inherent order of syllabic constituents. There are at least four natural limits to deriving precedence relations from syllable structure. First, syllabification is not always exhaustive; many languages are known to have sequences of unsyllabified consonants (Bagemihl 1991; Czaykowska-Higgins and Willett 1997; Vaux and Wolfe 2009). Second, it would in any case be necessary to specify the order among syllables. Consider the English loan from Tamil catamaran; hypothetical *matacaran and *tamacaran would serve as equally plausible loans into English, yet they differ from the existing word only by syllable order. The possibility of using foot structure to order syllables only moves this question higher in the prosodic hierarchy and requires more prosodic information to be stored in the lexicon. The necessity of stipulating explicit sequencing information in the lexicon cannot be escaped. Third, reference to precedence relations at the segmental level is necessary to properly account for phonotactic constraints. Blevins (2003), building on Steriade (1999), shows numerous compelling examples where phonological sequencing generalizations and cross-linguistic universal patterns refer to properties of the phonological string and not to syllable structure. Of course, as Blevins points out, there are many cases “where phonotactic constraints and syllable structure appear to converge.” She suggests that “this is because syllabifications are derivative of phonotactics, not vice versa” (2003: 393). Finally, we are encouraged in our focus on segmental precedence by the fact that resyllabification is rampant throughout phonology and phonetics; the effervescence of the syllable makes it a poor candidate for the bearer of lexical precedence relations. §2 sketches the basic elements of graph theory and their application to phonology. §3 presents the complications that arise in connection with considering precedence relations in autosegmental phonology and provides a plausible explanation of why the Obligatory Contour Principle (OCP) is so variably valid. §4 demonstrates how investigating the process of deletion in graph theory illuminates how different theoretical models of phonology converge on the same understanding of deletion in phonology. §5 demonstrates how theories of phonology that view segments as entities that occur in real time can benefit from the consideration of precedence in graph-theoretic terms. §6 shows how graph theory can illuminate issues in phonology such as the “no crossing constraint” and local vs. long-distance adjacency. §7 illustrates some extensions of graph theory approaches to different phonological and morphological phenomena. §8 concludes this chapter.

2 Precedence relations and graph theory

Phonologists differ about whether to view a phonological sequence as a string of discrete, point-like objects or as a series of possibly overlapping segments which exist in real time; Trubetzkoy (1939) described phonemes as essentially timeless


(zeitlos), abstract entities like the dots and dashes of Morse code, where each unit is unaffected by the producing process (although possibly influenced by proximal symbols). The alternative view, that phonological patterning is defined by the dynamic articulation of actual speech, goes back to Sweet (1877), Sievers (1881), and Saussure (1916). This serves as the fundamental assumption of such schools of thought as Articulatory Phonology (Browman and Goldstein 1986, 1989, 1990a, 1990b; chapter 5: the atoms of phonological representations), which views the elements that appear in phonological sequences to be events in real time. The consensus that phonetic sequences involve continuous, overlapping elements does not extend to more abstract levels of phonology. Theories of morphological operations such as reduplication and infixation and of many phonological processes such as morphophonemics, deletion, etc., generally operate on representations made up of discrete, point-like objects, and these theories have achieved considerable descriptive success; we will proceed on this basis and turn to the event-based outlook later. Consider the representation of the Margi word /tágú/ ‘horse’ (from Kenstowicz 1994: 312) in a generic version of autosegmental phonology in (1). This is a picture of what Coleman and Local (1991: 309) call “paddlewheel graphs.” It is drawn so as to induce the reader to visualize three half planes, each emanating from a common line. The three-dimensional metaphor is useful because it helps us break down general questions of precedence into smaller and hence more manageable ones. This imagery can be misleading, however, because, as Coleman and Local (1991) demonstrate, autosegmental representations are not necessarily three-dimensional (i.e. non-planar, as defined in §6) in a mathematical sense.

(1) Paddlewheel graph of Margi /tágú/

      syllable plane:          σ       σ
                              / \     / \
      anchor tier:        #  x   x   x   x  %
                             |   |   |   |
      feature plane:         t   a   g   u

      tone plane:                  H
      (the single H is linked by association lines to the x-slots of the two vowels)

Lines are referred to as “tiers” in autosegmental phonology, and the line defined by the intersection of the three planes depicted in (1) is the “anchor tier.” The anchor tier contains the string of symbols # x x x x %, where # and % indicate the beginning and end, respectively, of the phoneme sequence (we ignore these now, but return to them below). Elements are also arrayed in tiers on the “feature plane,” the “syllable plane,” and the “tone plane” of (1). The feature plane is shown with the alphabetic symbols t, a, g, u in lieu of the familiar feature trees (we employ this notation throughout the remainder of the chapter); in §3 we will see that this is more complicated than


a simple plane. These elements are connected by “association lines” to elements on the anchor tier. Autosegmental phonology since Goldsmith (1976) has taken for granted that different distinctive features may be separated onto distinct tiers on the feature plane, and that all tiers are parallel to the anchor tier. Many phonologists posit a rich prosodic hierarchy on what we dub the syllable plane, but we eschew further discussion of this plane (see Cairns and Raimy 2009). A third plane represents tone; Kenstowicz (1994) gives the familiar arguments that the high tone in tágú is represented by a single element on the “tonal tier,” and this element is connected by association lines to two segments on the anchor tier. Phonologists differ on what elements inhabit each plane and tier, how they are organized with respect to each other on each plane, and how representations on different levels interact with one another. For example, the field has not agreed whether the elements on the anchor tier are root nodes, moras, or empty x-slots (see chapter 54: the skeleton). We will achieve our expository purposes by consistently representing anchor elements as x-slots, under the belief that pretty much the same accounts we provide below could be supplied under mora theory, mutatis mutandis. The remainder of this section will introduce the formal logic of precedence relations with a focus only on the anchor tier; we will turn to representational implications of positing multiple tiers on several planes in the next section. In fact, if we were to analyze only the anchor tier and ignore the others we would be discussing only a sequence such as x x x x. Because this would make for awkward exposition, we indulge in the convenience of labeling the anchor tier elements with the familiar phoneme labels, suppressing information from the tone plane; we turn to how the features and tones are connected to their appropriate x-slots when we discuss precedence in the context of autosegmental representations in §3. The articulation of the phoneme sequence tagu obeys the principles of asymmetry and irreflexivity. Asymmetry means that if t precedes a then a does not precede t, assuming that there are single instances of t and a. The constraint of irreflexivity applies because no speech sound may precede or follow itself (see Bird and Klein 1990 for an alternative view). These principles follow from formal properties of ordered sequences of elements, as we will show later in this section. As will be explained below, whether transitivity holds of this utterance depends on whether we understand precedence to apply locally or non-locally. If precedence is local, by which we mean that only adjacent segments can be “seen,” then the phoneme sequence is not transitive; if t precedes a and a precedes g then t cannot precede g. If adjacency is non-local, so that it can see beyond immediately adjacent segments, then precedence is transitive; if t precedes a and a precedes g then t must precede g. We see from this reasoning that the question of whether phoneme strings obey the principle of transitivity boils down to the question of locality. These principles are best understood in terms provided by graph theory, so we defer further exploration of asymmetry, irreflexivity, and transitivity until after a brief introduction to graph theory as it applies to phonology. Graphs are defined as sets of “vertices” and “edges,” as shown in (2) (see again Wilson 1996).
Graph-theoretic representations of phoneme sequences consider vertices to stand for phonological segments and edges to symbolize the precedence relations among them; accordingly, whenever we use the term vertex, the reader can understand it as equivalent to a phonological segment. Consider the two different graph-theoretic representations for the word tagu in (2a) and (2b).

(2) a. t—a—g—u

    b. vertices:           {t, a, g, u}
       edges (unordered):  {t a, a g, g u}

(2a) and (2b) contain the same information. (2a) depicts the graph in a visual manner, while (2b) defines the graph as a list of vertices and a list of edges. Each edge is defined by a pair of vertices, and these pairs are set off typographically from each other by commas in (2b). A list of edges suffices for many graphs, because the vertices can be determined from the list of edges. Note that the lists of vertices and edges are literally sets and not lists. Because the edges in (2b) are undirected, it is called an undirected graph, where the two vertices that define each edge are unordered with respect to each other; for example, the edge {t a} is equivalent to {a t}. Because this type of graph contains information only about adjacency of vertices and does not specify any order, it appears to be a poor candidate for a model to represent phoneme strings. For one thing, lexical representations must contain ordering information. Consider the existence of pairs of words like cat and tack in English. The adjacency pairs (undirected edges) for these two words are identical; both words have the edge set {t æ, æ k}. Beyond the obvious fact that ordering is distinctive is the observation made by de Lacy (2007) that if phonological graphs were undirected, we would predict that mirror-image rules or constraints like those that appeared in early versions of SPE-type phonology would be commonplace. An example would be rules of the form x → y / {z __ , __ z}, as suggested by Bach (1968), Langacker (1969), and Anderson (1974). This means that x is rewritten y if it either precedes or follows z. If we were to adopt undirected graphs as the representation for phonological forms we would predict, contrary to fact, that mirror operations should be the most common operations found in phonology. Because a theory containing only undirected edges cannot distinguish between cat and tack and makes false predictions about the directionality of phonological operations, we consider graphs where the vertices that specify each edge are ordered with respect to each other. Information about sequential order can be added to the graph in (2) by making it a “directed graph” (or “digraph”), as in (3). An edge in a digraph is an ordered pair of vertices; the vertex mentioned first in an edge specification precedes the one mentioned second.

a.  t → a → g → u
b.  vertices: {t, a, g, u}
    edges (ordered): {t a, a g, g u}
c.  [the same graph drawn with the vertices a, u, t, g arranged differently on the page]

(3a) is the graphic representation of the information in (3b). Because the edges in (3b) are ordered, they specify that t precedes a, a precedes g, and g precedes u. (3a) is not the only diagram consistent with (3b), however; for example, the diagram in (3c) is equally consistent with (3b), but not as convenient to read as (3a). They are formally equivalent; in fact, (3a), (3b), and (3c) all represent the same


graph. All essential characteristics of graphs are given by specifying the set of edges and the set of vertices, as in (3b). For expedience, when we portray graphs we will continue to supplement the lists of vertices and edges with convenient diagrams, and indicate that a graph is directed by using arrows as edges.

The graph in (3) is a kind of digraph whose properties match widely held yet tacit assumptions about precedence in phonological representations. For example, it is a "connected graph": one in which it is possible to construct a route, following edges, between any two vertices. Determining whether a graph is connected consists of traversing the graph by moving from one vertex to another, following edges. Such a traversal is called a "walk," and a connected graph is one where there is a possible walk from any vertex to any other vertex. The requirement that the phoneme strings constituting the phonological representations of words be represented by connected graphs is an explication, within graph theory, of the usually tacit assumption that words be pronounceable from beginning to end.

The graph in (3) also exemplifies a "chain graph"; properties of chain graphs, like those of connected graphs in general, illuminate our understanding of asymmetry and irreflexivity as constraints on phonological sequences, as we now demonstrate. The definition of chain graphs involves a concept known as the degree of a vertex, which is the number of edges that it appears in. In (3), there are two vertices with a degree of one; all others have a degree of exactly two. Because the vertex t precedes only a and is not preceded by any other vertex, it appears in only one edge, {t a}; conversely, because u is preceded by only one vertex, g, and does not precede any other vertex, it also appears in only one edge, {g u}. Therefore, the two vertices t and u are of degree one. The remaining vertices, a and g, each have degree two: a follows t and precedes g, while g follows a and precedes u, so each appears in two edges. If a connected graph has two vertices of degree one, and any other vertices have a degree of exactly two, it is a chain graph; if it is also a digraph, it is known as a "directed chain graph."

Directed chain graphs must obey the principle of asymmetry, because the requirement that edges be directed and the limits on the degrees of vertices together entail that if a graph like (3) has an edge such as {a g}, it cannot also have an edge {g a}; such a move would make the vertices a and g appear in three edges, violating a defining requirement of chain graphs. Directed chain graphs also obey the principle of irreflexivity, for essentially the same reason. A reflexive edge is one where the same vertex appears twice in the definition of the edge; if g, for example, were to follow (or precede) itself, there would be an edge with the specification {g g}. Adding this edge to the set in (3b) would make the degree of g three, because g would now appear in three edges.

It follows from the preceding that the principles of asymmetry and irreflexivity derive from properties of directed chain graphs. The same is true of the notion of transitivity, which, as mentioned above, can be considered locally as well as non-locally. Locally means that only edges (i.e.
adjacent vertices) are considered, so precedence relations are not transitive from a local perspective. From a more long-distance perspective, of course, graphs like those in (3) must be considered to be transitive; there is a clear sense in which, for example, t precedes g and u. This sense of transitivity follows from the requirement that there be a walk through the graph; the walk through (3) traverses the vertex t before it reaches


g or u, so in this sense it is true that if t precedes a and a precedes g, then t precedes g.

Before concluding this section, note that it is convenient to add explicit beginning and end symbols, which we represent as # and %, respectively. Positing these symbols allows us to say that all vertices on the anchor tier that are of degree two are available to serve as phonological segments, and only the abstract terminal symbols are of degree one. These symbols are also convenient for defining the environments of initial and final segments, needed by many phonological processes. Also, chain graphs supplemented with special beginning and end symbols allow for the definition of free vs. bound morphemes: a free morpheme like tagu is a chain graph with # and % at beginning and end; a bound morpheme would lack one or both of these symbols. The graph in (4) is the type we will use to explicate precedence issues when we consider deletion in §4. (4)

a.  # → t → a → g → u → %
b.  vertices: {#, t, a, g, u, %}
    edges (ordered): {# t, t a, a g, g u, u %}

This section has described the basic principles of graph theory as they apply to notions of phonological representations held by virtually all schools of thought in phonology. The overwhelming majority of phonologists operate on the assumptions, usually tacit, that sequences of phonemes have the properties of asymmetry, irreflexivity, and (in the qualified senses explicated above) transitivity, and the preceding paragraphs have shown that these are best understood in graph theory, where they derive from the properties of directed chain graphs. However, as stated at the beginning of this section, we have been considering only precedence relations on the anchor tier of (1). We now turn to a consideration of precedence relations in autosegmental phonology.
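To make these properties concrete, the following Python sketch (purely illustrative; all function names are invented for exposition and are not part of any published proposal) encodes the graph in (4) as explicit vertex and edge lists and checks the directed-chain-graph conditions discussed above.

```python
# A minimal, purely illustrative sketch: the graph in (4) encoded as
# explicit vertex and edge lists, with checks for the chain-graph
# properties discussed above.

vertices = ["#", "t", "a", "g", "u", "%"]
edges = [("#", "t"), ("t", "a"), ("a", "g"), ("g", "u"), ("u", "%")]

def is_irreflexive(edges):
    # no vertex precedes or follows itself
    return all(x != y for x, y in edges)

def is_asymmetric(edges):
    # if x precedes y, then y does not precede x
    return all((y, x) not in edges for x, y in edges)

def degree(v, edges):
    # the number of edges a vertex appears in
    return sum(v in e for e in edges)

def is_directed_chain(vertices, edges):
    degs = [degree(v, edges) for v in vertices]
    # two vertices of degree one (here # and %); all others of degree two
    return degs.count(1) == 2 and all(d in (1, 2) for d in degs)

def walk(edges, start="#", end="%"):
    # follow each vertex's single successor from # to %; this succeeds
    # without error exactly when the chain is connected from # to %
    succ = dict(edges)
    out, v = [], start
    while v != end:
        out.append(v)
        v = succ[v]
    return out + [end]

print(is_irreflexive(edges))               # True
print(is_asymmetric(edges))                # True
print(is_directed_chain(vertices, edges))  # True
print("".join(walk(edges)[1:-1]))          # tagu
```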

3 Precedence relations and autosegmental phonology

In the preceding section we adopted the expository convenience of depicting phonological content of segments on the anchor tier by means of phoneme symbols. This was a shorthand way of showing feature trees, sketched in (5), which is an elaboration of the diagram in (1). This section is devoted to analyzing the precedence relations among elements that are on different tiers, and we will see that this exercise provides insight into the OCP. A brief explanation of (5) is in order first. (5)

Schematized autosegmental representation of [tagu]
[Diagram: the anchor tier # → x → x → x → x → %, with each x-slot linked to a root node on the root tier; below the root tier sit the laryngeal and place tiers, the place tier carrying COR, DOR, VEL, DOR for the four segments, and the feature tier carrying [ant], [low], [bk], [hi].]

(5), like (1), is drawn so as to invite the reader to visualize a three-dimensional representation. The anchor tier is the same as that in (1), except that the directed edges are shown. The branching below the root tier in (5) depicts the autosegmental nature of features. We assume the general proposals on feature geometry (see chapter 27: the organization of features) from Goldsmith (1990) and Archangeli and Pulleyblank (1994).

The representation in (5) raises the question of whether precedence is encoded only on the anchor tier or whether each tier has its own precedence relations. The difference between these two options can be seen graphically by comparing (6) with (5): (5) contains precedence relations (indicated by arrows) only on the anchor tier, while (6) has precedence relations on every tier. Goldsmith (1976: 28) states that "each autosegmental level is a totally ordered sequence of elements," so it appears that (6) is what is meant in the standard theory of autosegmental phonology. This conclusion requires some inference on our part, because autosegmental representations in the literature universally assume that blank spaces between the printed symbols arrayed left to right across the printed page indicate sequential order. (6)

Within-tier precedence
[Diagram: as in (5), but with directed edges on every tier: # → x → x → x → x → % on the anchor tier, COR → DOR → VEL → DOR on the place tier, and [ant] → [low] → [bk] → [hi] on the feature tier.]

The model in (6) raises a number of thorny issues. One is revealed in Archangeli and Pulleyblank’s discussion of adjacency and its role in the understanding of Obligatory Contour Principle (OCP) effects (Leben 1973; McCarthy 1986; Odden 1986, 1988). Recall that the OCP rules out adjacent, identical elements. The problem of course is to define “adjacency” in autosegmental terms; (7) is Archangeli and Pulleyblank’s definition of adjacency. (Note that we are now switching our example from Margi tagu to using the more abstract symbols employed by Archangeli and Pulleyblank, because we wish to hew as closely as possible to their statements.) (7)

Adjacency (Archangeli and Pulleyblank 1994: 35)
a is structurally adjacent to b iff:
a.  at least one of the two is unassociated, both are on the same tier, and no element intervenes between the two on that tier; or
b.  both a and b are associated to the same anchor tier and no anchor intervenes on that tier between the two anchors to which a and b are associated.

Archangeli and Pulleyblank argue that the definition in (7), when coupled with the OCP, rules out representations in (8a) and (8b) but allows (8c) and (8d). The diagrams in (8) are adapted from Archangeli and Pulleyblank and follow their


convention of representing directed edges by means of the absence of any printed symbol between two segments on the same tier. In the disallowed (8a) and (8b), the two a elements are adjacent, so they are ruled out by the OCP. This is not the case in (8c) and (8d). (8)

a.  *[one x-slot with an associated a, plus a second, floating a on the same tier]
b.  *[two adjacent x-slots, each associated to its own a]
c.  [three x-slots, with a single a associated to more than one of them]
d.  [three x-slots, the first and third each associated to an a, with an unassociated x-slot intervening]

The two occurrences of a elements are adjacent in (8a) because they are adjacent on the tier that houses them (recall that a blank space on the printed page indicates a directed edge), and they are adjacent in (8b) both on their own tier and through their links to adjacent elements on the anchor tier. (8c) has only a single a and therefore cannot violate the OCP even though it is associated to more than one x-slot on the timing tier. Finally, (8d) has an intervening x-slot that creates a gap between the two occurrences of a, thus supposedly preventing them from being adjacent. If we follow Goldsmith’s claim that precedence inheres on all tiers, as in (9), then we see that Archangeli and Pulleyblank’s representations and invocation of the OCP in fact yield quite different results. (9)

Precedence on all tiers
a.  *[one x-slot with an associated a and a floating a, now with a directed edge a → a on the feature tier]
b.  *[x → x, each x-slot associated to its own a, with a → a on the feature tier]
c.  [x → x → x, with a single multiply linked a]
d.  [x → x → x, the first and third x-slots associated to two a elements connected by a → a]

Once precedence is explicitly indicated in the representations, we see that (8a), (8b), and (8d) all violate the OCP, because there are two a elements that are adjacent (i.e. have an edge between them). The specific problem is that if precedence is encoded on every tier, then (8d) will have the two a elements adjacent to each other, regardless of whether an x-slot intervenes on the timing tier or not. The only representation that does not violate the OCP is (8c), because there is only a single a element. Let us now consider what representations violate the OCP if precedence is encoded only on the timing tier, as in (10). (10)

Precedence on only the timing tier
a.  *[one x-slot with an associated a and a floating a; no ordering on the feature tier]
b.  *[x → x, each x-slot associated to its own a]
c.  [x → x → x, with a single multiply linked a]
d.  [x → x → x, the first and third x-slots each associated to an a]

These representations capture the contrast between (10b) and (10d) that Archangeli and Pulleyblank were getting at: the two a elements are adjacent in (10b), because the x-slots they are associated to are adjacent on the anchor tier, while the two a elements in (10d) are not adjacent, because of the intervening x-slot. What rules out (10a)? It follows from the fact that association lines are really undirected edges that there cannot be any floating elements because they would


violate the requirement that phonological representations be equivalent to connected graphs; floating elements would be represented by vertices that are not connected to the rest of the graph. Consequently, "floating" features must be connected to the rest of a phonological representation; this seems to be consistent with current thinking on this topic (see chapter 82: featural affixes).

We will assume that precedence is encoded only on the anchor tier for the remainder of this chapter. This view improves our understanding of Odden's (1986, 1988) critique of the OCP as a language universal: in general, the OCP appears to behave in a cross-linguistically arbitrary way, because the locality of distinctive features is mediated by the timing tier. Consequently, whether two elements are adjacent or not must be determined either directly from the list of edges or by calculating whether a walk exists between the two elements. The implementation of a walk appears to coincide with proposals about "searches" in phonology by Mailhot and Reiss (2007), Samuels (2009), and Nevins (2010) (see §7). This perspective has the advantage that it supplies one universal account of adjacency, which has two different specifications: two elements are adjacent if they either share an edge or if there is a walk connecting them. Each language-particular process must, of course, specify which definition of adjacency is required for its application. The existence of two ways of specifying adjacency is plausibly a major reason why the OCP appears superficially to be so variably valid.
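The two specifications can be made concrete with a small, purely illustrative sketch (the function names are ours and not part of any of the proposals just cited): local adjacency checks for a shared edge, while walk adjacency checks for reachability along directed edges.

```python
# Purely illustrative: the two notions of adjacency over an edge list.

def adjacent_local(x, y, edges):
    # adjacent in the strict sense: the two elements share an edge
    return (x, y) in edges or (y, x) in edges

def adjacent_walk(x, y, edges):
    # adjacent in the extended sense: a walk leads from x to y
    frontier, seen = {x}, set()
    while frontier:
        v = frontier.pop()
        seen.add(v)
        for a, b in edges:
            if a == v and b not in seen:
                if b == y:
                    return True
                frontier.add(b)
    return False

edges = [("#", "t"), ("t", "a"), ("a", "g"), ("g", "u"), ("u", "%")]
print(adjacent_local("t", "g", edges))  # False: no shared edge
print(adjacent_walk("t", "g", edges))   # True: t reaches g via a
```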

4 Deletion and precedence

Deletion is a fundamental phonological phenomenon that must be accounted for by any phonological theory. The naive view of deletion is that segments can simply be eliminated from a representation without any complicating entailments; this is based, of course, on the conception of precedence portrayed by the left-to-right array of symbols across the printed page. Deletion is far more complicated when we consider how precedence structures are altered when a segment is deleted.

For purposes of explicating how deletion affects precedence relations, we return to our Margi example and suppose that a phonological process of Margi were to delete the g from tagu. It does not matter what triggers this operation, nor in which phonological theory the operation is described; we are interested here in specifying precisely what it means to delete a segment when it is considered as a vertex in a directed chain graph. In this section we describe the two characterizations of deletion (see chapter 68: deletion) that are possible within directed chain graphs: one consists of skipping over (or underparsing) the segment to be deleted, and the other involves merging two segments into one. Both appear to be empirically attested. A particularly significant part of this presentation is that two models of Optimality Theory (Containment and Correspondence), as well as the derivational model known as Precedence Based Phonology (Raimy 2000, 2009), converge on the representations and operations revealed by a graph-theoretic explication.

Setting aside considerations about the nature of precedence, (11a) presents the naive mapping between representations undergoing the deletion of g, where the symbol ">" means merely "becomes," without reference to the nature of the operations involved. The question at hand is: once precedence relations are specified, how exactly does (11b) become (11c)?

(11)
a.  tagu > tau
b.  # → t → a → g → u → %
c.  # → t → a → u → %

The first-blush assumption might be that we can simply delete the g vertex from (11b). But this has the immediate effect of removing the two members of the edge set that mention the vertex g, because each directed edge is defined by an ordered pair of vertices; if g were to become unavailable to form edges, edges that contained that vertex prior to its removal cease to exist. Therefore, the result of understanding deletion as the simple removal of a segment (i.e. vertex) and its precedence relations (i.e. the edges it is associated with) is the production of a disconnected graph, as in (12). (12)

a.  # → t → a    u → %
b.  vertices: {#, t, a, u, %}
    edges (ordered): {# t, t a, u %}

The problem is that there is no walk from the beginning symbol # to the end symbol % and specifically there is no precedence relation from a to u, so (12) is not a connected graph; a new precedence relationship must be introduced to assure well-formedness after deletion. We will return to this problem below, but first let us observe that virtually all schools of thought in phonology ignore the fact that deletion must create a new precedence relationship. A clear way of seeing this aspect of deletion is to consider the explicit precedence structures that would support deletion as “underparsing” in the containment model of Optimality Theory proposed by Prince and Smolensky (1993). This version of OT claims that there is no loss of structure in deletion processes, but instead the deleted elements are unparsed in the output. A representation of precedence relations that meets this requirement for our hypothetical case of deletion is in (13). It is important to note that (13) is not a directed chain graph, because both a and u are mentioned in three edges; we return to this point below. (13)

a.  # → t → a → g → u → %, with an added "jump link" a → u detouring around g
b.  vertices: {#, t, a, g, u, %}
    edges (ordered): {# t, t a, a u, a g, g u, u %}

The operation of deletion in a containment model is achieved through the addition of the edge {a u} in (13b) to the input graph in (4), in effect creating a detour path around the g and rendering it underparsed. The “jump link” in (13a) illustrates the graph that results from adding this edge. As we show below, the only walk through (13) follows the added edge {a u} rather than {a g}, yielding the string tau. The representation in (13) contains all the information present in the input tagu and the output tau. It follows that in the containment model, GEN produces the structure in (13) as a candidate. Without considering the parochial constraint hierarchy that might select it as a candidate, the preceding suffices to show that the addition of a new edge is a viable way to achieve deletion in a containment model.


All graphs representing phonological structure must be interpreted by some sort of phonetic implementation mechanism, so the question now before us is how to ensure that this mechanism follows the newly added "a to u" precedence relation, and not the old "a to g" relation. There are two general solutions to the favoring of the newly added precedence relation, and they differ in whether computation in phonology is "parallel" or "derivational."

"Parallel" computation is the type found in the containment model of OT, where a GEN function creates a list of different candidates and an EVAL function determines which candidate is the most harmonic, given a language-specific ranking of CON. In this model, GEN is free to create candidates that contain any number of novel precedence relations that do not exist in the input. Therefore, GEN will generate candidates with underparsed segments, and these candidates indicate that phonetic implementation will follow the "deletion edge" and not the edges associated with the deleted g. Whether candidates with detours emerge as the most harmonic will be the result of parochial constraint interactions. In short, preference for newly added precedence links that produce deletion can be implemented easily in the containment model of OT.

This approach to deletion can be straightforwardly carried over to the Correspondence model of OT (McCarthy and Prince 1995). The difference between the implementation of deletion by underparsing in the Containment and Correspondence models lies in the status of a representation with conflicting precedence specifications, as in (13). Whereas the Containment model can directly produce this type of representation as its output, with the effect that the segments that are "detoured" around are the underparsed segments, the Correspondence model can produce the chain graph in (11c) directly from (11b) by deleting the vertex g and adding the new edge {a u}, operations freely performed by GEN. This computation requires the comparison of (11c) with (11b) in order to determine the Max, Dep, and Contiguity violations. Thus the Containment and Correspondence models of OT converge on deletion as the addition of a new precedence link.

Preference for newly added links, and consequent deletion, can be implemented in a derivational model of phonology by imposing an order on the list of edges. Recall that (13) is not a directed chain graph, which is a requirement for phonetic implementation. We return to this point in §7, but for now note that, as Idsardi and Raimy (forthcoming) point out, a derivational theory of phonology requires representations to have the characteristics of a directed chain graph only at the interface between modules. "Serialization" is a process that creates a walk from the beginning symbol # to the end symbol %; this walk creates a representation that is a directed chain graph. Idsardi and Shorey (2007) propose that serialization is a walk that uses the order of the list of edges to decide which precedence link to follow when there is a choice. Thus, in (13b), when the walk reaches the a vertex, it must choose between following the {a u} or the {a g} edge. This choice is made by the order of the edge list: one of these edges will be "first" and thus will be followed.
In order for an added precedence link to produce deletion effects, it must be added to the list of edges in a manner that places it before the precedence links associated with the segment to be deleted.
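The following sketch makes this concrete (it is purely illustrative and is not Idsardi and Shorey's own implementation): a walk from # to % that, at each vertex, follows the earliest edge in the ordered list that has not yet been traversed. With the jump link {a u} ordered before {a g}, as in (13b), the walk detours around g.

```python
# Illustrative only, not Idsardi & Shorey's implementation: serialization
# as a walk that follows the earliest unused edge in the ordered edge list.

def serialize(edges, start="#", end="%"):
    used, out, v = set(), [], start
    while v != end:
        out.append(v)
        # the first edge out of v that has not yet been traversed
        i = next(i for i, (a, b) in enumerate(edges)
                 if a == v and i not in used)
        used.add(i)
        v = edges[i][1]
    return "".join(out[1:])  # drop the initial '#'

# (13b): the detour edge {a u} is ordered before the old {a g}
deletion = [("#", "t"), ("t", "a"), ("a", "u"),
            ("a", "g"), ("g", "u"), ("u", "%")]
print(serialize(deletion))  # 'tau' -- g is underparsed
```

The same walk will serve, unchanged, for the epenthesis, truncation, and looping graphs taken up in §7.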


We emphasize that the detour approach to deletion is virtually the same in three theories of phonology, i.e. the Containment and Correspondence models of OT and the derivational model. The differences among the phonological theories arise only from specific details of how the deletion process is actually computed in the different models.

A second approach to deletion is also possible in graph-theoretic terms and can be understood as involving a type of coalescence, as suggested by de Lacy (2007) for OT. Another way to map a representation like (11b) to (11c) is to merge two of the vertices into one, without any addition of new precedence relations. We must first decide whether the "deleted" segment is merged with the preceding or the following segment; for purposes of explication in our hypothetical example, we merge the deleted g with the preceding a. The operation of vertex merger is broken into steps in (14), so that the important questions can be identified. (14)

Deletion as vertex merger
a.  # → t → a → g → u → %
    vertices: {#, t, a, g, u, %}
    edges (ordered): {# t, t a, a g, g u, u %}
b.  # → t → [ag] → u → %
    vertices: {#, t, [ag], u, %}
    edges (ordered): {# t, t [ag], [ag] [ag], [ag] u, u %}
c.  # → t → [a] → u → %
    vertices: {#, t, [a], u, %}
    edges (ordered): {# t, t [a], [a] u, u %}

(14a) presents the representation for tagu as in (4). Vertex merger coalesces every occurrence of a and g into a new vertex, producing the representation in (14b), where the merger of the a and g is indicated by the [ag] composite segment. The inevitable result of this is the production of the edge {ag ag}, the edge that loops back onto itself, thus violating the constraint requiring irreflexivity. This reflexive edge will produce a geminate or long version of the composite [ag] segment. Although this is not the desired result for plain deletion, this situation produces compensatory lengthening effects (Hayes 1989; Sloan 1991: 80–87) in a straightforward manner, without recourse to moras (see §7). The final steps required to produce (14c) are to eliminate the looping-back arrow if it is not desired and to specify the phonetic interpretation of the vertex ag. There are a number of ways of accomplishing the former, one of which is to specify a parameter that allows languages to eliminate any "reflexive" edges (edges that are defined by two mentions of the same vertex) whenever they are formed. Any theory of phonology that has the resources to separate the melodic content of a segment from its timing slot easily handles the phonetic interpretation aspect of this process. Some languages retain features of both segments in the composite vertex; for example, classic coalescence in Sanskrit, where /a/ + /i/ > /e/. In our example, we want the segment to simply be interpreted as a, which will require some extra statement.
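The merger step itself can be sketched as a relabeling of vertices (again purely illustrative; the clean-up of the reflexive edge corresponds to the language-particular parameter just mentioned):

```python
# Illustrative sketch of vertex merger (14): every occurrence of a and g
# is relabeled as the composite [ag]; the reflexive edge arises by itself.

def merge(edges, v1, v2, new):
    relabel = lambda v: new if v in (v1, v2) else v
    return [(relabel(a), relabel(b)) for a, b in edges]

edges = [("#", "t"), ("t", "a"), ("a", "g"), ("g", "u"), ("u", "%")]
merged = merge(edges, "a", "g", "[ag]")
print(merged)
# [('#','t'), ('t','[ag]'), ('[ag]','[ag]'), ('[ag]','u'), ('u','%')]

# keeping the reflexive edge yields length (compensatory lengthening);
# a language that disallows reflexive edges deletes them, as in (14c)
plain = [(a, b) for a, b in merged if a != b]
```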


The preceding description of deletion in connection with compensatory lengthening and coalescence is theory-neutral, and will be implemented differently in different theories. The point is that considering explicit precedence relations shows how deletion, coalescence, and compensatory lengthening are deeply connected, regardless of how different theories focus on the various surface effects of deletion.

One question that arises from this discussion is whether graph theory offers an embarrassment of riches in the representational possibilities for producing deletion. It appears that all of these ways of accomplishing deletion are attested in different languages. For example, Tohono O'odham vowel syncope in reduplication is best characterized by the detour approach (Raimy 2000: 113–114). Chumash /l/-deletion (Raimy 1999: 82–83) is best characterized with the coalescence approach, as are cases of classic coalescence. Some languages appear to have both options available in competition with each other (Indonesian nasal assimilation; Raimy 2000: 99–112). Consequently, all of the analytical options based on different representational opportunities appear to be attested. Questions about deletion are thus typological in nature: phonologists should ask which kind of deletion a particular process in a particular language exhibits, whether the type of deletion correlates with compensatory lengthening and/or coalescence processes, and what diagnostics distinguish the different types of deletion.

An important theme of this section is that making precedence relations explicit via graph theory is useful to all theories of phonology. This should not be misunderstood as suggesting that there are no differences among theories of phonology; our point is that specific differences with respect to precedence can be identified explicitly through differences in the necessary graph structures. Put another way, graph theory provides a reasonably neutral lingua franca in which to discuss and explore the nature of precedence in phonology. Up until this point we have been discussing phonology whose elements are discrete, timeless entities; we now turn to the role of precedence in theories that view segments as containing real-time articulatory gestures.

5 Precedence and discrete point theory

One assumption in the preceding sections of this chapter that can and should be called into question is that phonology operates strictly on abstract discrete elements. This is the standard assumption in the class of theories of phonology that we will call formal phonology, but not all theories of phonology make it. According to the theory of Articulatory Phonology (Browman and Goldstein 1986, 1989, 1990a, 1990b), vertices represent "gestures," i.e. abstract representations of dynamic articulatory events or actions. The basic elements thus exist in real time, and are not abstract, timeless points. Such theories are also amenable to graph-theoretic representation, because they can be seen to have multiple precedence structures, one for each articulator in a segment. In fact, from a precedence-structure point of view, it may be just the difference between a single abstract precedence structure and multiple, more concrete, articulator-based precedence structures that determines whether the overall precedence structure allows or disallows overlap.

Gafos (2002) argues that it is the nature of precedence that is at issue in the formal vs. gestural models of phonology. Gafos (2002: 270) observes that "the phonologically relevant notion of time is overlap of dynamic units"; this is in contrast


to “linear order of static units [being] the only relevant notion of time in phonology.” Both approaches still order phonological elements; they differ only in the nature of the precedence graphs. For formal phonology, the vertices are segments that are timeless points, and overlap is not definable. In gestural phonology, the vertices are gestures that have both spatial and temporal aspects, so there is a more complicated relation of overlap. Bird and Klein (1990) explicitly develop the model of precedence for phonological representations that can overlap. Another way of conceptualizing precedence in gestural phonology is to assign weight to the edges in a precedence graph to indicate the amount of overlap between each gesture; alternatively, the weights may indicate the actual time span between the gestures. Any model of phonology can be cast by precedence graphs. Different models of phonology simply argue for or assume different characteristics for the relevant precedence graph. The different approaches to phonology – formal and gestural – are only incompatible if one assumes that either model will account for all phonological phenomena. Both models are necessary in their own domains and provide insights into different aspects of phonology. Formal phonology with atemporal segments represents phenomena closer to the morphology–phonology interface, while gestural phonology represents phenomena closer to the phonetics– phonology interface.

6 Graph theory as a tool for phonology

One major advantage of adopting graph theory as the formal underpinning of precedence in phonology is that it supplies general and specific knowledge that can be applied to uniquely phonological questions. Two topics in phonology that directly benefit from general knowledge of the nature of graphs are the no crossing constraint and locality in phonology. The no crossing constraint originates in Goldsmith (1976: 27) as the statement “Association lines do not cross.” Archangeli and Pulleyblank (1994: 39) present this constraint as a ban on the representation in (15a), which we augment with explicit precedence relations in (15b). (15)

The no crossing constraint (Archangeli and Pulleyblank 1994: 39)
a.  [two anchor elements xi, xj on one tier and two features a, b on another, with xi associated to b and xj associated to a, so that the association lines cross]
b.  [the same configuration with explicit directed edges: xi → xj on the anchor tier and a → b on the feature tier]

The problem with (15) is that it appears to encode conflicting precedence relations. The association lines indicate that a and xj occur together, as do b and xi. The two tiers thus provide conflicting information about which feature precedes which, because the representation encodes both a → b and b → a. Two assumptions dictate that (15b) is ineluctably derived from (15a): that the left-to-right array of printed symbols encodes directed edges, and that association lines must be straight. Coleman and Local (1991) argue that the no crossing constraint is an incoherent concept in autosegmental phonology, in part because there is no mathematical


justification for insisting on straight lines; that restriction is merely a convention for drawing diagrams and has no formal content. (15a) is formally equivalent to all the diagrams in (16), which do not portray a no crossing constraint violation. (16a) is a graphic trick making an association line curved; (16b) makes the diagram appear three-dimensional, and (16c) simply moves one of the elements to the other side of a tier. (16)

Graphic workarounds
a.  [the configuration in (15a) with one association line drawn as a curve]
b.  [the same configuration drawn so as to appear three-dimensional]
c.  [the same configuration with one of the elements moved to the other side of its tier]

More importantly, Coleman and Local argue that the no crossing constraint implies that autosegmental representations must be "planar," an important question of graph theory that bears directly on when lines cross. Kuratowski (1930) proves that two kinds of graph cannot be drawn within a single plane without lines crossing. One is the K3,3 graph in (17). This is a bipartite graph, which means that the vertices can be partitioned into two sets such that each edge of the graph has one end in each set. The presence of a K3,3 structure means that the graph containing it is non-planar if the lines are not to cross. According to Kuratowski's Theorem, a graph is non-planar if and only if it contains a subgraph that is a subdivision of either K3,3 or K5 (see Kuratowski 1930 for the K5 graph, which does not appear to be relevant to phonology). (17)

Kuratowski's K3,3 graph
[Diagram: two sets of three vertices, with an edge connecting every vertex of one set to every vertex of the other.]

Coleman and Local (1991) establish that most supposedly three-dimensional (non-planar) representations in the literature are in fact planar; on the other hand, they also point out that phonology must countenance non-planar representations. The autosegmental representation of the word room in Guyanese English in (18), adapted from Coleman and Local (1991: 330), is an example of a non-planar graph (i.e. it contains a K3,3 graph); the three features [nasal], [round], and [back] are on independent tiers and each is associated with all three segments. The first x-slot represents the phoneme /r/, the second /u/, and the last /m/. Coleman and Local report that the three features depicted here spread to each of the three segments independently of one another, and therefore each should be on a separate tier. It is not possible to depict these associations in two-dimensional space without lines crossing.

(18)
[Diagram: three x-slots, representing /r/, /u/, and /m/, each associated to all three of the features [nasal], [round], and [back], with each feature on its own tier.]

The import of Coleman and Local's work is that autosegmental representations can be non-planar, which means that different distinctive features will not be able to cross lines by definition, because the association lines will be in different planes. The practical result is that phonologists must be explicit about the nature of precedence in phonological representations and must distinguish between conventions for drawing convenient diagrams and formal properties of precedence.

Restrictions on precedence relations in phonology do not necessarily reside in the representations themselves, but may result from how they are implemented. Graphs are "abstract data structures" (Aho et al. 1985) that can be implemented in many different ways, and the manner of implementation affects which operations are easier (or even possible) to perform. For example, we have so far portrayed phonological representations as a list of vertices (x-slots and associated features) and edges (precedence relations) as the basis for implementation. Some scholars (e.g. Heinz 2007) use adjacency tables as the basis for implementation, as exemplified for tagu in Table 34.1 (which is technically a local adjacency table). An adjacency table is made by listing the vertices of the graph as the headers for columns and rows, and indicating in each cell whether the relevant precedence relationship holds. The row headers in Table 34.1 indicate which segment precedes the column headers: Table 34.1 encodes that t is the word-initial segment, because no segment precedes it, as indicated by the lack of any mark in the t column; t precedes a, as indicated by the mark in the t row's a column; a precedes g; and so on. This type of precedence encoding makes information about immediate precedence easily accessible, reflecting the frequency of phonological operations that are strictly local. A drawback is that long-distance relationships required to account for phenomena like long-distance assimilation and vowel harmony (see chapter 77: long-distance assimilation of consonants; chapter 91: vowel harmony: opaque and transparent vowels; chapter 118: turkish vowel harmony; chapter 123: hungarian vowel harmony) must be calculated by a walk (see §2) through the representation. Adjacency tables can also encode "transitive precedence": Table 34.2 does this by marking, in each row, all of the other segments that a particular segment precedes transitively.

Table 34.1  Immediate adjacency

                 follows
  precedes    t    a    g    u
     t             x
     a                  x
     g                       x
     u

Table 34.2  Transitive precedence

                 follows
  precedes    t    a    g    u
     t             x    x    x
     a                  x    x
     g                       x
     u

Table 34.2 differs from Table 34.1 in that, for each row, a mark is put in the column of every segment that the row's segment precedes transitively. Consequently, t is encoded as the word-initial segment because no other segment precedes it, and it precedes all other segments, as indicated by the marks in the a, g, and u columns of the t row; u is in word-final position because it precedes no other segment, and all segments precede it. The advantage of this type of precedence encoding is that long-distance relationships are directly encoded. For example, the fact that a has a mark in the u column indicates that there is a direct phonological relationship between these two segments, which would support processes like vowel harmony. The main disadvantage is that local adjacency has to be calculated, by determining that there are no segments "between" the segments in question.

Adjacency tables in general have two more drawbacks. The first is that a new adjacency table has to be created whenever a new segment is added to a representation, because the table is defined by the segments in the representation; this is computationally expensive. The second is that adjacency tables are not very economical for the type of graph that is common in phonological representations. Table 34.1 contains a total of 16 cells, but only three of them encode precedence information relevant to local operations. All cells in a table impose a representational cost, so if there is any pressure to keep phonological representations economical, adjacency tables carry a great deal of empty cost. Phonological representations in general are going to be "sparse graphs," favoring the use of lists of vertices and edges (Aho et al. 1985).

We approach the question of how to encode precedence information in phonological representations by considering the computations required to implement them. The type of graph introduced in (3) requires a walk through it to ensure that it is connected. The local adjacency Table 34.1 requires a walk to calculate long-distance relationships, and the transitive precedence Table 34.2 requires computation to determine local adjacency. This general conclusion coincides with recent work in both derivational and parallel models of phonology which argues that phonological environments need to be calculated in some manner: see Mailhot and Reiss (2007), Samuels (2009), and Nevins (2010) for the derivational model of searching, and work along the lines of Rose and Walker (2004) and Krämer (1999) for parallel Correspondence Theory approaches.
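Both tables can be computed from a list of edges; the sketch below is purely illustrative (it is not Heinz's implementation) and derives Table 34.2 from Table 34.1 as a transitive closure.

```python
# Illustrative: computing Tables 34.1 and 34.2 from the edge list for tagu.

segs = ["t", "a", "g", "u"]
edges = [("t", "a"), ("a", "g"), ("g", "u")]

# Table 34.1: immediate adjacency -- one marked cell per edge
immediate = {(a, b): (a, b) in edges for a in segs for b in segs}

# Table 34.2: transitive precedence -- the transitive closure of 34.1
transitive = dict(immediate)
for k in segs:                      # Warshall-style closure
    for i in segs:
        for j in segs:
            if transitive[(i, k)] and transitive[(k, j)]:
                transitive[(i, j)] = True

print(sorted(p for p, marked in transitive.items() if marked))
# [('a','g'), ('a','u'), ('g','u'), ('t','a'), ('t','g'), ('t','u')]
```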

7 Extensions of precedence graphs

Two general phonological phenomena are directly derivable from the previous discussion of how deletion is accomplished. First, it is natural to recognize that epenthesis (see chapter 67: vowel epenthesis) should be the mirror image of deletion and in fact this is the case. (19) presents the hypothetical case of the form tagu undergoing epenthesis of r after the a. (19)

a.  # → t → a → g → u → %
b.  # → t → a → r → g → u → %


Just as there are two distinct ways to produce the deletion of a segment, there are two distinct ways to epenthesize a segment. The first way is to explicitly add precedence relations to and from a new segment from the relevant points in the representation. This would take (19a) and produce the representation in (20). (20)

# → t → a → g → u → %, with the new segment r set below the line and added edges a → r and r → g

The new structure added to (19a) to produce (20) is offset by setting the segment r below the underlying segments. Notice the parallels between (20) and (13a), and recall that the actual graphic layout does not matter. The difference is that deletion detours around an old segment, while epenthesis detours through a new segment. Another parallel between the epenthesis and deletion representations is that both need to be "serialized" to resolve the precedence conflict present in them. As earlier in the chapter, all models of phonology have the resources to produce the proper output for this form.

"Fission" is to epenthesis as coalescence is to deletion. To see this, consider the explicit representation of x-slots and melodies in (21), which shows the splitting of an x-slot to produce a new segment. (21)

a.  the melodies t, a, g, u, each associated to an x-slot in # → x → xi → x → x → %
b.  the xi slot split in two, # → x → xi → xi → x → x → %, with the melody a associated to the first xi and the second xi left bare

This type of epenthesis raises the question of how the new x-slot is formed without taking the associated melody along with it; reasons of space preclude a full exposition of this question here, beyond pointing out that the interpretation of a bare x-slot will be determined on a language-specific basis (see chapter 67: vowel epenthesis; chapter 58: the emergence of the unmarked).

The differences between these two types of epenthesis suggest differences in the type of epenthetic segment. The approach in (20) supports a prespecified epenthetic segment, which cannot be derived from markedness conditions, while the approach in (21) suggests an "Emergence of the Unmarked" (McCarthy and Prince 1994) type of epenthetic segment, because an empty x-slot needs to be interpreted. A final note: just as the coalescence approach to deletion creates an intermediate representation that can account for compensatory lengthening effects, so the fission approach can account for simple segment-lengthening effects, by fissioning the melody along with the x-slot.

If segmental length is encoded on the x-tier, as opposed to being marked by a mora (see Ringen and Vago 2010), two representations for true geminates (Hayes 1986; Schein and Steriade 1986) follow naturally. The structure in (22a) is the traditional multiply linked melody representation, while (22b) is a novel looped-segment representation for a geminate.

(22)
Two types of geminates
a.  traditional: a single melody m associated to two x-slots, x → x
b.  looped: a single melody m associated to a single x-slot that bears a reflexive edge (a loop from x back to itself)

There appears to be evidence from Tohono O'odham (Raimy 2000: 116) that both types of geminate segment can exist in a single language. The presence or absence of geminate inalterability effects (Hayes 1986; Schein and Steriade 1986) may also be seen as a difference in representations.

The representations for deletion, epenthesis, and geminates given above reveal that there are natural graph representations that do not obey the constraints that define a chain graph. The representations for deletion and epenthesis involve the addition of at least one precedence relation that causes the graphs to no longer be chain graphs. It appears that phonology tolerates graphs that do not meet the strict conditions defining a chain graph, at least for a portion of the phonological computation, however it is conceived. This suggests that other (morpho)phonological phenomena will have natural analyses based on different modifications to precedence graphs.

Hypocoristic formation is one such process. Alber (2007, 2009) presents data on hypocoristic formation in northern Italian dialects: the data in (23) show that hypocoristics are formed by truncating all phonological material after the first vowel of the form. (23)

Northern Italian hypocoristics (Alber 2007, 2009)
Truncated    Source name
Fra          Francesca
Cri          Cristina
Lu           Luisa
Ste          Stefana

Hypocoristics are truncated forms, so we should expect truncation to parallel deletion. This can be implemented by allowing the morphological rule of hypocoristic formation to add a precedence link from the “first vowel” to the end symbol, %, as in (24). (24)

Hypocoristic formation in Italian via truncation
# → f → r → a → n → c → e → s → c → a → %, with an added edge from the first a to %

The representation in (24) is not a chain graph, so it will undergo serialization, and the truncated form will result. The main advantage of this approach is that the deletion involved in truncation is derived from the addition of phonological material, namely a new precedence link.
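Under the edge-ordered serialization sketched in §4 (repeated here so the fragment stands on its own; this is our illustration, not Alber's analysis), the added edge from the first vowel to % derives the truncated form, provided it is ordered before that vowel's old outgoing edge.

```python
# Illustrative: truncation as an added jump link to %, then serialization.

def serialize(edges, start="#", end="%"):
    # the edge-ordered walk from the sketch in §4
    used, out, v = set(), [], start
    while v != end:
        out.append(v)
        i = next(i for i, (a, b) in enumerate(edges)
                 if a == v and i not in used)
        used.add(i)
        v = edges[i][1]
    return "".join(out[1:])

# the chain # f r a n c e s c a %; repeated letters are conflated here,
# which is harmless for this example, though a full implementation would
# use distinct slot tokens for the two a's and the two c's
base = list(zip("#francesca", "francesca%"))
edges = [("a", "%")] + base   # the jump link is ordered first
print(serialize(edges))       # 'fra'
```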


A minimal extension of the treatment of epenthesis as the addition of a precedence link produces infixing phenomena. Toba Batak infixation, taken from Halle (2001), is presented in (25) (stress is suppressed for expository purposes). (25)

Nominalizer -al- in Toba Batak
batuk    b-al-atuk    'ladder'
ogo      al-ogo       'wind'

This infixing pattern can be described as the affix al preceding the first vowel of the form. Whether al appears as an infix or a prefix is derived from whether the first vowel of the form is in word-initial position. This variation does not change the positional generalization about where this affix is concatenated to the base. (26) provides the precedence graphs for the examples in (25). (26)

a.  the affix a → l placed above # → b → a → t → u → k → %, with added edges splicing it in immediately before the base's first vowel
b.  the affix a → l placed above # → o → g → o → %, again spliced in immediately before the first vowel

As with the previous examples, the addition of the affix al produces a representation that violates the characteristics of a chain graph, which forces the representation to be serialized at some point. See Yu (2007) for further discussion of infixation.

Our last example of (morpho)phonological processes that are illuminated by explicit precedence graphs is from Spokane, in (27). Repetitive morphology in Spokane is interesting because it combines elements of infixation and reduplication. The content of the repetitive morpheme is /e/; it appears to infix if the root begins with two consonants (27a), but causes reduplication if the base begins with a single consonant (27b). (27)

Repetitive morphology in Spokane (simplified, from Bates and Carlson 1992)
a.  repetitive    base
    petax         ptax     'spit'
    qesip         qsip     'long ago'
b.  repetitive    base
    sesil         sil      'chop'
    kekul         kul      'make'

The precedence graphs that capture this behavior are shown in (28). (28)

a.  # → p → t → a → x → %, with the added segment e and edges p → e and e → t
b.  # → s → i → l → %, with the added segment e and edges s → e and e → s (a loop back to s)


Although the representations in (28) appear to be fundamentally distinct, they can be constructed with a single generalization on how the /e/ is concatenated to the base. The repetitive morpheme /e/ follows the first segment of the base and precedes the consonant before the first vowel. In (28a) and bases with two consonants in general, this description causes infixing of the /e/, because the two parts of the description are distinct, while in bases that begin with a single consonant, the /e/ loops back to this single consonant. The standard analysis of serializing precedence graphs that do not conform to the conditions for a chain graph produces a single repetition when there is a “loop” in a precedence graph like that in (28b). This observation forms the basis of the approach to reduplication proposed by Raimy (2000, 2009) (see chapter 100: reduplication).
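Serializing these graphs with the same edge-ordered walk (serialize as defined in the truncation sketch above; again our illustration, not Raimy's implementation) yields exactly one pass around the loop in (28b), since each edge may be traversed only once, and plain infixation for (28a).

```python
# (28b): /e/ follows s and loops back to s; the loop edges are ordered first
edges = [("s", "e"), ("e", "s"), ("#", "s"),
         ("s", "i"), ("i", "l"), ("l", "%")]
print(serialize(edges))   # 'sesil' -- one trip around the loop

# (28a): /e/ follows p and precedes t, so there is no loop
edges = [("p", "e"), ("e", "t"), ("#", "p"), ("p", "t"),
         ("t", "a"), ("a", "x"), ("x", "%")]
print(serialize(edges))   # 'petax' -- the old edge {p t} goes unused
```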

8 Conclusions

Adjacency and linearity have been important topics in phonology for many years. As de Lacy (2007) noted, all rules/constraints seem to operate on elements that are in some sense adjacent; McCarthy (1989) spoke of "strict locality" as a central topic in phonology. Our understanding of adjacency is that all questions of adjacency are questions of precedence. Graph theory provides a theory-neutral framework in which formal accounts of precedence can be proposed and evaluated. With a formal model of precedence in hand, murkier questions of how adjacency operates in different models of phonology can be addressed straightforwardly.

An explicit understanding of the logic of phonological sequences allows us to better understand phonological "action," especially "action at a distance." Mailhot and Reiss (2007), Samuels (2009), and Nevins (2010) propose that fundamental phonological operations involve search-and-copy algorithms. The immediate question raised by these proposals is what these search procedures operate on. We suggest the precedence graphs discussed in this chapter as the answer.

This chapter has presented a survey of potentially useful graphs matched with fundamental phonological and morphophonological phenomena. All models of phonology require a theory of representations, because computation is inherently connected to representations (and vice versa) (McCarthy 1988: 84). Thus, by increasing the precision of our attention to the representation of precedence in phonology, the precision of our knowledge about computation is advanced. Questions of locality in phonology are long-standing, but the adoption of graph theory as a representational tool for precedence revolutionizes the questions that can be asked about phonology, in the same way that syllabic representation (Kahn 1976), autosegmental representation (Goldsmith 1976), and prosodic representation (McCarthy 1981) have done in the past.


REFERENCES

Aho, Alfred, John Hopcroft & Jeffrey Ullman. 1985. The design and analysis of computer algorithms. Reading, MA: Addison-Wesley.
Alber, Birgit. 2007. Deutsche und Italienische Kurzwörter im Vergleich. In Claudio Di Meola, Livio Gaeta, Antonie Hornung & Lorenza Rega (eds.) Perspektiven Zwei, 101–112. Rome: Istituto Italiano di Studi Germanici.
Alber, Birgit. 2009. The foot in truncation. Paper presented at the CUNY Conference on the Foot, January 2009. Available (May 2010) at http://www.cunyphonologyforum.net/footconf.php.
Anderson, Stephen R. 1974. The organization of phonology. New York: Academic Press.
Archangeli, Diana & Douglas Pulleyblank. 1994. Grounded phonology. Cambridge, MA: MIT Press.
Bach, Emmon. 1968. Two proposals concerning the simplicity metric in phonology. Glossa 2. 128–149.
Bagemihl, Bruce. 1991. Syllable structure in Bella Coola. Linguistic Inquiry 22. 589–646.
Bates, Dawn & Barry F. Carlson. 1992. Simple syllables in Spokane Salish. Linguistic Inquiry 23. 653–659.
Bird, Steven & Ewan Klein. 1990. Phonological events. Journal of Linguistics 26. 33–56.
Blevins, Juliette. 2003. The independent nature of phonotactic constraints: An alternative to syllable-based approaches. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 375–403. Cambridge: Cambridge University Press.
Browman, Catherine P. & Louis Goldstein. 1986. Towards an articulatory phonology. Phonology Yearbook 3. 219–252.
Browman, Catherine P. & Louis Goldstein. 1989. Articulatory gestures as phonological units. Phonology 6. 201–251.
Browman, Catherine P. & Louis Goldstein. 1990a. Gestural specification using dynamically defined articulatory structures. Journal of Phonetics 18. 299–320.
Browman, Catherine P. & Louis Goldstein. 1990b. Representation and reality: Physical systems and phonological structure. Journal of Phonetics 18. 411–424.
Cairns, Charles. 1988. Phonotactics, markedness and lexical representation. Phonology 5. 209–236.
Cairns, Charles & Eric Raimy. 2009. Architecture and representations in phonology. In Raimy & Cairns (2009), 1–16.
Coleman, John & John Local. 1991. The "No Crossing Constraint" in autosegmental phonology. Linguistics and Philosophy 14. 295–338.
Czaykowska-Higgins, Ewa & Marie Louise Willet. 1997. Simple syllables in Nxa'amxcin. International Journal of American Linguistics 63. 385–411.
de Lacy, Paul. 2007. The formal properties of phonological precedence. Paper presented at the CUNY Conference on Precedence Relations, January 2007. Available (May 2010) at http://www.cunyphonologyforum.net/forum.php.
Fudge, Erik C. 1969. Syllables. Journal of Linguistics 5. 253–286.
Fudge, Erik C. 1987. Branching structure within the syllable. Journal of Linguistics 23. 359–377.
Gafos, Adamantios I. 2002. A grammar of gestural coordination. Natural Language and Linguistic Theory 20. 269–337.
Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT. Published 1979, New York: Garland.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Halle, Morris. 2001. Infixation versus onset metathesis in Tagalog, Chamorro, and Toba Batak. In Michael Kenstowicz (ed.) Ken Hale: A life in language, 153–168. Cambridge, MA: MIT Press.


Hayes, Bruce. 1986. Inalterability in CV phonology. Language 62. 321–351.
Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306.
Heinz, Jeffrey. 2007. The inductive learning of phonotactic patterns. Ph.D. dissertation, University of California, Los Angeles.
Hulst, Harry van der. 2008. The syllable in RCVP: Structure and licensing. Paper presented at the CUNY Conference on the Syllable, January 2008. Available (May 2010) at http://www.cunyphonologyforum.net/syllconf.php.
Idsardi, William J. & Eric Raimy. Forthcoming. Three types of linearization and the temporal aspects of speech. In Theresa Biberauer & Ian Roberts (eds.) Challenges to linearization. Berlin & New York: Mouton de Gruyter.
Idsardi, William J. & Rachel Shorey. 2007. Unwinding morphology. Paper presented at the CUNY Phonology Forum Conference on Precedence Relations, January 2007. Available (May 2010) at http://www.cunyphonologyforum.net/forum.php.
Kahn, Daniel. 1976. Syllable-based generalizations in English phonology. Ph.D. dissertation, MIT.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Krämer, Martin. 1999. A correspondence approach to vowel harmony and disharmony. (ROA-293.)
Kuratowski, Kazimierz. 1930. Sur le problème des courbes gauches en topologie. Fundamenta Mathematicæ 15. 271–283.
Langacker, Ronald. 1969. Mirror image rules II: Lexicon and phonology. Language 45. 844–862.
Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT.
Mailhot, Frédéric & Charles Reiss. 2007. Computing long-distance dependencies in vowel harmony. Biolinguistics 1. 28–48.
McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12. 373–418.
McCarthy, John J. 1986. OCP effects: Gemination and antigemination. Linguistic Inquiry 17. 207–263.
McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108.
McCarthy, John J. 1989. Linear order in phonological representation. Linguistic Inquiry 20. 71–99.
McCarthy, John J. & Alan Prince. 1994. The emergence of the unmarked: Optimality in prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Rutgers University (ROA-13).
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
Nevins, Andrew. 2010. Locality in vowel harmony. Cambridge, MA: MIT Press.
Odden, David. 1986. On the role of the Obligatory Contour Principle in phonological theory. Language 62. 353–383.
Odden, David. 1988. Anti antigemination and the OCP. Linguistic Inquiry 19. 451–475.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Raimy, Eric. 1999. Representing reduplication. Ph.D. dissertation, University of Delaware.
Raimy, Eric. 2000. The phonology and morphology of reduplication. Berlin & New York: Mouton de Gruyter.
Raimy, Eric. 2009. Deriving reduplicative templates in a modular fashion. In Raimy & Cairns (2009), 383–404.
Raimy, Eric & Charles Cairns (eds.) 2009. Contemporary views on architecture and representations in phonology. Cambridge, MA: MIT Press.


Ringen, Catherine & Robert M. Vago. 2010. Geminates: Heavy or long? In Charles Cairns & Eric Raimy (eds.) Handbook of the syllable. Leiden: Brill. Rose, Sharon & Rachel Walker. 2004. A typology of consonant agreement as correspondence. Language 80. 475–531. Samuels, Bridget. 2009. The structure of phonological theory. Ph.D. dissertation, Harvard University. Saussure, Ferdinand de. 1916. Cours de linguistique générale. Lausanne & Paris: Payot. Schein, Barry & Donca Steriade. 1986. On geminates. Linguistic Inquiry 17. 691–744. Sievers, Eduard. 1881. Grundzüge der Phonetik zur Einführung in das Studium der Lautlehre der indogermanischen Sprachen. Leipzig: Breitkopf & Härtel. Sloan, Kelly D. 1991. Syllables and templates: Evidence from Southern Sierra Miwok. Ph.D. dissertation, MIT. Steriade, Donca. 1999. Alternatives to syllable-based accounts of consonantal phonotactics. In Osamu Fujimura, Brian D. Joseph & Bohumil Palek (eds.) Item order in language and speech, 205–242. Prague: Karolinum Press. Sweet, Henry. 1877. A handbook of phonetics. Oxford: Clarendon Press. Trubetzkoy, Nikolai S. 1939. Grundzüge der Phonologie. Göttingen: van der Hoeck & Ruprecht. Translated 1969 by Christiane A. M. Baltaxe as Principles of phonology. Berkeley & Los Angeles: University of California Press. Vaux, Bert & Andrew Wolfe. 2009. The appendix. In Raimy & Cairns (2009), 101–143. Wilson, Robin J. 1996. Introduction to graph theory. Longman: London. Yu, Alan C. L. 2007. A natural history of infixation. Oxford: Oxford University Press.

35 Downstep

Bruce Connell

1 Introduction

Downstep is a pitch-lowering phenomenon that is widely recognized to occur in tone languages, particularly those of sub-Saharan Africa, in which it was first identified. It is also attested in several languages of the Americas, though only very rarely in Asia. The concept of downstep has also more recently been extended to account for phenomena associated with intonation in non-tonal languages,1 and is indeed perhaps better known to non-specialists in this context. Downstep is most commonly described as the lowering influence of a low tone (L) on a following high tone (H), such that a new, lower, “ceiling” is set for all subsequent Hs within a specifiable domain or prosodic unit. One of the early and still primary areas of debate in the study of downstep pertains to the sameness or otherwise of downstep as effected by a surface L as opposed to an underlying or floating L. Another area of interest has to do with the implications of its recurrent and cumulative nature for the analysis of contrastive tone levels; a language with only two apparent contrastive lexical levels (e.g. H, L) will manifest several intervening levels in actual speech, including situations in which a H occurring late in an utterance may be realized at a lower pitch than a L early in the same utterance. Another debate centers on the analysis of downstep applying to tones other than H; while the paradigm case is the lowering of H in a language with two tones (H, L), downstep does occur in languages with more than two tones, and in some such languages, cases of downstepped mid (M) and L tones are also attested. Whether such cases parallel downstepping of H remains moot. In addition, there are apparent cases of “upstep,” a tone raising phenomenon that ideally would be symmetrical to downstep in the details of its realization, though few if any of the attested cases are precisely so.

1 An anonymous reviewer suggests the term “non-tonal language” is a misnomer, given that “there is no essential qualitative difference between tones based on their origin in the lexicon or in phrasal phonology.” While I am sympathetic to the view that “non-tonal” is problematic, the claim that “there is no essential difference” is contentious. This discussion is outside the scope of the present chapter, but Hyman’s (2001: 1368) definition of a tone language serves to distinguish the two types of language, a tone language being one, “in which an indication of pitch enters into the lexical realization of at least some morphemes.”


Discussion as to how best to characterize downstep from a theoretical perspective has benefited from and contributed to general phonological theory, with proposals being shaped by phonetic and phonological approaches. This debate remains unresolved, with some scholars advocating the view that downstep is best accommodated in the phonetic implementation component (e.g. Poser 1984; Beckman and Pierrehumbert 1986; Pierrehumbert and Beckman 1988), and others arguing that it is phonological (e.g. Snider 1998, 1999). The discussion of downstep was particularly fruitful in the 1960s and then again through the 1980s and 1990s, with the advent of autosegmental phonology (chapter 14: autosegments) and feature geometry (chapter 27: the organization of features). Since then the debate has subsided somewhat, though two good general presentations of downstep have appeared in recent years, in Yip (2002) and Gussenhoven (2004). The most recent detailed argument for a particular theoretical approach to downstep is Snider (1999). This chapter characterizes downstep and related tonal phenomena, summarizing with representative data the key issues and current views. In doing so, I draw on those sources just mentioned, as well as several other published (and some unpublished) studies that have contributed to our understanding of downstep. The remainder of the chapter is divided as follows. §2 is an overview of work leading to the recognition and study of downstep in tone languages. This section defines the key terms used in the discussion, presents relevant illustrative data, and introduces some of the important theoretical issues involved. The third section then discusses downstep-related issues, including its distribution, downstepping of tones other than H and in languages with more than just H and L tones, and downstep in non-tonal languages. What triggers downstep is discussed in §4, phonetic aspects of downstep in §5, and issues pertaining to upstep and H-raising in §6. A particular personal concern has been the inconsistent and conflicting use of terminology found in the discussion of downstep and related pitch phenomena, and its importance for developing an adequate understanding of these phenomena; these issues are examined in §7. §8 presents instrumental evidence from the Bantoid language Mambila that bears on a resolution to the issues discussed in §7. Inevitably, some aspects of the topic receive fuller treatment than others; in particular, the substantial literature in which the notion of downstep is used in the analysis of non-tonal languages does not get the attention it deserves. This is in large part due to space constraints, but it is to some extent also a reflection of my own expertise and familiarity with the subject, as well as an attempt to address a perceived imbalance in the general theoretical phonology literature; while downstep is primarily a phenomenon of tone languages, it has perhaps received greater attention for its operation in non-tonal languages, and this aspect of the discussion is more accessible to the general reader. For related discussion, see also chapter 45: the representation of tone; chapter 114: bantu tone.

2 An overview of downstep in language study: Phonological issues

2.1 Early studies

The phenomenon now known as downstep was first noted in print well over a century ago, by Christaller (1875: 15). In his discussion of Fante (Kwa, Ghana) grammar, Christaller talks of three tones – high, middle, and low – but explains the middle tone as “high tones abating by one step or successive steps,” thereby recognizing the underlying sameness of the middle and high tones. Subsequent scholars appear to have followed Christaller, though the concept of downstep was not fully recognized until much later. Ward (1933), for example, describes Efik (Benue-Congo, Nigeria) as having “three well defined levels of tone . . . high, mid and low,” and the mid tone as “a subsidiary of the high group of tones” (1933: 34–35). Ward’s examples2 illustrate what are now recognized as “automatic downstep,” the lowering of a surface High tone under the influence of an immediately preceding surface Low, and “non-automatic downstep,” in which a H is lowered by an underlying or floating L:3 e.g. in /eɲe/ ‘he’ and /ubom/ ‘canoe’, the second tone is said to be lowered by the preceding L, since a sequence of L followed by a full-height H is unattested. Ward applies a similar analysis to words like /edep/ ‘he buys’ (“the second tone is a high tone lowered from the preceding high prefix to show a particular tense usage”; 1933: 35), despite the absence of a preceding L. She then argues, however, that there is a Mid, distinct from the lowered H, occurring in words such as /ɔbɔŋ/ ‘chief’ contrasted with /ɔbɔŋ/ ‘mosquito’, which “can in no way be considered as a high tone lowered as in edep.” Ward makes no attempt to account for this “mid” tone, and does not discuss why it cannot be identified with the lowered tone of /edep/. A brief unattributed description of Igbo (Benue-Congo, Nigeria) in The principles of the IPA (IPA 1949) moves a step closer to recognizing the underlying identity of H and what was called M for Efik, now referring to it (in Igbo) as “a lowered high tone.”4

2.2 Recognition of downstep

A closer understanding of the nature of such tone systems, however, was not offered until Winston (1960), who, like Ward, worked on Efik. Winston’s analysis expressly recognizes just two phonemic tones for Efik, H and L, together with a “downstep” phoneme, which effects a lowering of following H. Both the term “downstep” and the notational convention since widely adopted – a raised exclamation mark preceding the affected syllable – were introduced by Winston. (The exclamation mark is still current, though increasingly a downward pointing arrow ↓ has been adopted, following IPA conventions.) Winston also recognized that H tones subsequent to the downstepped H did not rise above the height of the downstepped H; i.e. a new “ceiling” for H is set by downstepping, creating a terracing effect. Although the details of this were not addressed explicitly by Winston, e.g. questions pertaining to the domain or extent of downstep remained unanswered, it is clear from his examples that downstep is cumulative and that there are limits to its application. Five sentences drawn from Winston’s data permit a simplified version of his arguments sufficient to illustrate the basic nature of downstep (Winston 1960: 185–186):

2 Throughout the chapter, transcriptions from the work of different authors have been standardized to follow IPA conventions.
3 The terms “automatic” and “non-automatic downstep” are due to Stewart (1965), discussed below.
4 Attributed to Ward in Welmers (1973: 85).

(1) a. idi:(e) ɔ́bɔ́ŋ ke ŋkut 'It isn't a mosquito that I see.'
    b. idi:(e) ɔ̀bɔ̀ŋ ke ŋkut 'It isn't a piece of cane that I see.'
    c. idi:(e) ɔ́↓bɔ́ŋ ke ŋkut 'It isn't the chief that I see.'
    d. ekpeɲɔŋ edi ufɔk 'Ekpenyong came to the house.'
    e. ekpeɲɔŋ emen inuen ɔɲɔŋ edi ufɔk 'Ekpenyong picked up the bird and came home.'

The first three sentences, which differ only in the tone pattern of /ɔbɔŋ/, show, in (1a) all Hs, in (1b) Hs where an intervening L has conditioned a lowering of following Hs, and in (1c) a similar lowering of the last four Hs, though without a preceding L to effect the lowering. Sentences (1d) and (1e) show successive lowering of what are analyzable as Hs, with only the initial and final tones being L. The “terracing” seen with the Hs in these two sentences shows the classic effect of downstep. These data reveal the unsatisfactory nature of an analysis that sees the lowered tones as M. First, such an analysis would require /ké ŋ́↓kút/ ‘that I see’ in sentences (1a)–(1c) to be identified first as HHH (1a) and later as MMM (1c); in (1b), while the tones of /ké ŋ́↓kút/ are phonetically similar to those in (1c), they are best treated as HHH, since they represent a conditioned lowering. Further, the successive lowering in (1d) and (1e) would require a number of mid tones (i.e. Ms of different heights), with /édí/ ‘came’ bearing a different, lower, M in (1e) than in (1d). Winston examines briefly a different solution, which contrasts non-low and low, with non-low tones divided into H and M, and with restrictions on the distribution of M, but finds this equally awkward. He instead proposes “two distinct systems of contrasting tonal units” (1960: 187): first, H vs. L, which accounts for sentence (1a) vs. (1b), and second, “a unit of ‘downstep’,” which operates only in the context of HH, and accounts for sentence (1a) vs. (1c). Winston draws attention to the fact that not only does downstep distinguish sentences, but it is phonemic in its own right, as the words /ɔ́bɔ́ŋ/ ‘mosquito’, /ɔ́↓bɔ́ŋ/ ‘chief’, and /ɔ̀bɔ̀ŋ/ ‘cane’ demonstrate. His analysis is also insightful in that it focuses attention not on the nature of the tones themselves – e.g. the last four tones of (1a)–(1c) – but on the relation between these tones and preceding tones; the drop is the realization of downstep.

2.3 Automatic and non-automatic downstep

As mentioned, the Efik data included instances of “non-automatic” downstep, in which a H is lowered by an underlying or floating L, and the lowering of a surface H tone under the influence of a preceding surface L, termed “automatic” downstep. These terms were introduced in Stewart (1965) and continue to be used, though many writers use (simply) downstep to refer to “non-automatic” downstep, and “downdrift” when referring to automatic downstep. As discussed below in §7, however, there are in fact different tonal processes grouped together as downdrift, and the use of downdrift in referring to automatic downstep as well as other pitch-based phenomena such as declination has led to a lack of clarity in discussions of tonal phenomena.

Stewart’s rationale for introducing the two terms was the recognition that in both cases – automatic and non-automatic downstep – the lowering is triggered by a L, in the former case a surface L and in the latter underlying. It should be pointed out that the underlying L was not simply an abstract postulate, but one that, at least in a great many cases, could be confidently established on either synchronic or diachronic evidence. The suggested essential unity of the two cases leads to the reasonable expectations that, first, at the phonetic level the degree of lowering introduced in both automatic and non-automatic downstep should be the same and, second, in languages in which a floating L triggers downstep, a surface L should do the same. While these expectations are indeed often met, this is not always the case, and this has been the subject of research and debate, returned to in §5 below.

2.4 Terracing and discrete level languages

The lowering of tones through downstep, with the new lower height continuing in subsequent tones giving a step-like F0 contour, has come to be known as “terracing.” Languages characterized by terracing were opposed to “discrete level” languages (e.g. Welmers 1959, 1965, 1973), in which “two or more contrastive levels of pitch are maintained from pause to pause with no intersection of actual pitch” (Welmers 1965: 50). Welmers preferred to see the difference in these terms (i.e. terracing vs. discrete) while others, e.g. Stewart (1965), in accord with Winston (1960), viewed terracing as a concomitant result of downstep. Welmers adds that in a discrete system there is generally no restriction on tone sequences; for example, in a three-tone language such as Jukun (Platoid, Nigeria), with H, M, and L, all possible combinations of these tones – i.e. HM, HL, LH, LM, MH, ML – are in principle permissible (Welmers 1973). In a terracing language with two tones (H, L), ↓H (as opposed to M) is recognizable by its distribution; there is no contrast between (phonetic) LH and LM; i.e. only ↓H follows L. A detailed debate between Stewart on the one hand and Welmers and Schachter on the other was published (Schachter 1965; Stewart 1965; Welmers 1965), with Stewart’s view ultimately holding sway. This view remained the basis of our understanding of downstep until the advent of autosegmental phonology (Leben 1973; Goldsmith 1976; chapter 14: autosegments) and feature geometry (Clements 1983; chapter 27: the organization of features), and although mechanisms available in these theories permitted an enhanced understanding of downstep, it is also accurate to say the existing understanding of downstep, together with other tonal phenomena in sub-Saharan African languages, laid the foundation for the development of autosegmental theory.

Welmers’ strict dichotomy between terracing and discrete level languages is now seen as untenable, in that there seem to be clear cases of languages that are to some extent discrete level, but in which there is at least limited overlap of tones permitted. Terracing, however, is recognized as an integral aspect of downstep. Clements (1979) addresses the question of terracing in some detail, first examining what was one of the central issues in downstep-related discussion, the iterative application of phonological rules, i.e. whether a rule can apply to its own output (chapter 74: rule ordering). This was seen by some authors as the only way to account for the successive or cumulative lowering of Hs. Clements (1979) proposed an alternative view, which helped to lay the foundation for most current views of downstep. Clements’s proposal saw terracing as “the result of intonational processes applying to the tone level frame itself, rather than directly to individual tones” (1979: 358). The occurrence of actual downstepped tones was restricted to initial position in sequences in which pitch was lowered, but they have the effect of precipitating a (downward) register shift, re-establishing the levels at which subsequent tones within a given prosodic unit are realized. Whether this shift affects all tones (i.e. H and L, and M in a three-tone system) within a given prosodic unit, or just H, became an empirical and language-specific question.

2.5 Downstep in autosegmental phonology

The development of autosegmental phonology (Leben 1973; Goldsmith 1976) provided the rest of the foundation for our current understanding of downstep and related tonal phenomena. Perhaps the key contribution of autosegmental phonology to our understanding of downstep was its ability to represent tone on a separate tier, and consequently different tone types on different tiers. Several authors have contributed to this debate and its development, exploiting this insight in different ways and to different degrees. Among them are Hyman (1979, 1993), Clements (1983, 1990), Stewart (1983, 1993), Pulleyblank (1986), Yip (1989, 1993), Snider (1990, 1999), and Clark (1993). The most recent and most detailed of these contributions is that of Snider (1999), who provides a proposal for an understanding of downstep that lays considerable emphasis on the incorporation of upstep as a phenomenon to be accounted for by the same means as downstep, as well as a critique of the related approaches, at least where they differ from his own. Snider’s (1999) proposal is presented within the theoretical framework of Register Tier Theory (RTT), which exploits the mechanisms of autosegmental phonology and feature geometry, and their tiered and hierarchical representations. RTT incorporates a Register tier admitting two features, h and l, a Tonal tier with two features, H and L, a Tonal Root Node tier (TRN), and a Tone-Bearing Unit (TBU) tier. Features on the register and tonal tiers are linked to a node on the TRN tier, and each TRN node is in turn linked to a mora on the TBU tier. This permits the specification of four level tones, which Snider labels Hi (= h, H), Mid2 (= h, L), Mid1 (= l, H), and Lo (= l, L). In a two-tone language (i.e. with H and L, as is frequently found in sub-Saharan Africa), the register feature associated with a particular TBU permits a register shift relative to the preceding TBU’s register: h = higher than the previous register setting, l = lower than the previous register setting. Automatic downstep, then, is represented by the spread of the register feature l, which effects a downward shift, realizing the following high tone a step lower than on the preceding register setting. Snider’s tonal features H and L equate well with Yip’s (1980) [±High] and with Pulleyblank’s (1986) and Yip’s (1989) [±Raised]. However, his h and l register features provide an advantage over Yip’s and Pulleyblank’s [±Upper], in that they are relative features, whereas [±Upper] is non-relative; it is either high or low. Both Snider’s system and the Yip/Pulleyblank system, then, account well for phonemic tone levels, but the relative nature of Snider’s register features permits description of the cumulative nature and the terracing associated with downstep.
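The mechanics of RTT can be made concrete with a small sketch. The following is my own illustration, not Snider’s formalism, and all numeric constants are invented: it shows the four level tones generated by crossing the two register features with the two tonal features, and how the relative interpretation of l produces cumulative lowering.

# A sketch of Register Tier Theory's feature combinations (after Snider
# 1999). Register features are relative: each TBU's register is computed
# from the preceding TBU's setting ("h" = one step up, "l" = one step
# down), and the tonal feature H/L then picks a pitch within that
# register. All constants below are invented for illustration.

STEP = 0.88                       # hypothetical size of one register shift
WITHIN = {"H": 1.0, "L": 0.72}    # realization of H vs. L within a register

LEVELS = {("h", "H"): "Hi", ("h", "L"): "Mid2",
          ("l", "H"): "Mid1", ("l", "L"): "Lo"}

def realize(tbus, base=180.0):
    """tbus: sequence of (register, tone) pairs, e.g. ("l", "H")."""
    register = base
    out = []
    for reg, tone in tbus:
        # Relative shift: h raises, l lowers, each time from the
        # *previous* register setting, so the effect compounds.
        register *= (1 / STEP) if reg == "h" else STEP
        out.append((LEVELS[reg, tone], round(register * WITHIN[tone], 1)))
    return out

# Spreading of the register feature l: each l-registered H is realized
# one step below the preceding register setting, i.e. downstep with terracing.
print(realize([("h", "H"), ("l", "H"), ("l", "H")]))

Because each shift is computed from the preceding setting rather than from a fixed scale, the sketch captures what a non-relative feature such as [±Upper] cannot: the compounding of successive downsteps across a phrase.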

3 Distributional characteristics of downstep

3.1 Downstep type

Non-automatic downstep (often equated with phonologically distinctive downstep) is usually found only in systems that also have automatic (or non-distinctive) downstep, though the two need not co-occur. Automatic downstep is not uncommon in the absence of non-automatic downstep; however, the reverse – i.e. cases of non-automatic downstep in the absence of automatic downstep – is rare. Automatic downstep has been reported in languages such as Hausa (Chadic, Nigeria) by Leben (1984), Lindau (1986), Inkelas and Leben (1990), and Leben et al. (1989), and in Yoruba (Benue-Congo, Nigeria) by Connell and Ladd (1990), Laniran (1992), Akinlabi and Liberman (1995), and Laniran and Clements (2003). In neither of these languages is non-automatic downstep found, and downstep is neither lexically nor grammatically distinctive. Both automatic and non-automatic downstep occur in a great many languages, including not only the paradigm examples of Akan, Efik, and Igbo, but also languages such as Baule (Kwa, Ivory Coast; Ahoua 1996), Bimoba (Gur, Togo; Snider 1998), Chumburung (Kwa, Ghana; Snider 1999), Yala-Ikom (Benue-Congo, Nigeria; Armstrong 1968), and Zande (Adamawa-Ubangi, Democratic Republic of the Congo; Boyd 1981). For only very few languages has non-automatic downstep been reported in the absence of automatic downstep. Three such languages are Dschang (Grassfields Bantu, Cameroon), Ikaan (Benue-Congo, Nigeria; Salffner 2009), and Kikuyu (Bantu, Kenya; Clements and Ford 1980), and each of these presents analytic complexities for which consensus has yet to be reached.

3.2 Geographical distribution

All of the above-mentioned languages are geographically located in sub-Saharan Africa, the region best known for downstep, and representative of several different language families and phyla. Downstep is also found in the Americas, the best-known examples being in Central America. Isthmus Zapotec (Oto-Manguean, Mexico; Mock 1981, cited in Yip 2002) shows downstep functioning in a manner expected from research on African languages, i.e. it is triggered by a floating L. Other languages of this region show both downstep and upstep; varieties of Mixtec (Oto-Manguean, Mexico) are discussed in §6 below. Only very rarely has downstep been reported among Asian languages. One such language is Kuki-Thaadow (Tibeto-Burman; Hyman 2007), spoken in northeast India and Burma. Contrary to the well-reported differences between Asian and African tone languages, Kuki-Thaadow’s tone system behaves very much like those found in Africa. Hyman analyzes Kuki-Thaadow as having three underlying tones: HL, H, and L. Downstep occurs when HL precedes H, with L being realized as a downstep on the following H, as shown in (2) (Hyman 2007: 6):

(2) /mêeŋ vóm thúm hí/ → méeŋ ↓vóm thúm hí 'these three black cats'


According to Hyman’s report, downstep in Kuki-Thaadow is realized phonetically by raising of the preceding H, with the amount of raising being determined by the number of downsteps to follow; though no instrumental data are presented, this is reminiscent of Rialland’s (2001) findings for Dagara (see §6).

3.3 Downstep in languages with more than two tones and with tones other than H

The discussion thus far has centered on languages with just two tones, H and L, and downstepping of H tones. This is clearly the most commonly attested situation, though downstep does occur in languages with more than two tones, and with tones other than H, including in some languages with just two underlying tones. The first language with more than a basic two-tone system recognized as having downstep, and as downstepping tones other than H, was Yala-Ikom (Benue-Congo, Nigeria; Armstrong 1968). Yala-Ikom has three contrastive tones, H, M, and L. H is lowered after both L and M, and M is lowered following L; this occurs regardless of whether the tone causing the lowering is underlying or surface (i.e. floating or associated), and terracing results in both situations. The example below (Armstrong 1968: 53) illustrates downstepping of M following a M (Armstrong used the diacritic ’ to indicate downstep); note that the downstepped M is followed by H:

(3) ɔ́ tabÖl↓ÖnÜ ní   HMMMMH   'It did not begin in the evening.'

Downstep begins with the third M, a result of the underlying (“latent” in Armstrong’s terms) floating L of /là/ ‘in’, which remains after vowel elision. Evidence for M triggering downstep of H is provided in examples such as the derived verbal noun /òré↓ré/ ‘eating’; the presence of a floating M is confirmed by cross-dialectal comparison, where M surfaces in Yala-Ogoja /òródré/ ‘eating’. In Vute (Bantoid, Cameroon), also a three-tone language with H, M, and L, H apparently undergoes both automatic and non-automatic downstep (Guarisma 1978; Thwing and Watters 1987). Guarisma describes the second H of a HLH sequence as being lowered, but does not comment on whether subsequent Hs are terraced, nor does she comment on the interesting situation of M, and the possibility of overlap of ↓H and M, or whether Ms and Ls lower correspondingly. However, by Guarisma’s examples, non-automatic downstep does extend beyond the first tone of the sequence, and so terracing exists. Guarisma describes non-automatic downstep as marking associative constructions, but her examples of both non-automatic and automatic downstep are with associatives.

(4) ŋgwé sèhí 'head (of) abscess'
    +pr jémí 'spots (of) leopard'

Bamileke-Dschang (Grassfields Bantu, Cameroon) is a language with two tones, H and L. Dschang has received considerable attention in the literature on downstep (Tadajeu 1974; Hyman 1985; Pulleyblank 1986; Clark 1993; Manfredi 1993; Stewart 1993; Bird and Stegen 1995; Snider 1999) as the first language (and one of very few languages) analyzed as having downstep affecting L as well as H; it has a four-way surface contrast, H, ↓H, L, ↓L. There is considerable divergence among the views of these authors as to the nature of downstep in Dschang; the downstepped H is attributable to a following floating L, with leftward spreading, and this seems largely agreed (Tadajeu 1974; Pulleyblank 1986; Snider 1999). Snider attributes the downstepped L to a floating H, which, when inserted between two Ls, results in downstepping of the second L. Clark (1993), however, prefers to see Dschang with a basic four-tone system and only relatively few occurrences of downstep.

So, while there are languages like Yala-Ikom with more than two tones in which both H and M may be downstepped, and others like Vute in which only H is affected, there are no languages reported where (non-automatic) downstep affects M (or L) but not H. Similarly, while Dschang has both ↓H and ↓L, there are no languages reported where ↓L occurs but not ↓H. Interestingly, for Yala-Ikom, discussed above, Armstrong (1968) reports that the effect of downstep triggered by M is indistinguishable from that triggered by L; i.e. ↓H lowers to the same degree regardless of whether L or M is the trigger. On the other hand, in Yala-Ikom ↓H remains higher than M. This is contrary to reports for other three-tone languages in which H is downstepped and for which it is claimed ↓H is indistinguishable from M (e.g. Supyire, Gur, Mali; Carlson 1983; Moba, Gur, Togo; Russell 1996, cited in Snider 1999; Bimoba, Gur, Ghana; Snider 1998). Snider (1998) provides instrumental evidence for the phonetic equivalence of ↓H and M in Bimoba.

3.4 Downstep in “non-tonal” languages

The possibility that downstep could account for pitch phenomena in non-tonal languages was first introduced in Pierrehumbert’s (1980) work on English, in which it was proposed that declination could, largely, be accounted for as the successive lowering of pitch accents (chapter 116: sentential prominence in english). The term “catathesis” rather than downstep was adopted for a time (Poser 1984; Beckman and Pierrehumbert 1986), in order to avoid the terminological conflicts inherent in the use of downdrift vs. downstep and automatic vs. non-automatic downstep (e.g. as mentioned briefly in §2.3, and more on which in §7.1.3); usage has since reverted to, and settled on, downstep. Pierrehumbert’s model has evolved considerably, both in her own work and that of others following, broadly speaking, the same tack. It has been applied to several other languages, e.g. Japanese (Japonic, Japan; Pierrehumbert and Beckman 1988; Kubozono 1989), and Dutch (Germanic, The Netherlands and Belgium; van den Berg et al. 1992), with discussion on the nature of downstep in these languages largely being separate from the debate on its functioning in African and other tone languages. One of the key issues has been the extent to which downstep is distinguishable from declination; in other words, can or does downstep account for all downward movement of pitch across a phrase or utterance? While there has been some effort to include both downstep and declination in models of intonation in such languages, resulting in a move away from the restrictive position in Pierrehumbert (1980), Ladd (2008) points out the methodological/empirical difficulty in separating the two. (In tone languages this is less problematic; see §7.1.2.) Ladd’s general view of downstep as part of intonation (e.g. 1992, 1993, 2008) sees (intonational) downstep as a result of metrical factors; for example (Ladd 2008: 77–78), the English phrase my mother’s diaries is realized either strong–weak (with main prominence on mother’s) or weak–strong (with main prominence on diaries). The weak–strong version has two realizations, one of which has two accentual peaks, with that on diaries being the same as or higher than that on mother’s, and the other with unaccented diaries realized at a pitch lower than that of the peak on mother’s, but not as low as it is in the strong–weak realization; i.e. it is downstepped. So while phonetically similar to the strong–weak realization, the second weak–strong variant, that with downstep, is pragmatically distinct (i.e. focus on mother’s vs. focus on the entire phrase). This view of intonational downstep has become widely accepted.

4 The triggers of downstep

In the paradigm cases it is relatively straightforward to establish, on independent grounds, the existence of a floating L as the trigger of non-automatic downstep. Floating L, however, is not seen as the only possible trigger for downstep. First, as seen earlier in the discussion of Yala-Ikom, downstep (of H) may be triggered by M as well as L. Beyond this, in several languages where there is no independent evidence for floating L, an alternative analysis is preferable. For Kishambaa (Bantu, Kenya), Odden (1986) argues that the three-way surface contrast of HL, HH, and H↓H is underlyingly HL, H, HH; that is, two surface Hs are treated as a single underlying H associated with two TBUs, while the second in a sequence of two underlying Hs is downstepped:

(5) HH: ngoto → ngó↓tó 'sheep'
    H: njoka → njóká 'snake'

Odden considers and rejects the option of inserting a floating L between the two Hs of /ngó↓tó/, arguing that there is no independent evidence for it, and its sole purpose would be to protect the assumed universality of the Obligatory Contour Principle (OCP), which places a constraint against sequences of identical tones in lexical entries. A similar analysis is proposed by Odden as being appropriate for other languages, including certain dialects of Shona (Bantu, Zimbabwe) and Temne (Atlantic, Sierra Leone).

5 Phonetic aspects of downstep

The description of downstep as the lowering of a H tone, with the establishment of a new, lower ceiling on the realization of subsequent Hs, leaves open the question of what happens to Ls within the same prosodic unit. That is, in sequences such as HLHHLHHLHL, in which the Hs following Ls are all subject to automatic downstep (and recalling that downstep is iterative), do the Ls remain at the same level, or do they also lower, in terrace-like fashion? And if so, do they lower at the same rate as do the Hs? It has been claimed (e.g. Welmers 1973; Hombert 1974) that L tones in such environments do not descend, or if they do, it is at a slower rate than for Hs. This claim is called into question by e.g. Clements (1983), who notes that evidence from instrumental studies typically shows that low tones as well as highs fall significantly in pitch in such tone spans. Snider (1998) provides a detailed instrumental analysis of the phonetics of downstep in Bimoba (Gur, Ghana), finding in addition to ↓H being equivalent to M, as mentioned earlier, that in sequences involving downstepping of Hs, both Hs and Ls decline at a roughly similar rate (in percentage terms), and that this rate is substantially greater than what might be expected from the influence of declination alone. Laniran and Clements (2003), for Yoruba (Benue-Congo, Nigeria), also report that Ls lower, though at a lesser absolute rate (as measured in Hertz) and with greater variability across speakers than is evident for H tones. Similarly, Urua (1996–97) reports a progressive lowering of Ls in LHLHLH sequences in Ibibio, again at a lower rate than for Hs. In contrast to these findings, Gibbon (1987: 293; see also Gibbon 2001) describes Tem (Gur, Togo) as having “downstep, downdrift, phonetically constant (non-terraced) low tone,” though instrumental data are not presented.

In the early literature, downstepped H was typically identified as M, indicating a phonetic realization midway between H and L. This appears to be the typical case, and in some three-tone languages with M, such as Bimoba discussed above, they are phonetically equivalent. There are, however, languages in which a downstepped H is realized at the same level as a preceding L, giving rise to a distinction between partial and total downstep (the terminology apparently due to Meeussen 1970). Kikuyu (Bantu, Kenya; Clements and Ford 1980) and Chumburung (Kwa, Ghana; Snider 1998) are among languages reported to have total downstep.

6 Upstep and H-raising

6.1 Upstep

A tonal process (or, more accurately, a collection of processes) termed “upstep” has been reported in a number of languages. It sometimes appears to be the converse of downstep, as the term seems to imply; Snider (1990, 1999), for example, refers to it as an upward register shift, but elsewhere any instance where H is raised has been labeled “upstep.” Snider also draws attention to the fact that while downstep occurs in languages in the absence of upstep, there is only one reported case of a language having upstep in the absence of downstep, viz. Acatlán Mixtec (Oto-Manguean, Mexico; Pike and Wistrand 1974; Snider 1999). In the ideal view, and as predicted by most models, upstep is the opposite of downstep, its symmetrical mirror image, distinct from the “resetting” found at the beginning of a new prosodic unit following one that has been downstepped. That is, upstep should not only raise a H tone, but set a new, higher, ceiling for Hs subsequent to the upstepped H within the same prosodic unit, and its effect would be cumulative, creating upward terracing; in other words, an upward register shift occurs. The only languages reported as approaching this ideal are Acatlán Mixtec (Pike and Wistrand 1974), Yombe (Bantu, Democratic Republic of the Congo; Meeussen and Ndembe 1964, cited in Welmers 1973), and Mankon (Grassfields Bantu, Cameroon; Hyman 1993).


In Acatlán Mixtec upstep occurs whenever a morpheme-initial H is followed by another H, including upstepped Hs, so upward terracing is possible and appears unlimited, as shown in (6) (data from Snider 1999: 106; ↑ indicates upstep):

(6) kó   Œí↑tú   wá   ní    méè
    neg  kisses  so   2sg   baby
    [kó ↑Œí↑tú ↑wá ↑ní ↑méè]

Welmers (1973: 90), citing Meeussen and Ndembe (1964), reports Yombe as having upstep, after which “following highs continue on the same level until a low or downstep; the upstep simply shifts the entire sequences of terraces back up one level.” Whether successive upsteps are possible is not reported. In Mankon, as in Acatlán Mixtec and Yombe, H(s) following the upstepped H remain at the new level, but the number of Hs in such sequences appears restricted, and as in Yombe, the occurrence of two successive upstepped Hs is unattested; i.e. upward terracing does not occur. It might also be expected that tones other than H should be affected by upstep; just as Ls are lowered by the downward register shift associated with downstep, Ls should be raised by the upward register shift associated with upstep. This seems unattested, or is at least unreported, e.g. for Acatlán Mixtec. Krachi (Kwa, Ghana; Snider 1990) is another language argued to exhibit upstep of H following L (including floating L), and at first glance it appears the L is also raised. However, Snider analyzes these cases of apparent L-raising as instances of H-spread and delinking. For upstepped Hs, Krachi shows no evidence of a cumulative effect; a H immediately following an upstepped H returns to the level of a H preceding the upstep. Snider argues that this is due to a floating L, immediately downstepping the next H and thereby shifting the register downward, though it is not clear there is any independent evidence for the postulated insertion of a floating L. Ahoua (2002) and Leben and Ahoua (1997) report a raising of H tones in Baule (Kwa, Ivory Coast) which, on the surface, appears to mirror declination: a sequence of Hs will start relatively low and increase in pitch throughout an utterance. It differs, however, in that it appears to be restricted to no more than the first four syllables of an utterance, after which H plateaus, and it appears to be phonologically conditioned. A similar upward pitch movement is reported for Peñoles Mixtec (Oto-Manguean, Mexico; Daly 1993, cited in Yip 2002), and described as upstep, with each H following a floating H a step higher.

6.2 H-raising

Other instances that have sometimes been identified as “upstep” include the H-raising effect reported for several languages, including Yoruba (Connell and Ladd 1990; Laniran 1992; Akinlabi and Liberman 1995; Laniran and Clements 2003), Bimoba (Snider 1998), and Dagara (Gur, Burkina Faso; Rialland and Somé 2000; Rialland 2001), in which (typically) an initial H is raised before L. This can be interpreted in at least two different ways. On the one hand, the H-raising before L may be seen as dissimilation, a form of contrast enhancement; this may be supported by evidence from Yoruba that initial L is lowered before H (Akinlabi and Liberman 1995; Laniran and Clements 2003). An analysis such as this may prove to be appropriate in the case of languages such as Engenni (Benue-Congo, Nigeria; Thomas 1974; Snider 1990, 1999), in which every H is raised when preceding a L tone. Alternatively, it can be considered an anticipatory raising, to provide the speaker with sufficient room for subsequent lowering resulting from downstep. Rialland (2001), on the basis of instrumental findings, argues that in Dagara the amount by which initial H is raised is determined by the number of downsteps that follow, although the precise strategy used may vary from speaker to speaker. This form of raising is frequently referred to as “reset,” and in its most usual manifestation involves the raising (or resetting) of H at the beginning of a new prosodic unit. There continues to be debate as to what determines both the height of the reset and the left edge of the new prosodic unit. As far back as the original (impressionistic) work on Akan (Kwa, Ghana), Stewart (1965) and Schachter (1965) held opposing views as to whether it was determined by preplanning (i.e. the greater the number of subsequent downsteps, the higher the reset) or was the same regardless of the number of following downsteps. As for where in an utterance reset occurs, two general claims have been mooted, one being that speakers must reset when they have reached the bottom of their range, and the other, that prosodic boundaries are determined by higher order units in the overall hierarchical structure of an utterance.

The raising of Hs in each of the latter cases discussed is quite different from upstep as reported for Acatlán, and should be treated as a distinct phenomenon. Whatever form the raising of H tones takes, it appears that there are several characteristics whose absence renders problematic any attempt to unify “upstep” and downstep within a single theoretical treatment. Interestingly, upstep is apparently restricted to H tones, as there are no attested instances of M or L being upstepped, and so far there are no clearly established instances of its being lexically contrastive; Snider (1990) claims that for Krachi it is, though no clear examples are offered.

7 Distinguishing downstep and related phenomena

The discussion thus far has focused on different aspects of downstep and how our understanding of downstep has developed. One problem that has been alluded to, but not addressed explicitly, is the confounding of similar or related phenomena with downstep, reflected in terminological conflicts in the downstep literature. This leads not only to difficulty in comparing findings and claims of different authors, but potentially to a lack of clarity in the formulation of research questions; it is thus also related to more theoretical questions, such as whether the different effects are phonetic or phonological in nature, whether they are “local” or “global” phenomena, and what they mean for our understanding of the pitch register or span, or tonal space, and how speakers manipulate this space. In this section I look at the relation between downstep (both automatic and non-automatic), downdrift, and declination, in an effort to show how they are similar and/or different and with the hope of introducing a greater degree of consistency in how these different but related phenomena are labeled and hence integrated into phonological and phonetic theory. Results from experimental work on Hausa (Chadic; Lindau 1986) and Ibibio (Cross River, Benue-Congo, Nigeria; Urua 1996–97, 2002) are presented that illustrate the differences between these types of downtrend. In §8, experimental work on Mambila (Bantoid, Benue-Congo, Cameroon) is presented that sheds additional light on the nature of these downtrends, illustrating a particular local lowering that, it is suggested, could more appropriately be labeled downdrift.

7.1 Downtrends and labeling downtrends

Several different types of pitch downtrend have been identified, including downstep, downdrift, and declination.5 Some writers have distinguished two types of downstep – automatic and non-automatic – and, as presented above (§2.3), have argued for a relationship between the two; others, while not denying a possible connection between the two, have referred simply to downstep and downdrift, with the latter being equivalent to automatic downstep. Other writers (and indeed sometimes the same) have used the term downdrift as equivalent to what is now more commonly referred to as declination, either implicitly or explicitly attributing the same underlying mechanism to both (automatic) downstep and declination.

7.1.1 Declination

Declination would seem to be the most basic of these, as it is generally considered to be a phonetic universal (Ladd 1984). The term is commonly used in work on intonation, particularly on European languages, though it is less well known or used in work on African tonal systems, where studies on downstep have been most prevalent. Declination refers to “a gradual modification (over the course of a phrase or utterance) of the phonetic backdrop against which the phonologically specified F0 targets are scaled” (Connell and Ladd 1990: 2). Despite its assumed universality, it may, however, be suspended in questions and other sorts of non-declaratives (see e.g. Lindau 1986 for Hausa), or in situations where tonal contrasts might be endangered (Hombert 1974; Connell 1999). As mentioned earlier, Ladd (2008) points out the difficulty in distinguishing the effects of downstep from those of declination in languages like English, though he suggests “there is nothing in principle to rule out both downstep and true (i.e. time dependent) declination” (Ladd 2008: 80, fn. 12). He also draws attention to difficulties in devising methods that would permit separating the effects of each. This is clearly more true of non-tonal languages than of tone languages. In the latter, the existence and influence of declination can most clearly be seen in phrases consisting of tones all of which have the same phonological value – e.g. all Hs, all Ms, or all Ls – and in a given language constructing test sentences comprised of such sequences may be possible (e.g. Lindau 1986; Connell and Ladd 1990; Connell 2002). In such sequences, any lowering of F0 over the course of an utterance may be attributed to this phonetic effect. (However, declination can occur regardless of tonal combination, and so it is possible that it may be problematic to distinguish declination from automatic downstep in a HLHLHL sequence.) Declination is illustrated in Figure 35.1, with data from Hausa (Lindau 1986), in a phrase that consists of all Hs, and has a decline of approximately 14 percent per second. In this case, declination would seem to be the only factor that contributes to the lowering of F0.

5 A fourth, final lowering, could perhaps also have been included in the present discussion, although it typically has been held separate from declination, and never in my experience been confounded with downstep. However, it is not always a straightforward matter to decide which effect, declination or final lowering, is responsible for an observed steep decline in F0 at the end of an utterance.

Figure 35.1 Declination illustrated in a sequence of Hausa High tones in the sentence Muudii yaa zoo gidaa ‘Muudii came home’, adapted from Lindau (1986)
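The decline figures quoted from Lindau (1986) are expressed as a percentage of the starting F0 per second. For concreteness, the toy computation below shows how such a rate is derived from the F0 targets at the edges of a span; the sample values are invented, not Lindau’s measurements.

# Computing a declination rate in percent per second from the F0 targets
# at the edges of a span. The sample values are invented for illustration.

def decline_rate(f0_start, f0_end, duration_s):
    """Drop in F0, as a percentage of the starting value, per second."""
    return (f0_start - f0_end) / f0_start / duration_s * 100.0

# e.g. 160 Hz falling to 138 Hz over one second is roughly a 14 percent
# per second decline, the magnitude reported for the all-H Hausa phrase.
print(round(decline_rate(160.0, 138.0, 1.0), 1))   # 13.8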

7.1.2 Downdrift

Downdrift is somewhat more difficult to characterize, precisely because, as mentioned earlier, the term has been used in different senses. Most commonly, it has been used synonymously with Stewart’s automatic downstep. Hombert (1974: 171), for example, describes downdrift as “the progressive lowering of a high tone after a low tone,” and in a footnote explicitly equates it with automatic downstep. Similar views are expressed by a range of authors including, more recently, Snider and van der Hulst (1993) and Hyman (2001). However, Hombert also attributes an intonational element to downdrift, observing that Ls also descend, and then suggests that the term downdrift refer to “the lowering of like tones (consecutive or not)” (1974: 172, fn. 6). Given this proviso concerning like tones (or, when it applies to consecutive like tones), it appears to be the same phenomenon as that described earlier as declination; however, as we have seen, declination is clearly something quite independent of automatic downstep. Both of these views of downdrift – i.e. that it involves a local assimilation between Ls and Hs, and that it is a phrase or sentence level effect – are found elsewhere in the literature, though it is most frequently characterized as being equivalent to automatic downstep. An illustration of downdrift of a presumably assimilatory nature, through the alternation of Hs and Ls, can again be taken from Lindau’s (1986) study of Hausa (Chadic, Nigeria), and is shown in Figure 35.2. In this figure, as in Figure 35.1, a trend line for the H tone only is represented (in Lindau’s representation, the second sentence also shows a downtrend affecting the Ls, and both map the actual F0 trace). What is important in comparing the two is that the slope in Figure 35.2 is steeper than that in Figure 35.1, with a decline of 33 percent per second; i.e. the slope shown in Figure 35.2 combines the declination of Figure 35.1 with the effect of downdrift, the local assimilation of Hs to Ls.6

6 Gussenhoven (2004: 101) cites similar evidence for Japanese from Poser (1984).

Figure 35.2 Downdrift in alternating H and L tones in the Hausa phrase Maalam yaa auni leemoo ‘The teacher weighed the oranges’, adapted from Lindau (1986)

7.1.3 Downstep

An important, indeed defining, feature of downstep, in addition to its lowering of a H relative to a preceding H (or lowering of other tones relative to preceding tones of like phonological value) via a L (either surface or floating) that conditions the lowering, is that, within specifiable bounds, the downstepped H sets a new ceiling for all subsequent Hs within a specifiable domain; i.e. these Hs do not rise above the height of the downstepped one, hence the descriptive label terracing. So downstep is generally accepted as a downward shift in register. A further characteristic of downstep, it will be remembered, is its cumulative nature: successive downsteps result in successively lower pitch levels. These characteristics are illustrated in Figures 35.3 and 35.4, showing automatic downstep and non-automatic downstep, respectively, in Ibibio.7 Figure 35.3 shows a sequence of Hs followed by a L, followed by another sequence of Hs; the downward shift of the register is noticeable: not only is the first H following the L realized at a lower F0, but the same applies to all subsequent Hs in the phrase. In Figure 35.4, there are no surface Ls, but the sequence of Hs is interrupted at three locations, indicated by ↓, and four distinct levels, or terraces, are in evidence in this phrase.

Figure 35.3 Illustration of automatic downstep in the Ibibio phrase /ékíkéré jè úkárá ídém/ (HHHHLHHHH) ‘thought and self-rule’

7 Thanks to Eno-Abasi Urua for contributing the recordings of the phrases used in Figs. 35.3 and 35.4.

Figure 35.4 Illustration of non-automatic downstep in the Ibibio phrase /úbɔ́k í↓wá ú↓bɔ́ɔ́ŋ ɔ́↓bɔ́ɔ́ŋ/ (HHH↓HH↓HH↓H) ‘hand of cassava of king’s kingship’
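The register-shift view of downstep illustrated in Figures 35.3 and 35.4 can be stated as a small simulation: each downstep, whether conditioned by a surface L (automatic) or marked by ↓ alone (non-automatic), scales the H ceiling by a constant ratio, and every subsequent tone is realized under the new ceiling. The ratio and L target below are illustrative assumptions, not measured Ibibio values.

# A sketch of downstep as an iterated downward register shift. Each
# downstep lowers the ceiling for every subsequent H in the phrase,
# producing terraces; all constants are invented for illustration.

DOWNSTEP_RATIO = 0.85   # hypothetical size of one register shift
H_CEILING = 200.0       # starting F0 ceiling for H (Hz)
L_FRACTION = 0.75       # L realized at a fixed fraction of the register

def realize(tones):
    """tones: sequence of "H", "L", or "!H" (downstepped H)."""
    register = H_CEILING
    out = []
    prev = None
    for t in tones:
        if t == "!H" or (t == "H" and prev == "L"):
            register *= DOWNSTEP_RATIO   # register shift: new, lower ceiling
        out.append(round(register if t.endswith("H") else register * L_FRACTION, 1))
        prev = t[-1]   # treat !H as H for conditioning purposes
    return out

# Automatic downstep (cf. Figure 35.3): the surface L triggers the shift,
# and all four post-L Hs stay at the new, lower level.
print(realize(["H", "H", "H", "H", "L", "H", "H", "H", "H"]))
# Non-automatic downstep (cf. Figure 35.4): no surface L, four terraces.
print(realize(["H", "H", "H", "!H", "H", "!H", "H", "!H"]))

Because the register is never reset within the phrase, the lowering is cumulative, which is precisely the terracing property that distinguishes downstep from the Mambila pattern discussed in §8.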

7.2 Summary and discussion

Declination, then, is generally agreed to be a phonetic effect, a backdrop lowering of F0 that has an extended, or global, domain, often the entire utterance. Downstep, whether automatic or non-automatic, appears to come about through the local interaction of adjacent tones; its terracing effect (the lowering of the ceiling) has frequently been described as key lowering (Stewart 1983) or register shift (Snider and van der Hulst 1993). And while downstep is in a sense “local,” in that a specific site – the adjacent tones – can be identified as its trigger, it is typically global in its domain, as all subsequent tones within a given prosodic unit are affected. However, unlike declination, it is arguably a phonological effect. While some writers attribute the downward shift in register that comes about with downstep to phonetic implementation rules, much of the evidence for this comes from research on languages in which tone is not phonologically contrastive. The fact that in many, perhaps most, tone languages in which downstep occurs it (i.e. at least non-automatic downstep) is phonologically contrastive suggests rather strongly that it must be accounted for in the phonological component of the grammar. This and other arguments for a phonological view of downstep are developed in Snider (1990, 1999) and Snider and van der Hulst (1993).

The question therefore arises as to whether each of these terms is identified with a distinct phenomenon, or whether one or more of them is in fact redundant. If some phenomena called downdrift are the same as automatic downstep, and other types of downdrift are the same as declination, why not abandon the term downdrift altogether and simply label things consistently as either automatic downstep or declination, as the case may be, and get rid of the terminological “confusion”? (Or, as has been the practice for some writers, eschew “automatic” and “non-automatic” in favor of “downdrift,” “downstep,” and “declination.”) In fact, the lack of terminological consistency is more than simply the existence of competing labels for the same thing; as shown in the following section, it actually masks to some extent our understanding of tone systems, and may therefore lead to the formulation of inappropriate research questions and to questionable theoretical conclusions, including the conflation of what are typologically different languages.

8 Downdrift in Mambila

Mambila (Bantoid, Cameroon) is a language with four level tones; it does not have either automatic or non-automatic downstep (Perrin 1974; Connell 1999, 2002, 2003), and the presence of declination appears to be variable, occurring consistently only with the lowest tone (T4), and never with the highest (T1). Connell (1999, 2002) discusses a pitch phenomenon in Mambila that resembles what has often been called downdrift: a successive lowering of F0 in sequences of alternating tones of different levels. The graphs in Figure 35.5 show averaged plots of pitch traces for a single male speaker for like tone sequences in Mambila. Measurements were taken at the apparent tonal target in each syllable. Since Mambila has essentially level tones, the tonal target was identified as the level portion of F0 in each syllable. This was typically straightforward and involved no more than excluding perturbations resulting from adjacent consonants. The graphs illustrate the absence of declination, or any downtrend, for all tones but T4. For each tone, sentences of different lengths (short and long, or short, medium, and long for T4) are superimposed in the figure. Figure 35.6 shows averaged pitch plots of sentences comprised of alternating sequences of Hs and Ls for the same speaker, which show a slight downtrend; again, three sentences – short, medium, and long – are represented. Figure 35.7 shows alternating sequences of higher mid (T2) and lower mid (T3) tones.

Figure 35.5 Averaged pitch traces for like tone sequences in Mambila. Tone 1, long utterance = filled diamonds, short utterance = open squares; Tone 2, long utterance = open diamonds, short utterance = filled squares; Tone 3, long utterance = open squares, short utterance = filled squares; Tone 4: long utterance = open squares, medium utterance = open triangle, short utterance = filled triangle

Figure 35.6 Averaged pitch traces for T1–T4 alternating tone sequences of three different lengths. Long utterance = filled squares; medium utterance = filled triangle; short utterance = open squares

Figure 35.7 Averaged pitch traces for T2–T3 alternating tone sequences of two different lengths. Long utterance = filled diamonds; short utterance = open squares

Statistical analyses presented in Connell (2002) confirm the existence of a downtrend in Mambila: a progressive lowering of tones through the interaction of adjacent lower and higher tones; i.e. it appears to meet the criteria for, and can be termed, downdrift. Figure 35.8, on the other hand, shows pitch plots from two speakers, again averages of at least five repetitions, for a phrase with a tonal sequence of T1T4T1T1T1, i.e. HLHHH, and Figure 35.9 for a phrase with a tonal sequence of T4T4T1T1T1 (LLHHH). What is noticeable in these utterances is that the lowered H, i.e. the one immediately following the low, does not establish a new ceiling for subsequent Hs; these rise to the height of the initial H in Figure 35.8 (indeed slightly higher) and a similar rise is in evidence in Figure 35.9.

Figure 35.8 Pitch traces of T1T4T1T1T1 (HLHHH) for two Mambila speakers, showing the absence of automatic downstep

Figure 35.9 Pitch traces of T4T4T1T1T1 (LLHHH) for two Mambila speakers, showing the absence of automatic downstep

Rather than downstep, Mambila has a local interaction, a lowering of a high(er) tone following a low(er) tone, that appears to be simply a phonetic effect that is corrected once the low tone is not involved, allowing the H to regain its former height. That the effect of this interaction is cumulative, i.e. in a sequence of HLHLHL the downward trend continues, gives it the appearance of what has often been called automatic downstep, but without the register shift it can hardly be seen as the same phenomenon (viz. automatic downstep) that was illustrated in Figure 35.2 for Ibibio. One may wish to argue that Mambila has both downstep and upstep, i.e. the lowering seen is corrected by a floating H (cf. Stewart 1993 on Ebrié), but this analysis is found wanting on at least two grounds. First, the time it takes for T1 (H) to regain or attain its target is variable: typically one or two syllables, but sometimes within one, and at other times more than two syllables. This, impressionistically at least, appears to correlate with speech rate/style. Second, the tonal system of Mambila becomes unnecessarily complicated in such an analysis: the floating H would simply be a diacritic, with no independent evidence to motivate its existence; there would seem to be no reason to postulate such a device when the effect in question is well explained by other means. What happens in Mambila, then, is not the same as what happens in Ibibio, or other languages (Akan, Baule, Bimoba, Efik, Hausa, Igbo, etc.) that have automatic downstep; recognizing the difference between the two types of downtrend leads to the formulation of a different set of questions about the nature of tone systems and pitch realization. Mambila demonstrates tone co-articulation (e.g. Gandour et al. 1994; Xu 1994), a local, phonetic effect in pitch realization. The effect is cumulative in that lowering will continue in the event of a strict alternation (e.g. HLHL). Such a cumulative effect may appropriately be termed “downdrift.”
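The contrast at issue can be made concrete with a toy numeric model. In the sketch below, every parameter value and function name is illustrative rather than drawn from Connell's measurements: the first function implements register-shift downstep, where each H following an L lowers the ceiling for all later Hs, and the second implements local coarticulatory lowering, where a post-L H is depressed but drifts back to its target.

```python
# Toy contrast between automatic downstep (register shift) and local
# tonal coarticulation. All values are illustrative, not measured data.

def downstep_f0(tones, h=160.0, l=100.0, d=0.85):
    """Each H after an L is lowered by factor d, and the lowered value
    becomes the new ceiling for all subsequent Hs (terracing)."""
    ceiling, out = h, []
    for i, t in enumerate(tones):
        if t == 'H' and i > 0 and tones[i - 1] == 'L':
            ceiling *= d                 # register shift persists
        out.append(ceiling if t == 'H' else l)
    return out

def coarticulated_f0(tones, h=160.0, l=100.0, k=0.25):
    """Each tone is realized partway between its own target and the
    preceding realized pitch: a post-L H is depressed but gradually
    regains its target, and no new ceiling is set."""
    out = []
    for t in tones:
        goal = h if t == 'H' else l
        prev = out[-1] if out else goal
        out.append(round(goal + k * (prev - goal), 1))
    return out

seq = ['H', 'L', 'H', 'H', 'H']          # cf. T1T4T1T1T1 in Figure 35.8
print(downstep_f0(seq))       # [160.0, 100.0, 136.0, 136.0, 136.0]
print(coarticulated_f0(seq))  # [160.0, 115.0, 148.8, 157.2, 159.3]
```

Under strict HLHL alternation both functions yield a downtrend, but only the first retains the lowered ceiling once the Ls stop; that persisting register shift is exactly what Figures 35.8 and 35.9 show Mambila to lack.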

9 Conclusion

The paradigm case of downstep occurs in a language with two tones, H and L. H is downstepped before any other tone, regardless of whether the language has two or three (or more) tones, and its lowering introduces a terracing effect: a new ceiling is set for subsequent tones of the same phonological value. In this respect a clear connection exists between non-automatic and automatic downstep. Declination and downstep are distinct phenomena; there is, however, perhaps some question as to what constitutes downdrift, whether it is synonymous with automatic downstep, with declination, or is something separate. I have suggested here that it should be seen as a separate phenomenon. While there currently appears to be a consensus around some of the main issues involved in understanding downstep, several issues have not been discussed here in any depth; there remains a great deal which is unresolved and in need of further research.

REFERENCES

Ahoua, Firmin. 1996. Prosodic aspects of Baule. Cologne: Rüdiger Köppe Verlag.
Ahoua, Firmin. 2002. The phonology of upsweep. In Rainhard Rapp (ed.) Linguistics on the way into the third millennium, 697–704. Frankfurt: Peter Lang.
Akinlabi, Akinbiyi & Mark Liberman. 1995. On the phonetic interpretation of the Yoruba tonal system. In Kjell Elenius & Peter Branderud (eds.) Proceedings of the 13th International Congress of the Phonetic Sciences, vol. 1, 42–45. Stockholm: KTH & Stockholm University.


Armstrong, Robert G. 1968. Yala (Ikom): A terraced-level language with three tones. Journal of West African Languages 5. 49–58.
Beckman, Mary E. & Janet B. Pierrehumbert. 1986. Intonational structure in Japanese and English. Phonology Yearbook 3. 255–309.
Bendor-Samuel, John (ed.) 1974. Ten Nigerian tone systems. Jos: Institute of Linguistics.
Berg, Rob van den, Carlos Gussenhoven & Toni Rietveld. 1992. Downstep in Dutch: Implications for a model. In Docherty & Ladd (1992), 335–359.
Bird, Steven & Oliver Stegen. 1995. The Bamileke Dschang associative construction: Instrumental findings. Edinburgh: Centre for Cognitive Science, University of Edinburgh.
Boyd, Raymond. 1981. À propos de la notion de faille tonale. In Gladys Guarisma (ed.) Tons et accents dans les langues africaines, 39–64. Paris: SELAF.
Carlson, Robert. 1983. Downstep in Supyire. Studies in African Linguistics 14. 35–45.
Christaller, J. G. 1875. A grammar of the Asante and Fante language. Basel: Basel Evangelical Missionary Society.
Clark, Mary. 1993. Representation of downstep in Dschang Bamileke. In van der Hulst & Snider (1993), 29–73.
Clements, G. N. 1979. The description of terraced-level tone languages. Language 55. 536–558.
Clements, G. N. 1983. The hierarchical representation of tone features. In Ivan R. Dihoff (ed.) Current approaches to African linguistics, vol. 1, 145–176. Dordrecht: Foris.
Clements, G. N. 1990. The status of register in intonation theory: Comments on the papers by Ladd and by Inkelas and Leben. In Kingston & Beckman (1990), 58–71.
Clements, G. N. & Kevin C. Ford. 1980. On the phonological status of downstep in Kikuyu. In Didier Goyvaerts (ed.) Phonology in the 1980s, 309–357. Ghent: E. Story-Scientia.
Connell, Bruce. 1999. Four tones and downtrend: A preliminary report on pitch realization in Mambila. In Paul Kotey (ed.) New dimensions in African linguistics and languages: Trends in African linguistics, vol. 3, 75–88. Trenton, NJ: Africa World Press.
Connell, Bruce. 2002. Downdrift, downstep and declination. In Gut & Gibbon (2002), 3–12.
Connell, Bruce. 2003. Pitch realization and the four tones of Mambila. In Shigeki Kaji (ed.) Cross-linguistic studies of tonal phenomena: Historical development, phonetics of tone, and descriptive studies, 181–197. Tokyo: ILCAA.
Connell, Bruce & D. Robert Ladd. 1990. Aspects of pitch realization in Yoruba. Phonology 7. 1–29.
Daly, John P. 1993. Representation of tone in Peñoles Mixtec. Dallas: Summer Institute of Linguistics.
Docherty, Gerard J. & D. Robert Ladd (eds.) 1992. Papers in laboratory phonology II: Gesture, segment, prosody. Cambridge: Cambridge University Press.
Gandour, Jackson, Siripong Potisuk & Sumalee Dechongkit. 1994. Tonal coarticulation in Thai. Journal of Phonetics 22. 477–492.
Gibbon, Dafydd. 1987. Finite state processing of tone systems. In Proceedings of the 3rd Conference of the European Chapter of the Association for Computational Linguistics, 291–297.
Gibbon, Dafydd. 2001. Finite state prosodic analysis of African corpus resources. In Paul Dalsgaard, Børge Lindberg, Henrik Benner & Zheng-hua Tan (eds.) Proceedings of Eurospeech 2001, 83–86. Aalborg: ISCA Archive.
Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT.
Guarisma, Gladys. 1978. Etudes vouté (langue bantoïde du Cameroun). Paris: SELAF.
Gussenhoven, Carlos. 2004. The phonology of tone and intonation. Cambridge: Cambridge University Press.
Gut, Ulrike & Dafydd Gibbon (eds.) 2002. Typology of African prosodic systems. Bielefeld: University of Bielefeld.
Hombert, Jean-Marie. 1974. Universals of downdrift: Their phonetic basis and significance for a theory of tone. Studies in African Linguistics, Supplement 5. 169–183.
Hulst, Harry van der & Keith L. Snider (eds.) 1993. The phonology of tone: The representation of tonal register. Berlin & New York: Mouton de Gruyter.


Hyman, Larry M. 1979. A reanalysis of tonal downstep. Journal of African Languages and Linguistics 1. 9–29.
Hyman, Larry M. 1985. Word domains and downstep in Bamileke-Dschang. Phonology Yearbook 2. 47–83.
Hyman, Larry M. 1993. Register tones and tonal geometry. In van der Hulst & Snider (1993), 75–108.
Hyman, Larry M. 2001. Tone systems. In Martin Haspelmath, Ekkehard König, Wulf Oesterreicher & Wolfgang Raible (eds.) Language typology and language universals: An international handbook, vol. 2, 1367–1380. Berlin & New York: Mouton de Gruyter.
Hyman, Larry M. 2007. Kuki-Thaadow: An African tone system in Southeast Asia. University of California Berkeley Phonology Lab Annual Report. 1–19.
Inkelas, Sharon & William R. Leben. 1990. Where phonology and phonetics intersect: The case of Hausa intonation. In Kingston & Beckman (1990), 17–34.
IPA. 1949. The principles of the International Phonetic Association. London: University College.
Kingston, John & Mary E. Beckman (eds.) 1990. Papers in laboratory phonology I: Between the grammar and physics of speech. Cambridge: Cambridge University Press.
Kubozono, Haruo. 1989. Syntactic and rhythmic effects on downstep in Japanese. Phonology 6. 39–67.
Ladd, D. Robert. 1984. Declination: A review and some issues. Phonology Yearbook 1. 53–74.
Ladd, D. Robert. 1992. An introduction to intonational phonology. In Docherty & Ladd (1992), 321–334.
Ladd, D. Robert. 1993. In defense of a metrical theory of intonational downstep. In van der Hulst & Snider (1993), 109–132.
Ladd, D. Robert. 2008. Intonational phonology. 2nd edn. Cambridge: Cambridge University Press.
Laniran, Yetunde. 1992. Intonation in tone languages: The phonetic implementation of tones in Yoruba. Ph.D. dissertation, Cornell University.
Laniran, Yetunde & G. N. Clements. 2003. Downstep and high raising: Interacting factors in Yoruba tone production. Journal of Phonetics 31. 203–250.
Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT.
Leben, William R. 1984. Intonation in Chadic languages. Studies in African Linguistics, Supplement 9. 191–195.
Leben, William R. & Firmin Ahoua. 1997. Prosodic domains in Baule. Phonology 14. 113–132.
Leben, William R., Sharon Inkelas & Mark Cobler. 1989. Phrases and phrase tones in Hausa. In Paul Newman & Robert Botne (eds.) Current approaches to African linguistics, vol. 5, 45–61. Dordrecht: Foris.
Lindau, Mona. 1986. Testing a model of intonation in a tone language. Journal of the Acoustical Society of America 80. 757–764.
Manfredi, Victor. 1993. Spreading and downstep: Prosodic government in tone languages. In van der Hulst & Snider (1993), 133–184.
Meeussen, A. E. 1970. Tone typologies for West African languages. African Language Studies 11. 266–271.
Meeussen, A. E. & D. Ndembe. 1964. Principes de tonologie yombe (Kongo occidental). Journal of African Languages 3. 135–161.
Mock, Carol C. 1981. Tone sandhi in Isthmus Zapotec: An autosegmental account. Paper presented at the Ithaca Symposium of PILEI, Cornell University.
Odden, David. 1986. On the role of the Obligatory Contour Principle in phonological theory. Language 62. 353–383.
Perrin, Mona J. 1974. Mambila. In Bendor-Samuel (1974), 93–108.
Pierrehumbert, Janet B. 1980. The phonology and phonetics of English intonation. Ph.D. dissertation, MIT.
Pierrehumbert, Janet B. & Mary E. Beckman. 1988. Japanese tone structure. Cambridge, MA: MIT Press.


Pike, Eunice V. & Kent Wistrand. 1974. Step-up terrace tone in Acatlán Mixtec (Mexico). In Ruth M. Brend (ed.) Advances in tagmemics, 81–104. Amsterdam: North Holland.
Poser, William J. 1984. The phonetics and phonology of tone and intonation in Japanese. Ph.D. dissertation, MIT.
Pulleyblank, Douglas. 1986. Tone in Lexical Phonology. Dordrecht: Reidel.
Rialland, Annie. 2001. Anticipatory raising in downstep realization: Evidence for preplanning in tone production. In Shigeki Kaji (ed.) Cross-linguistic studies of tonal phenomenon: Tonogenesis, typology, and related topics, 301–322. Tokyo: ILCAA.
Rialland, Annie & P. A. Somé. 2000. Dagara downstep: How speakers get started. In Vicki Carstens & Fredrick Parkinson (eds.) Advances in African linguistics: Trends in African linguistics, 251–263. Trenton, NJ: Africa World Press.
Russell, Jann. 1996. Some tonal sandhi rules in Moba. Togo: Summer Institute of Linguistics.
Salffner, Sophie. 2009. Tone in the phonology, lexicon and grammar of Ikaan. Ph.D. dissertation, University of London.
Schachter, Paul. 1965. Some comments on John M. Stewart's "The typology of the Twi tone system." Bulletin of the Institute of African Studies 1. 28–42.
Snider, Keith L. 1990. Tonal upstep in Krachi: Evidence for a register tier. Language 66. 453–474.
Snider, Keith L. 1998. Phonetic realization of downstep in Bimoba. Phonology 15. 77–101.
Snider, Keith L. 1999. The geometry and features of tone. Dallas: Summer Institute of Linguistics & University of Texas at Arlington.
Snider, Keith L. & Harry van der Hulst. 1993. Issues in the representation of tonal register. In van der Hulst & Snider (1993), 1–27.
Stewart, John M. 1965. The typology of the Twi tone system. Preprint from the Bulletin of the Institute of African Studies 1. 1–27.
Stewart, John M. 1983. Downstep and floating low tones in Adioukrou. Journal of African Languages and Linguistics 5. 57–78.
Stewart, John M. 1993. Dschang and Ebrié as Akan-type total downstep languages. In van der Hulst & Snider (1993), 185–244.
Tadajeu, Maurice. 1974. Floating tones, shifting rules, and downstep in Dschang-Bamileke. Studies in African Linguistics, Supplement 5. 283–290.
Thomas, Elaine. 1974. Engenni. In Bendor-Samuel (1974), 13–26.
Thwing, Rhonda & John Watters. 1987. Focus in Vute. Journal of African Languages and Linguistics 9. 95–121.
Urua, Eno-Abasi E. 1996–97. A phonetic analysis of Ibibio tones: A preliminary investigation. Journal of West African Languages 26. 15–25.
Urua, Eno-Abasi E. 2002. The tone system of Ibibio. In Gut & Gibbon (2002), 65–85.
Ward, Ida C. 1933. The phonetic and tonal structure of Efik. Cambridge: Heffer.
Welmers, William E. 1959. Tonemics, morphotonemics, and tonal morphemes. General Linguistics 4. 1–9.
Welmers, William E. 1965. Some comments on John M. Stewart's "The typology of the Twi tone system." Preprint from the Bulletin of the Institute of African Studies 1. 28–42.
Welmers, William E. 1973. African language structures. Berkeley: University of California Press.
Winston, F. Dennis. 1960. The "mid" tone in Efik. African Language Studies 1. 185–192.
Xu, Yi. 1994. Production and perception of coarticulated tones. Journal of the Acoustical Society of America 95. 2240–2253.
Yip, Moira. 1980. The tonal phonology of Chinese. Ph.D. dissertation, MIT.
Yip, Moira. 1989. Contour tones. Phonology 6. 149–174.
Yip, Moira. 1993. Tonal register in East Asian languages. In van der Hulst & Snider (1993), 245–268.
Yip, Moira. 2002. Tone. Cambridge: Cambridge University Press.

36 Final Consonants

Marie-Hélène Côté

1 Introduction

Final consonants, in the stem, the word, or the phrase, often display properties that set them apart from consonants in other positions. Basic principles of syllabification predict that final consonants are codas (see also chapter 33: syllable-internal structure) and, as such, are expected to pattern like non-final codas. Final consonants thus pose an analytical challenge when this expectation is not fulfilled. Languages in which final consonants simply mirror internal codas are referred to as “symmetrical.” In Manam, only nasals appear in both positions, and stress is regularly attracted to closed syllables, internal and final (1a); default stress is penultimate in the absence of closed syllables (1b). So final consonants in Manam display the same segmental profile and stress-attracting power as internal codas (Buckley 1998; Piggott 1999).

(1) a. [ˈembegi]    ‘sacred flute’
       [ʔuˈlaŋ]     ‘desire’
       [uraˈpundi]  ‘I waited for them’
    b. [waˈbubu]    ‘night’
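A minimal procedural sketch of the stress generalization in (1), assuming pre-parsed syllables; picking the rightmost closed syllable when several are present is a tie-breaking assumption of this sketch, not a claim from Buckley (1998) or Piggott (1999).

```python
VOWELS = set('aeiou')

def manam_stress(syllables):
    """Return the index of the stressed syllable: a closed syllable
    attracts stress; otherwise default stress is penultimate."""
    closed = [i for i, s in enumerate(syllables) if s[-1] not in VOWELS]
    if closed:
        return closed[-1]               # closed syllables attract stress
    return max(len(syllables) - 2, 0)   # default: penultimate

print(manam_stress(['em', 'be', 'gi']))        # 0, cf. (1a)
print(manam_stress(['u', 'ra', 'pun', 'di']))  # 2, cf. (1a)
print(manam_stress(['wa', 'bu', 'bu']))        # 1, cf. (1b)
```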

Spanish and Selayarese offer other illustrations of the correspondence between internal codas and final consonants. As shown by Harris's (1983: 14–15) list of word-medial and final rimes in Spanish, the set of permissible codas is the same in both positions and includes any consonantal category, possibly followed by [s]. Final syllables closed by consonants also attract stress, which is consistent with their contributing weight to the syllable, as coda consonants regularly do cross-linguistically (chapter 57: quantity-sensitivity). In Selayarese (Mithun and Basri 1986), medial codas are restricted to homorganic nasals, the first parts of geminates, and [ʔ]; final consonants are limited to [ʔ] and [ŋ]. Assuming that [ʔ] and [ŋ] lack place specification (e.g. Paradis and Prunet 1993; Lombardi 2002), all codas can be characterized by the absence of independent place features (chapter 7: feature specification and underspecification; chapter 22: consonantal place of articulation): they are placeless or acquire the place of the following onset.


In many other languages, final consonants pattern differently from internal codas. First, the right edge of constituents regularly hosts more consonants than internal codas may accommodate. Second, final consonants may be ignored in the application of metrical processes, while internal codas cannot be. These two tendencies, formulated in (2), define final consonant exceptionality.

(2) a. Segmental immunity
       Final consonants escape segmental constraints that apply to internal codas.
    b. Metrical invisibility
       Final consonants are ignored in the application of metrical processes.

Both patterns occur in Cairene Arabic. While only one consonant is allowed in phrase-internal codas, two may appear phrase-finally (Wiltshire 2003); the additional final consonant is said to escape the coda conditions applicable elsewhere in the phrase. In addition, word stress is attracted to non-final CVC syllables and final CVCC/CVVC, as opposed to CV and final CVC, as if the final consonant were invisible to the stress assignment algorithm (Hayes 1995). Final consonant exceptionality has attracted considerable attention in the development of modern phonological theory. Segmental immunity (2a), clearly the most widely discussed aspect of final exceptionality, is treated in §2–§4, starting in §2 with a review of various representative patterns (generalizations). Analyses of the special behavior of final consonants have almost exclusively relied on special accommodations to syllable structure in final position; different representational devices are examined and compared in §3 (representations). Representations, however, provide only part of the story: they offer a formal frame for the expression of the specificity of final consonants, but no explanation for it. §4 (motivations) is concerned with the formal, grammatical, or functional factors that have been called upon to account for the freedom of occurrence of right edge consonants. Metrical invisibility (2b) and its relationship to segmental immunity are addressed in §5. Final consonants are also implicated in other processes, which are not reviewed in this chapter, since they fall under different topics. First, if final consonants appear with greater freedom, they are also regularly subject to deletion processes (chapter 68: deletion). Final clusters variably simplify in many languages. The factors that govern simplification, however, appear to be relevant to final and non-final clusters alike (e.g. Côté 2004), and I have chosen not to address this topic. Single final consonants also delete, giving rise to various types of C/Ø alternations. Examples include French liaison (see chapter 112: french liaison), linking [r] in non-rhotic dialects of English (Hay and Sudbury 2005; among many others), and Maori verbal forms (Blevins 1994). Interestingly, such cases may involve a re-analysis of historically word-final consonants as epenthetic consonants (Vennemann 1972). Finally, final consonants are subject to resyllabification with a following initial segment. Both C/Ø alternations and resyllabification fall under the scope of external sandhi phenomena.

2 Segmental immunity: Generalizations

The immunity of final consonants emerges in static segmental distributions in the lexicon (§2.1) and in the asymmetrical application of segmental processes (§2.2). More consonants are licensed in final than in internal coda position, allowing additional segmental slots (size effects) or a wider range of place, manner, or laryngeal contrasts (feature effects).

2.1 In the lexicon

Eastern Ojibwa (Piggott 1991, 1999) and Tojolabal (Supple and Douglass 1949; Lombardi 1995) exemplify increased licensing possibilities in final position in manner and laryngeal features. In Ojibwa, while nasals and fricatives are permissible codas in all positions in the word (3a), stops are only allowed word-finally (3b). In Tojolabal, the contrast between plain/aspirated and laryngeal stops and affricates is neutralized in word-internal codas, where only plain segments appear (4a), but remains active in onsets and word-finally (4b).

(3) a. [baŋgisin]   ‘it falls’
       [moːʃkineː]  ‘it is full’
       [wiːjaːs]    ‘meat’
    b. [nindib]     ‘my head’
       [ninik]      ‘my arm’

(4) a. [hutp’in-]   ‘to push’
       [ʔatnija]    ‘you bathed’
    b. [potot’]     ‘class of plant’
       [k’ak]       ‘flea’

French illustrates the presence of additional consonantal slots word-finally. While it admits a large variety of final clusters of up to four consonants (5), all morpheme-internal clusters may be analyzed with codas limited to one consonant (Dell 1995). Final clusters include sequences of rising sonority, in violation of the Sonority Sequencing Principle (e.g. Clements 1990; chapter 49: sonority). This can be taken as a further indication that final consonants are not regular codas: they exceed the possibilities offered by the syllable template applicable elsewhere in the word not only in terms of the number of segments, but also in their relative autonomy with respect to general syllabic principles.

(5) [adɔpt]        ‘adopt’
    [sɛrkl]        ‘circle’
    [ɑ̃bidɛkstr]    ‘ambidextrous’

English offers different kinds of final exceptionality effects. As in French, more consonants are found finally than in internal codas: up to three in monomorphemic words (e.g. next) and four with the addition of word-level suffixes (e.g. thousandths) vs. only one internally (exceptionally two, as in empty; see Borowsky 1986). Unlike French, however, English does not tolerate word-final sequences of an obstruent followed by a sonorant. In addition, English displays asymmetries in vowel + consonant combinations. Word-finally, long vowels are followed by any consonant (6a); morpheme-internally, long vowels in closed syllables appear in restricted contexts: before fricative + stop (6b) or a sonorant homorganic with the following onset (6c), often with additional combinatorial constraints. Coronal obstruents enjoy a special status in final position (chapter 12: coronals). For example, long vowels are not followed by clusters except coronal ones (6d), [d] is the only voiced stop allowed after nasal consonants (6e), and vowel reduction is more likely to apply before coronals than non-coronals (6f) (e.g. Borowsky 1986; Burzio 2007).

(6) a. soap, reach
    b. pastry, auspices, after1
    c. council, chamber, example
    d. wild, paint
    e. blind, bond (vs. bomb, long)
    f. Unreduced final vowel: Adirondack, insect, chipmunk
       Reduced final vowel: Connecticut, Everest, elephant

2.2 In segmental processes

The asymmetrical application of segmental processes may also give rise to final exceptionality effects. For example, word-internal complex codas may be simplified by consonant deletion or vowel epenthesis, while final clusters are left intact; final vowel deletion may create configurations that are not tolerated in medial syllables. Various feature-changing processes may also result in additional contrasts being tolerated in final consonants. Although the majority of cases of final immunity appear to involve the word, exceptionality effects have also been reported at the stem and phrase levels. Interestingly, both morphosyntactic (stem) and prosodic (phrase) constituents appear to be targeted. However, the morphosyntactic or prosodic status of the word is usually unclear, being either left unspecified or assumed without argumentation. Note that certain processes described at the word level may actually involve the phrase, since words are often considered in isolation. (See chapter 51: the phonological word.) Numerous patterns can be identified, depending on the specific configuration that is asymmetrically tolerated in final position, the process (deletion, epenthesis, or other) subject to asymmetrical application, and the level (stem, word, phrase) at which it applies. Some combinations are illustrated below (see Côté 2000 for additional cases).

Kamaiurá allows codas only word-finally (Everett and Seki 1985). This language has a reduplication process that copies to the right the last two syllables of the base. When the base ends in a consonant, this consonant is lost word-medially, and surfaces only in the reduplicant (7).

(7) /o-mo-kon-mo-kon/   [omokomokon]     ‘he swallowed it frequently’
    /je-umirik-mirik/   [jeumirimirik]   ‘I tie up repeatedly’

Kayardild (Evans 1995) displays a similar effect involving vowel deletion at the phrase level. Word-final [a] deletes phrase-finally but is kept before another word. This is illustrated in (8) with the two words [cirkuç-uŋ-ka] ‘from the north’ and [}aː-}a] ‘he returned’ pronounced in either order: only the second, phrase-final word loses its final [a]. See Piggott (1991) for other cases of apocope.

(8) [cirkuçuŋka }aː}]
    [}aː}a cirkuçuŋk]

1 Long vowels before [ft] (6b) and [mp] (6c) are restricted to [ɑː], and only occur in some dialects, e.g. Southern British English.

Cairene Arabic allows CVCC syllables phrase-finally (9a), but enforces a CVC template phrase-internally. Vowel epenthesis prevents complex codas when clusters of more than two consonants are created through suffixation (9b) and word concatenation (9c) (e.g. Broselow 1980; Wiltshire 2003).

(9) a. /katab-t/         [katabt]          ‘you wrote’
       /bint/            [bint]            ‘girl’
    b. /katab-t-l-u/     [katabtilu]       ‘I/you wrote to him’
    c. /katabt gawaab/   [katabtigawaab]   ‘you (masc) wrote a letter’
       /bint nabiiha/    [bintinabiiha]    ‘an intelligent girl’
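The repairs in (9b) and (9c) amount to a single statement: insert [i] after the second consonant of any three-consonant string. The sketch below implements just that simplification; the function name and the reduction of Cairene syllabification to a consonant-class regex are assumptions of the illustration, not the analyses of Broselow (1980) or Wiltshire (2003). Phrase-final CC survives because no third consonant follows it.

```python
import re

CONS = 'bcdfghjklmnpqrstvwxyz'  # rough consonant class for the sketch

def epenthesize(phrase):
    """Break CCC strings as CCiC; leave final CC clusters intact."""
    s = phrase.replace(' ', '').replace('-', '')
    return re.sub(rf'([{CONS}][{CONS}])(?=[{CONS}])', r'\1i', s)

print(epenthesize('katab-t-l-u'))    # katabtilu
print(epenthesize('katabt gawaab'))  # katabtigawaab
print(epenthesize('bint nabiiha'))   # bintinabiiha
print(epenthesize('katab-t'))        # katabt (phrase-final CC tolerated)
```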

Ondarroa Basque exemplifies a more complex case, in which exceptionality effects apply simultaneously at the word and phrase levels. Three different processes conspire to prevent the appearance of stops and affricates in coda position: vowel epenthesis, stop deletion, and affricate simplification. The choice of the repair strategy depends on a complex interplay of lexical and syntactic factors; see Côté (2000) for details. Root-final stops and affricates are excluded before consonant-initial suffixes inside words (10a), are optionally retained before consonant-initial words inside phrases (10b), and remain intact phrase-finally (10c).

(10) a. /kiʃket-tsat/           [kiʃketatsat]                             ‘lock-prolative’
        /lapits-tʃo/            [lapitsatʃo] ~ [lapistʃo]                 ‘pencil+dim’
     b. /iɾu kiʃket bota dot/   [iɾukiʃket(a)botaɾot]                     ‘I have thrown three locks’
        /semat mutil/           [sema(t)mutil]                            ‘how many boys’
        /eskats bat/            [eskatsbat] ~ [eskasbat] ~ [eskatsabat]   ‘a/one kitchen’
     c. /lau silbot/            [lausilbot]                               ‘four prominent bellies’
        /bost okots/            [bostokots]                               ‘five chins’

Balantak displays different feature restrictions, involving manner and place of articulation at the stem level (Broselow 2003). Codas are limited to homorganic nasals inside morphemes (11a) and at the prefix–root juncture, where impossible codas are avoided by place assimilation (11b), deletion (11c), or epenthesis (11d). In contrast, all consonants other than voiced stops and glides may appear root-finally (12a), and root-final nasals fail to assimilate in place to the following consonant (12b), pointing to the privileged status of the root-final position.

(11) a. gampal          ‘underlayer’
        uŋgak           ‘hornbill bird’
     b. /niŋ-borek/     [nimborek]     ‘lied’
        /miŋ-sapit/     [minsapit]     ‘hidden’
     c. /saŋ-loloon/    [saloloon]     ‘one thousand’
        /moʔ-tokol/     [motokol]      ‘to lie down’
     d. /mVŋ-roŋor/     [moŋoroŋor]    ‘to hear’

(12) a. /siok-ta/       [siokta]       ‘our (incl) chicken’
        /bantil-kon/    [bantilkon]    ‘inform (benefactive)’
     b. /laigan-ku/     [laiganku]     ‘my house’
        /wuruŋ-ta/      [wuruŋta]      ‘our (incl) language’

Patterns so far have been described in terms of more consonants being allowed at the right edge. Another language type has been put forward, which requires constituents to end in a consonant. This possibility is instantiated in Yapese, the case most commonly discussed (Piggott 1991, 1999; Broselow 2003; Wiltshire 2003). This language has no internal codas, but a generalized final short vowel deletion process, which results in words ending in a consonant on the surface. (Final long vowels shorten but do not delete.) The status of Yapese as a distinct type is questionable. As in Kayardild (8), vowel deletion applies finally but not internally (at the word level rather than the phrase), leading to the same generalization as other cases of final consonant immunity: consonants are more easily tolerated in final position. The Yapese pattern may be interpreted as favoring vowel deletion to the extent that it results in phonotactically acceptable forms, rather than actively requiring words to end in a consonant. Menominee is another language in which words end in a consonant, lexically or as a result of final vowel deletion (Bloomfield 1962).2
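The Yapese word-level process reduces to a one-line statement, sketched here with invented strings (they are not Yapese forms) and final long vowels written as doubled letters: removing one final vowel letter deletes a final short vowel and merely shortens a final long one.

```python
VOWELS = set('aeiou')

def yapese_word(word):
    """Delete a final short vowel; shorten (not delete) a final long
    vowel. Both fall out of removing one final vowel letter."""
    return word[:-1] if word[-1] in VOWELS else word

print(yapese_word('taka'))   # 'tak'  (final short vowel deleted)
print(yapese_word('takaa'))  # 'taka' (final long vowel merely shortened)
```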

3 Segmental immunity: Representations

If final consonants escape the conditions applying to codas in other positions, their identity as codas is called into question or must be qualified. At least four directions have been explored to account for the internal–final asymmetry. (i) One consists in admitting position-specific syllable well-formedness conditions, for example by defining different coda constraints for final and non-final syllables. This approach is often taken at a descriptive level but it has not been favored in analytical work. (ii) Uniform syllabic conditions may be maintained across positions but violated at edges under pressure from independent constraints. Recent Optimality Theory (OT) analyses have often relied on this type of reasoning; see §4 for a discussion of some relevant factors. (iii) Another line of research has explored the idea that syllabic structure is irrelevant in all or some of the final immunity effects, which arise through sequential generalizations. It has been argued, for instance, that final clusters in English and other languages are accounted for with a constraint limiting sequences of consonants to only one place of articulation (with coronals unspecified for place in English) (Iverson 1990; Yip 1991; Lamontagne 1993; see also Burzio 2007). Such a sequential generalization allows a unified account of consonant clusters in all positions in the word. Côté (2000) takes a more radically non-syllabic approach to consonant phonotactics in general, and final edge effects in particular, which are defined in terms of segment sequencing and adjacency to constituent boundaries (see §4.4). (iv) However, the most widespread approach in the last 40 years has consisted in elaborating specific syllabic representations that distinguish final consonants from internal codas. The main proposals are presented and compared below: appendices, defective syllables, attachment to higher prosodic constituents, extraprosodicity, and non-moraic consonants. These structural possibilities are illustrated in (13b)–(13f) for the English word wild. Each configuration allows final consonants to escape the conditions that apply to internal codas. The straightforward complex coda approach, which does not involve a specific final representation, is given for comparison in (13a). Notice that these devices are not mutually exclusive and have occasionally been combined. For example, Borowsky (1986) uses both extraprosodic and appendix consonants, Iverson (1990) appendices and empty-headed syllables.3

2 If words may be required to end in a consonant, we should find cases of systematic final consonant epenthesis, instead of vowel deletion. Interestingly, I am aware of no such cases at the word level. However, many examples of phrase-final consonant epenthesis are reported (see some examples in Trigo 1988). This asymmetry between the word and the phrase needs to be investigated further, but it suggests that phrase-final epenthesis corresponds to an articulatory closure effect that is not relevant word-finally.

(13) a. Coda   b. Appendix   c. Defective syllable   d. Attachment to PWd   e. Extraprosodicity   f. Non-moraic “coda”

[Tree diagrams for (13a)–(13f) appear here, each parsing the English word wild, [waɪld], under a prosodic word (PWd) node: (a) [ld] as a complex coda of the syllable; (b) [d] in an appendix (App) constituent; (c) [d] in a defective syllable; (d) [d] attached directly to the PWd; (e) [d] extraprosodic (Ex); (f) [d] attached without a mora of its own.]

3 Another (partial) solution to the free occurrence of final consonants exploits the idea that certain combinations of consonants form complex phonemes and count as a single unit. It has been applied in particular to [s] + obstruent sequences in English (Fudge 1969; Fujimura and Lovins 1982; Selkirk 1982; Wiese 1996 for German; see also chapter 38: the representation of sc clusters). A word like wild, on this view, contains only one post-vocalic segment. Duanmu (2008) develops a richer theory of complex sounds, which allows him to maintain a simple CVX syllable template; see §4.5.


Appendix (13b): It has been argued that final consonants belong to a separate constituent that hosts consonants that do not fit into the coda. This constituent has been variably called “appendix”4 (e.g. Halle and Vergnaud 1980; Mohanan 1982; Charette 1984; Borowsky 1986; Goldsmith 1990; Iverson 1990; Wiltshire 1994; Booij 1995; Kraehenmann 2001), “affix” (Fujimura and Lovins 1982), and “termination” (Fudge 1969). By stipulation, the appendix is available only in word-final position. This constituent is usually attached to the syllable node, as in (13b); it is alternatively part of the word structure, as a sister to the syllable. Two types of affixes may be distinguished, for non-suffixal and suffixal consonants (Goldsmith 1990; Duanmu 2008).

4 The term “appendix” has also been used to refer to non-moraic “coda” consonants (13f) (Sherer 1994; Zec 2007) or consonants attached directly to the syllable or prosodic word nodes (13d) (Rosenthall and van der Hulst 1999).

Defective syllables (13c): Final consonants are by default taken to be part of the syllable headed by the closest preceding vowel. This assumption is regularly challenged by claims that these consonants in fact belong to a separate syllable, one without a pronounced nucleus (e.g. McCarthy 1979; Selkirk 1981; Iverson 1990; Burzio 1994; Dell 1995; Bye and de Lacy 2000; Cho and King 2003). Representational and terminological details abound here. These special syllables have been termed “degenerate,” “empty-headed,” “minor,” “defective,” “semi-syllables,” and “catalectic.” They may or may not contain a nucleus position; the consonants may be onsets, rimes, or segments attached directly to the syllable node. Different types of degenerate syllables may even be distinguished, for example moraic vs. non-moraic (Nair 1999), or syllables whose nucleus position is empty vs. those whose nucleus is occupied by segmental material shared by the onset (Goad 2002; Goad and Brannen 2003). Final consonants have been considered to be universally onsets, notably in the model of Government Phonology (Kaye 1990; Harris and Gussmann 2002). Others advocate a mixed coda vs. onset approach to final consonants, depending on their segmental profile and behavior, and determined on a language-specific basis or even varying within the same language (Piggott 1991, 1999; Goad 2002; Rice 2003).

Attachment to higher prosodic constituents (13d): Final consonants may attach directly to prosodic constituents higher than the syllable, usually the prosodic word (PWd), but also phrasal constituents (e.g. Rubach and Booij 1990; Rialland 1994; Rubach 1997; Wiltshire 1998, 2003; Auger 2000; Spaelti 2002). As a variation on this theme, Piggott (1999) considers that final consonants are codas or onsets of empty-headed syllables licensed by, rather than attached to, the prosodic word. Attachment to higher prosodic constituents implies that the relevant domains for final consonant exceptionality are prosodic in nature. This proposal does not directly account for additional contrasts or slots at the end of morphosyntactic constituents, such as the stem (Broselow 2003).

Extraprosodicity (13e): The most prevalent approach to final consonant exceptionality involves the concept of extraprosodicity (or extrametricality; see chapter 43: extrametricality and non-finality). Originally designed to exclude final syllables from stress assignment algorithms (Liberman and Prince 1977), extraprosodicity has been extended to final consonants by Hayes (1980, citing a presentation by K. P. Mohanan 1982) for stress and Steriade (1982) for syllabification. Designating final consonants extraprosodic makes them invisible for the purposes of syllabification, stress assignment, and other processes. The consonants are later adjoined to prosodic structure, in conformity with the principle of Prosodic Licensing (Itô 1986), at a stage in the derivation when syllabic constraints are no longer applicable and metrical structure has already been built. Extraprosodicity is subject to the Peripherality Condition, which restricts it to edges of constituents. Here again, the theme of extraprosodicity allows for numerous variations, regarding its universality and the level at which it operates. Final consonants are claimed to be universally extraprosodic at the lexical level (Borowsky 1986; Itô 1986). At the word level, extraprosodicity is parametrized (Itô 1986) or universal (Piggott 1991). Itô argues that it is turned off post-lexically, but cases of final consonant exceptionality at the phrasal level motivate its possible extension to post-lexical phonology (Rice 1990).

Non-moraic “coda” consonants (13f): In languages with moraic codas, additional final consonants may be represented as non-moraic (Lamontagne 1993; Sherer 1994; Hall 2002; Kiparsky 2003).

The merits and disadvantages of each of these approaches depend in large part on theory-internal considerations. Each must give up on at least one established principle or generalization of phonological theory. The idea of enriching syllable structure with an appendix constituent has encountered some resistance, since it involves position-specific syllable architecture. Little evidence has been adduced to motivate the appendix as a constituent, which would be expected to act as a trigger or target of some phonological processes; the only case known to me is Mohanan's (1982) suggestion that [r] depalatalizes in the appendix position in Malayalam. Attachment to higher prosodic constituents violates the principle of exhaustive syllabification, as well as strict layering of prosodic constituents. Extraprosodicity requires multiple levels of syllabification, in itself a contentious issue, and may be interpreted as a weakening of the principles of prosodic phonology (Piggott 1999). On the other hand, it avoids syllabic constituents that are otherwise unnecessary (Steriade 1982). Degenerate syllables imply a higher level of abstractness; empty syllabic positions are either viewed as going against the “uncontroversial assumption that syllables must have nuclei” (Rubach 1997: 570–571) or, more positively, as a natural consequence of the phonological architecture in which the segmental and suprasegmental structures are independent of each other (Harris and Gussmann 2002).

Beyond conceptual considerations, at least three issues must be addressed by all approaches relying on specific representations for final consonants. One issue concerns the featural or combinatorial restrictions that additional consonants allowed at the right edge may themselves be subject to, and how these should be expressed. Final obstruent + sonorant sequences occur in French, but not in English. Germanic languages are also well known for allowing word-final strings of voiceless coronal obstruents. The representations in (13b)–(13f) do not make explicit predictions as to the range of consonants they may host, with the exception of the onset approach, according to which final consonants are expected to display onset-like properties. Final consonants or clusters in many languages do have an onset or coda–onset profile (e.g. French; Dell 1995).
But many other patterns appear more challenging for the onset approach, as final consonants are regularly much more limited than onsets. In particular, the claim that final consonants are universally onsets is not readily compatible with languages in which final consonants share the same segmental profile as internal codas (but see Harris and Gussmann 2002 for discussion). A related problem concerns the coda-like behavior of final consonants, even when they do not have a coda segmental profile. In Québec French, for example, high vowels have a lax variant that surfaces variably before internal codas and categorically before final consonants (14). This process has naturally been characterized as applying in closed syllables. If final consonants are not codas, can they be said to “close” the preceding syllable? Or should the laxing context “in closed syllables” be reformulated without reference to closed syllables?

(14) [bɪŋ.go]          ‘bingo’
     [pʊtr] ~ [pʊt]    ‘beam’
     [kʏb]             ‘cube’

Finally, why and how are such special representations restricted to final or edge positions – if indeed they are? Appendices, attachment to higher prosodic constituents, degenerate syllables, and extraprosodicity have been excluded inside constituents by stipulation. But in fact, some of these devices have been extended to non-peripheral positions as well (e.g. Rubach and Booij 1990; Rubach 1997). Relevant factors in answering this question are explored in the next section.

4 Segmental immunity: Motivations

Beyond representations, what factors possibly underlie segmental immunity? In particular, why do the exceptional structures described above occur at the right edge of constituents, or what motivates violations of coda conditions in final position? Five factors are discussed here: alignment, positional faithfulness, licensing parameters, perceptual factors, and morphology. Some of these proposals specifically address final immunity effects, others are integrated into a larger typological perspective and cannot be properly evaluated without considering the full range of phonotactic possibilities in internal and final syllables. As a step toward this objective, let us look at the idealized patterns in (15), which specify the number of consonantal slots available in internal codas vs. final position.

(15)        Internal codas   Final position    Examples
   a. i.    Ø                C (optional)      Kamaiurá (7)
      ii.   Ø                C (obligatory)    Yapese (§2.2)
   b.       C                CC                Cairene Arabic (9)
   c.       Ø                Ø                 see Blevins (1995: 219), e.g. Fijian
   d.       C                C                 Manam (1)
   e.       C                Ø                 Warlpiri (Nash 1980)
   f.       CC               C                 (unclear)

Three categories of languages can be identified: languages that allow more consonants finally (15a) and (15b), symmetrical languages (15c) and (15d), and languages that allow more consonants internally (15e) and (15f). Languages with final immunity effects and symmetrical languages have already been discussed, and require few additional comments. Within type (15a), a further division can be made based on whether final consonants are merely allowed (Kamaiurá) or required (Yapese). As mentioned in §2.2, the status of Yapese as a distinct case of final exceptionality requiring a separate analysis remains unclear. Apart from the Yapese case, all consonantal slots in (15) are optional. Symmetrical languages include those with simple codas or open syllables across the board; languages where complex codas are allowed internally and finally are not included in (15). Types (15e) and (15f), the opposite of (15a) and (15b), illustrate “internal immunity” effects. The pattern in (15e) is not uncommon; it applies to languages that allow internal codas but require words to end in a vowel, including Hixkaryana (Derbyshire 1979) and a number of Australian languages (e.g. Warlpiri; Nash 1980). Type (15f), the mirror image of Cairene Arabic, where complex codas are tolerated only in internal syllables, is more controversial, and its existence not well established. More complex internal codas may relate to factors that are independent from the internal–final distinction: they may be allowed in stressed or initial syllables, at morpheme boundaries, or as a result of coda–onset linking that is not available in final position. Languages in which internal codas in any position might be productively allowed to be more complex than at the right edge need to be further investigated. In any case, pattern (15f) does not appear to be as prominent as the opposite type (15b).

4.1 Alignment

Analyses of final consonant immunity have explored a range of alignment constraints, requiring coincidence between the edges of two prosodic or morphological constituents. First, constraints may force marked structures to appear at an edge. Contour tones, for instance, are limited to peripheral positions in many languages (chapter 45: the representation of tone). Similar restrictions have been proposed for non-canonical syllables used to accommodate additional final consonants: semisyllables (Cho and King 2003) and trimoraic syllables (Hall 2002). For example, Align([μμμ]σ-R, PWd-R) aligns the right edge of a trimoraic syllable with the right edge of a phonological word (Hall 2002). Alternatively, marked structures may be prevented from appearing in internal positions. Thus, Bye and de Lacy (2000) and Clements (1997) exclude non-peripheral moraless syllables and extrasyllabic consonants, respectively, with constraints enforcing mora and syllable adjacency.5

5

These adjacency constraints are defined in terms of alignment or contiguity. But contiguity here involves only output forms and no comparison with inputs, as in other formulations of Contiguity, following McCarthy and Prince (1995).

Final Consonants

12

NoDeletion yields the desired candidate, with medial deletion and final extrasyllabicity, as shown in (16). Cairene Arabic, which allows extra consonants only phrase-finally (9), is analyzed in the same way, with NoComplexCoda and Align-R(Phrase, PWd) in place of NoCoda and Align-R(PWd, q); Clements’s (1997) analysis of Berber uses a similar constraint Align-L(Phrase, q).6 (16)

o-mo-kon-mo-kon NoCoda NoDeletion a. .o.mo.kon.mo.kon. b. .o.mo.ko.mo.ko.

Align-R(PWd,q)

*!*

☞ c. .o.mo.ko.mo.ko.n

**! *

*

Not considered here, however, are candidates with internal consonants linked to the PWd. Internal extrasyllabicity violates none of the constraints in (16), so the candidate [.o.mo.ko.n.mo.ko.n], with each [n] attached to the PWd, should win over that in (16c), since it incurs no violation of NoDeletion. With the addition of another alignment constraint banning internal extrasyllabic segments (Clements 1997), candidate (16c) could emerge as optimal, but so could [.o.mo.ko.n.mo.ko], with an internal extrasyllabic [n] and deletion of the final [n] (under the ranking NoCoda >> NoDeletion and the constraint against internal extrasyllabicity at the bottom). This could be argued to correspond to type (15e), the mirror image of (15a); likewise, types (15f) and (15b) are generated with equal likelihood. In this approach, then, final consonants are no more likely to surface than non-final ones, and final and internal immunity effects are treated symmetrically. Yet another use of alignment is found in McCarthy’s (1993) constraint FinalC, corresponding to Align-R(PWd, C), which requires words to end in a consonant. This constraint has served to account for the Yapese pattern (15a.ii) (Broselow 2003; Wiltshire 2003), but it cannot generate type (15b), in which consonant clusters are allowed finally but eliminated in internal codas, since FinalC does not distinguish between one or two final consonants. Lombardi (2002) and Wiltshire (2003) extend FinalC to the phrasal level; Wiltshire also uses Align-R(PWd, V), the vocalic equivalent of FinalC, to derive languages of type (15e).

4.2

Positional faithfulness

Another approach invokes constraints protecting the right edge of constituents, ensuring that final segments or syllables make it to the surface. Syllable wellformedness is obeyed inside constituents, but violated finally under pressure from right-edge faithfulness (chapter 63: markedness and faithfulness constraints). Such faithfulness constraints have taken different formulations, with slightly different effects and predictions; two are offered in (17).

6

Cairene Arabic also excludes internal CVVC syllables, which cannot be accomplished with NoComplexCoda. Wiltshire’s analysis does not address this issue.

13 (17)

Marie-Hélène Côté a.

b.

Anchor-R(GWd) A segment at the right edge of the grammatical word in the output has a correspondent at the right edge of the grammatical word in the input (Broselow 2003).7 Faith-R The rightmost syllable constituent in the word is faithful to its underlying form (Krämer 2003).

Ranked above the constraints banning codas and consonant deletion, any of these constraints straightforwardly derives the Kamaiurá deletion pattern (7). As shown in (18), the surface form [.o.mo.ko.mo.kon.] emerges as optimal: the final consonant is protected from deletion by the undominated Anchor/Faith-R constraint, while the internal [n] is eliminated by NoCoda outranking NoDeletion. (18)

o-mo-kon-mo-kon Anchor-R(GWd) NoCoda Faith-R a. .o.mo.kon.mo.kon. b. .o.mo.ko.mo.ko. ☞ c. .o.mo.ko.mo.ko.n

NoDeletion

**! *!

** *

*

This approach follows a line of analysis that has been developed for other positions: syllable onsets (vs. codas), roots (vs. affixes), stressed syllables (vs. unstressed ones), initial syllables (vs. non-initial ones), and long vowels (vs. short ones). These positions are protected by specific faithfulness constraints that reflect their privileged psycholinguistic or phonetic status (Beckman 1998). A similar treatment for final segments or syllables (vs. non-final ones) is conceivable, although it has not been functionally motivated in the way other privileged positions have. From a more formal perspective, right-oriented constraints, as in (17) and the alignment constraints in §4.1, go against recent claims that constraints may only refer to the left edge, not the right one. This has been argued for Anchor by Nelson (2003) and for all constraints by Bye and de Lacy (2000), who re-analyze right-edge effects, including final consonant exceptionality, without reference to the right edge. Beyond such conceptual questions, right-edge faithfulness constraints fail to account for cases where vowel deletion applies finally but not medially, as in Kayardild (8), since Anchor excludes final deletion. The constraint FinalC, requiring every word to end in a consonant, has been invoked to counter the effect of Anchor and generate the Yapese pattern. As a result, cases of final consonant immunity end up being motivated by distinct constraints, FinalC and Anchor, depending on whether they arise from final vowel deletion or processes applying internally. This analytical distinction seems questionable.

7

The Anchor constraint has a predecessor in McCarthy and Prince’s (1993) Align-R(stem, q), which, applied in the context of the early Parse-Fill approach to faithfulness in OT, has a faithfulness effect similar to that of Anchor in terms of protecting the final segment. This alignment approach is applied to the Cairene Arabic pattern by Wiltshire (1994) and extended to laryngeal contrasts by Lombardi (1995). It must be distinguished from later alignment constraints, discussed in the preceding section, which have no faithfulness effects.

Final Consonants

14

One difference between the Anchor (17a) and Faith-R (17b) approaches is that the former only targets the last segment, while the latter considers the entire final syllable. The last segment formulation cannot derive cases where clusters are maintained finally but eliminated internally (15b), as in Cairene Arabic. Consider the hypothetical example /arspont/ and the three constraints in (19). The only possible winners are the faithful [arspont], if NoDeletion outranks NoComplexCoda, and [arpot], under the opposite ranking. There is no ranking that yields the desired candidate [arpont].8 (19)

arspont ☞ a. .ars.pont. b. .ar.pon.

Anchor-R(GrWd) NoComplexCoda NoDeletion ** *

c. .ar.pont ☞ d. .ar.pot.

** *

* **

Since Krämer’s Faith-R considers the entire final syllable, it correctly derives the desired output [arpont]. However, by evaluating identity, and not only correspondence, between input and output, it also predicts that contrasts may be maintained only in final syllables, including in their onset. Could distinctive voicing, for instance, be found only in the onset of final syllables? Beckman (1998) describes patterns in which initial syllables accommodate more complex codas than non-initial ones. Languages with more complex onsets in final than in non-final syllables remain to be reported, however.9

4.3

Licensing parameters

Final consonant freedom of occurrence has been generated by licensing parameters that allow or even favor final consonants on a language-specific basis. Kaye (1990) proposes a Coda Licensing Principle, according to which all codas must be licensed by a following onset; Harris and Gussmann (2002) provide further arguments for this approach (see also Scheer 2008). It follows that wordfinal consonants cannot be codas; they are onsets to an empty-headed syllable. The occurrence of final consonants is independent from that of internal codas, and depends on whether languages allow final onsets to be licensed by an empty nucleus. A four-way typology emerges from the combination of two binary parameters: whether or not (internal) codas are permitted and final empty nuclei are licensed. This yields four categories of languages (20).

8

Broselow (2003) includes I-O Contiguity, which bans medial deletion and epenthesis, in her constraint set. This does not allow [arpont] to emerge as optimal, and it has the unexpected consequence of generating type (15f), whose status remains unclear. I-O Contiguity serves to derive languages that require words to end in vowels (15c), but this could also be accomplished with Align-R(PWd, V). 9 Faith-R also suffers from a problematic formulation. In the output [arspon], with deletion of the final [t], the last syllable [pon] is faithful to its corresponding string in the input. Without syllabification in the input, it is not clear how final deletion is penalized.

15 (20)

Marie-Hélène Côté a. b. c. d.

No codas, no final empty nuclei No codas, final empty nuclei Codas, no final empty nuclei Codas, final empty nuclei

Type Type Type Type

(15c) (15a) (15e) (15b)

Crucially, this typology excludes the other symmetrical pattern (15d), in which internal codas and final consonants obey the same constraints. This appears too restrictive, as argued by Piggott (1991). In response, Piggott (1999) proposes another parameter based on the notion of remote licensing. All segments must be licensed by a higher prosodic category, either directly by the syllable, or indirectly (remotely) by the PWd or a phrasal constituent. Piggott, unlike Kaye, allows final consonants to be either codas or onsets, depending on their segmental profile; final onsets, which escape coda restrictions, are always licensed remotely, final codas may be licensed by the syllable or a higher constituent, and vowels must be licensed directly by the syllable. Languages vary in whether remote licensing is excluded (all final segments are either vowels or codas), possible (final segments are vowels, codas, or onsets), or obligatory (final segments must be consonants). This last option, reminiscent of McCarthy’s (1993) FinalC, derives Yapese generalized apocope. Unlike Coda Licensing, Piggott’s parametric approach does not provide for languages of type (20c); it also predicts that final consonants that exceed the coda template display onset-like properties, which is not always the case, for example with final coronal obstruents in Germanic languages. An OT account similar in spirit to that of Piggott has been proposed by Spaelti (2002). According to his WeakEdge family of constraints, the right edge of a constituent should contain as little prosodic structure as possible. These constraints favor the attachment of final segments to constituents higher than the syllable in the prosodic hierarchy. Since only consonants may be so attached, WeakEdge establishes consonants as the preferred segment type in final position. Piggott’s and Spaelti’s proposals rely on the idea that constituents should end in a consonant, echoing the constraint Final-C mentioned above, and specifically in a non-coda consonant. Goad (2002) argues that final non-codas are indeed advantageous from a processing viewpoint. Final consonants that are not possible internal codas signal the right edge of words more clearly than final vowels or codas do, since they cannot appear syllable-finally inside words. Likewise, codas signal the right edge of syllables better than vowels do. This parsing argument needs to be tested; for now, two questions arise. First, if it is a desirable thing for words to end in onsets, one should expect to find more cases of generalized wordfinal vowel deletion or epenthesis of a non-coda consonant. As noted above, many languages actually require that words, but not syllables, end in a vowel. Second, if codas are the best indicators of the right edge of syllables, why are they considered marked in syllable typology?

4.4

Perception and adjacency to prosodic boundaries

The syllabic basis of consonant phonotactics has been questioned in the last decade or so (Steriade 1999a, 1999b; Blevins 2003), in particular by proponents of the "licensing by cue" approach, according to which the likelihood that a feature or segment occurs in a given context is a function of its relative perceptibility in that context (chapter 98: speech perception and phonology). Côté (2000) applies this idea to segmental immunity at the right edge, arguing that the additional licensing possibilities in peripheral positions are motivated by perceptual factors. More consonants are tolerated at edges, because their perceptibility is enhanced by a number of phonetic processes: lengthening, articulatory strengthening, and reduction of the amount of overlap with adjacent segments. The formal architecture is based on two constraint families that require consonants to be followed by a vowel (21a) or adjacent to a vowel (21b), contexts where they benefit from optimal transitional cues. But consonants with stronger internal or contextual cues are less dependent on vocalic transitions and vowel adjacency. This includes final consonants, which are subject to the more specific constraints in (21c) and (21d), where i ranges over the set of prosodic boundaries at the word level and above, including Ø for word-internal consonants (which are not adjacent to any boundary).

(21)

a. C → V      A consonant is followed by a vowel.
b. C ↔ V      A consonant is adjacent to a vowel.
c. C]i → V    A consonant followed by a prosodic boundary i is followed by a vowel.
d. C]i ↔ V    A consonant followed by a prosodic boundary i is adjacent to a vowel.

It is assumed that the higher the prosodic boundary a consonant is adjacent to, the more easily it surfaces without the support of an adjacent vowel; consonants not adjacent to any prosodic boundary are the weakest. This is expressed in the rankings in (22), which follow the three-way distinction between phrase-final, word-final, and (word-)internal consonants established in §2.2. Syllable well-formedness and extraprosodicity are irrelevant concepts in this framework, but C → V and C ↔ V obviously bear similarity to, but are not equivalent to, NoCoda and NoComplexCoda/Onset, respectively.

(22)

a. C]Ø → V >> C]PW → V >> C]Ph → V
b. C]Ø ↔ V >> C]PW ↔ V >> C]Ph ↔ V

This approach derives the Kamaiurá and Cairene Arabic patterns. In Kamaiurá, NoDeletion is ranked between C]Ø → V and C]PW → V; word-internal consonants have to be followed by a vowel, but word-final ones survive (23). It also directly accounts for cumulative immunity effects, as in Basque (10), with the rankings in (22). Note that C in these constraints may be restricted to specific categories or features (e.g. stops or [coronal] consonants). (23)

  /o-mo-kon-mo-kon/        C]Ø → V    NoDeletion    C]PW → V
     a.  omokonmokon          *!                        *
     b.  omokomoko                        **!
  ☞ c.  omokomokon                         *            *
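The constraint interaction in (23) can be made concrete with a small computational sketch. The fragment below is purely illustrative: the string-based constraint definitions, the candidate set, and the count-based proxy for NoDeletion are simplifying assumptions of this sketch, not part of Côté's formalism.

# A minimal, illustrative evaluator for tableau (23); the string-based
# constraint definitions below are simplifying assumptions, not Côté's.
VOWELS = set("aeiou")

def cs_internal(cand):
    # C]Ø -> V: word-internal consonants must be followed by a vowel.
    return sum(1 for i, seg in enumerate(cand[:-1])
               if seg not in VOWELS and cand[i + 1] not in VOWELS)

def cs_final(cand):
    # C]PW -> V: violated once if the word ends in a consonant.
    return 0 if cand[-1] in VOWELS else 1

def evaluate(inp, candidates):
    # Constraints applied in ranked order; lowest violation profile wins.
    ranking = [cs_internal,
               lambda c: len(inp) - len(c),   # NoDeletion (count-based proxy)
               cs_final]
    return min(candidates, key=lambda c: [con(c) for con in ranking])

print(evaluate("omokonmokon",
               ["omokonmokon", "omokomoko", "omokomokon"]))
# -> omokomokon, the winner in (23)

Ranked evaluation is modeled here as lexicographic comparison of violation vectors, which reproduces strict domination.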

Broselow (2003) interprets the existence of exceptionality effects involving the stem – a morphosyntactic constituent – as contradicting the prosodic basis of the perceptual account. Moreover, enhancement effects are strongest at the phrase level but tend to be weak or inconsistent word-finally (phrase-internally). This may appear at odds with final exceptionality effects being most often reported at the word level. One possibility is that word-final effects arise through a generalization of phrase-final effects, which stabilizes word forms across prosodic contexts (see Hyman 1977 and Gordon 2001 for similar ideas regarding stress). The Balantak stem-level case investigated by Broselow is consistent with this idea, since stems in this language may surface unsuffixed, in word- and phrase-final position. It remains to be seen whether there are stem-level cases that are incompatible with a phrase-final generalization account. In any case, if a prosodic/perceptual explanation is justified at some level, it would need to be enriched with other grammatical factors to account for the generalization effects.

Burzio (2007) pursues a related but complementary perceptual approach to final consonant immunity effects in English. As noted above, additional word-final consonants in English are most often coronals, and unstressed vowels preceding coronal consonants tend to reduce, unlike vowels preceding non-coronals (6f). Burzio argues that because coronals are unmarked ("pre-neutralized"), they need not be cued by a full vowel and may survive with the weaker cues provided by a reduced vowel. Non-coronals, especially obstruents, need the cues of a full vowel. In other words, the special status of coronals in English (and presumably other languages) stems from their weaker perceptual demands.

4.5

Morphology

Morphology is obviously involved in some of the segmental immunity effects in final position. Additional final consonants in English (and many other languages) correspond in large part to word-level morphemes (plural and 3rd person [s z], past [t d], ordinal [θ]), which are productively added to relevant lexical forms. Final consonants, then, are partly motivated by "the morphology," this idea being implemented in various ways, depending on one's view of phonology and its interaction with the morphosyntactic component (chapter 103: phonological sensitivity to morphological structure). Suffixes may be excluded from syllable well-formedness conditions (Selkirk 1982; Harris and Gussmann 2002) or added into morphological constituents separate from the syllable (Goldsmith 1990; Duanmu 2008).

The role of morphology may also extend beyond consonantal suffixes, as argued by Duanmu (2008). First, he includes in a final suffix constituent not only true suffixes but also affix-like consonants, which correspond in English to all occurrences of [t d s z θ] that exceed the syllable template. In bond, for instance, the final [d] is licensed by some analogical principle to the extent that it could potentially correspond to the past tense affix (of some verb bon). The bomb–bond contrast, with deletion and retention, respectively, of the final stop, relates here to the pseudo-suffixal status of [d], and not to its coronality, as others have argued. Second, any final consonant that is potentially followed by a vowel-initial suffix is supported by a paradigm uniformity or anti-allomorphy principle. For example, the [p] in help is protected by its prevocalic position in helping and helper; its stability in help ensures a uniform expression of the morpheme in all contexts. The exclusion from syllable constituency of all morphologically motivated consonants (plus cases of segmental fusion, not discussed here) allows Duanmu to maintain a simple and cross-linguistically invariant CVX syllable template.
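Duanmu's anti-allomorphy idea lends itself to a simple check. The sketch below is hypothetical (the function name and the orthographic vowel test are invented for illustration): a stem-final consonant counts as protected if some related form shows it in prevocalic position.

# Toy check of paradigm-uniformity protection (hypothetical names):
# the final consonant of a stem is protected if a related form
# realizes it before a vowel, as with helping/helper for help.
def final_c_protected(stem, related_forms):
    return any(form.startswith(stem)
               and len(form) > len(stem)
               and form[len(stem)] in "aeiou"   # stem-final C is prevocalic
               for form in related_forms)

print(final_c_protected("help", ["helping", "helper"]))  # True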


4.6


Summary and discussion

The proposals in §4.1–§4.5 can be compared along two empirical dimensions: (i) do they adequately account for all types of final immunity? (ii) are they compatible with other phonotactic patterns involving the internal–final relationship?

Concerning the first question, alignment constraints, due to their variety, are capable of deriving the full range of immunity effects; parameters such as Coda Licensing and Remote Licensing also account for additional consonantal slots, but their onset status appears at odds with the limited range of final consonants often tolerated beyond syllabic possibilities. Positional faithfulness struggles with final consonant immunity obtained through final vowel deletion; Anchor-based analyses also fail to protect final clusters by targeting only the last segment. Prosodic approaches raise the issue of immunity effects at the end of morphosyntactic constituents. Finally, it remains to be seen how plausibly morphology can embrace all cases of final consonant immunity.

Beyond final immunity effects, Coda and Remote Licensing are integrated into parametric systems that claim to account for the full typology of internal–final phonotactic patterns, but the former fails to provide for symmetric languages of type (15d), where final consonants display a typical coda profile, and the latter ignores languages that require words to end in a vowel (15e). Alignment constraints probably offer the most flexible framework and derive the full range of patterns in (15). In fact, the flexibility of alignment constraints is such that final immunity effects enjoy no special status: consonant sequences are as likely to be more complex internally as to be more complex finally. This position might be argued to lack restrictiveness or explanatory power or, conversely, better reflect the range of formally possible patterns, depending in part on the status of type (15f), with complex codas productively allowed only inside constituents. Approaches based on positional faithfulness, perception, or morphology make no specific claims with regard to phonotactic patterns other than final immunity, especially those involving more complex internal codas, which need to be derived by independent constraints or factors. The requirement that words end in vowels might be interpreted as a morphological constraint, but type (15f) would seem to be more challenging.

Progress in the analysis of final (and internal) immunity effects rests on a deeper understanding of the patterns in (15), how they arise diachronically, and what factors they are sensitive to.

5

Metrical invisibility

As noted in (2b), final exceptionality also manifests itself prosodically, final CVC syllables patterning like CV ones in stress assignment and vowel length alternations. While regularly noted, the metrical invisibility of final consonants has not given rise to the same analytical diversity as phonotactic immunity. Whether or not metrical invisibility and phonotactic immunity are amenable to a unified approach is also unclear: despite some attempts at a common analysis, there is evidence that the segmental and metrical manifestations of final exceptionality should be kept separate. §5.1 presents the relevant generalizations underlying metrical invisibility and its relationship with segmental immunity; §5.2 addresses its functional motivation.


5.1


Generalizations

In quantity-sensitive stress systems, syllable weight is normally determined by the segmental make-up of the rime. In some cases weight assignment also depends on the position of the rime in the word. Of interest here is the pattern where CVC syllables count as light in final position but as heavy elsewhere. See Hayes (1995: 57), Lunden (2006: 1), and Gordon et al. (2010: 142) for lists of languages in which final CVC patterns as light. Various dialects of Arabic illustrate this effect, among them Cairene (Hayes 1995: 67–71; see also chapter 124: word stress in arabic). Cairene has three syllable types: light CV, heavy CVC and CVV, and superheavy CVCC and CVVC, found only in final position. Stress falls on final superheavy syllables (24a), otherwise on heavy penults (24b), otherwise on the penult or antepenult, according to a complex algorithm not described here. This pattern reveals an equivalence between final CVCC/CVVC and penultimate CVC/CVV, which regularly attract stress, and between final CVC and penultimate CV, which do not. This is straightforwardly accounted for if word-final consonants are ignored in the computation of weight and stress.

(24)

a. [kaˈtabt]      'I wrote'
   [ha–ˈ–aːt]     'pilgrimages'
b. [kaˈtabta]     'you (masc sg) wrote'
   [haːˈÏaːni]    'these (masc dual)'
   [muˈdarris]    'teacher'
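The generalizations in (24) can also be stated procedurally. The sketch below is a hedged illustration, assuming rimes encoded as strings of "V" and "C" slots (a notation invented for this sketch, not Hayes's formalism); the residual penult/antepenult algorithm is deliberately left out, as in the text.

# A hedged sketch of the Cairene generalizations in (24).  Rimes are
# encoded as slot strings ("V" = vowel mora, "C" = coda consonant),
# an assumed notation for this sketch.
def moras(rime, word_final=False):
    if word_final and rime.endswith("C"):
        rime = rime[:-1]    # the word-final consonant is invisible to weight
    return len(rime)        # each remaining slot projects one mora

def stressed_syllable(rimes):
    """Index of the stressed syllable; None defers to the fuller
    penult/antepenult algorithm not described in the text."""
    last = len(rimes) - 1
    if moras(rimes[last], word_final=True) >= 2:         # superheavy CVCC/CVVC
        return last
    if len(rimes) >= 2 and moras(rimes[last - 1]) >= 2:  # heavy penult
        return last - 1
    return None

print(stressed_syllable(["V", "VCC"]))       # 1: [ka'tabt], final stress
print(stressed_syllable(["V", "VC", "V"]))   # 1: [ka'tabta], penult stress
print(stressed_syllable(["V", "VC", "VC"]))  # 1: [mu'darris], final CVC is light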

As shown in (9) and (24), final consonants are simultaneously invisible to syllabic restrictions and stress assignment, which makes Cairene Arabic a textbook example of final consonant exceptionality, regularly discussed since Broselow (1976) and McCarthy (1979). Such data make it tempting to interpret segmental immunity and metrical invisibility as two effects of a single phenomenon. Yet the two must be distinguished. In Cairene, stress assignment applies at the word level, but the CVC syllable template is enforced at the phrasal level, after resyllabification of word-final consonants with a following vowel, as in (25a). Crucially, stress remains on the second syllable in [ʃiˈribt] and the first in [ˈkatab] (25b), even though on the surface the second syllable is assumed to be CVC in both cases.

(25)

a. /ʃiribt ahwa/      [ʃi.ˈrib.ˈtah.wa]       'you drank coffee' (Selkirk 1981: 222)
b. /katab lilmalik/   [ˈka.tab.lil.ˈma.lik]   'he wrote to the king' (Broselow 1976: 16)

Other evidence of the independence of segmental immunity and metrical invisibility can be found in the stress pattern of Hindi-Urdu (Hayes 1980; Hussain 1997). The relevant facts resemble those of Cairene Arabic, with stress falling on final superheavy syllables, otherwise on the rightmost heavy. Unlike Cairene, however, superheavy syllables are not restricted to the final position. Final consonants are metrically invisible, since CVC attracts stress only in nonfinal syllables. But segmental immunity is not involved, since complex codas occur in all positions.


Like stress, vowel length is sensitive to syllable shape, vowel shortening typically occurring in closed syllables and lengthening in open syllables. Again, final consonants appear to be ignored in some languages, with lengthening applying in final CVC syllables or shortening applying only in CVCC ones. Icelandic (Gussmann 2002) regularly stresses the initial syllable of the word. The stressed vowel lengthens in open syllables (26a) and in monosyllables closed by only one consonant (26b), but no lengthening is observed in non-final closed syllables (26c) and in monosyllables closed by two or more consonants (26d). This is a straightforward case of final consonant invisibility: final CVC patterns like non-final CV, and final CVCC like non-final CVC. Similar length alternations are observed in Swiss German (Spaelti 2002) and Menominee (Milligan 2005). (26)

a. [ˈpuː]      'estate'
   [ˈstaːra]   'stare'
b. [ˈpruːn]    'edge'
   [ˈhaːkʰ]    'roof'
c. [ˈsenta]    'send'
   [ˈflaska]   'bottle'
d. [ˈtʰjalt]   'tent'
   [ˈrixs]     'rich (gen sg masc)'
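A minimal sketch of the lengthening pattern in (26), assuming pre-syllabified input and a hypothetical Syllable record; the only substantive move is discounting exactly one word-final consonant.

# A minimal sketch of the Icelandic pattern in (26); the Syllable
# record and the syllabified input are assumptions of this sketch.
from dataclasses import dataclass

@dataclass
class Syllable:
    onset: str
    nucleus: str
    coda: str

def stressed_vowel_lengthens(word):
    first = word[0]                   # Icelandic stresses the initial syllable
    coda_count = len(first.coda)
    if len(word) == 1 and coda_count > 0:
        coda_count -= 1               # the word-final consonant is invisible
    return coda_count == 0            # lengthen iff (effectively) open

print(stressed_vowel_lengthens([Syllable("p", "u", "")]))      # True  (26a)
print(stressed_vowel_lengthens([Syllable("pr", "u", "n")]))    # True  (26b)
print(stressed_vowel_lengthens([Syllable("s", "e", "n"),
                                Syllable("t", "a", "")]))      # False (26c)
print(stressed_vowel_lengthens([Syllable("tʰj", "a", "lt")]))  # False (26d)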

English shows both stress and length effects of final metrical invisibility. In verbs, final CVCC attracts stress (uˈsurp, torˈment), but CVC does not (ˈedit, deˈvelop). The stress-attracting power of internal CVC is, however, visible in nouns (aˈgenda, aˈmalgam) (Hayes 1982). With respect to length, long vowels in final CVVC syllables regularly correspond to short vowels in final CVCC or non-final CVC, after the addition of a consonantal or syllable-size suffix (keep–kept, wide–width, five–fifth–fifty, wise–wisdom, intervene–intervention). This suggests that shortening applies in closed syllables (internal CVC and final CVCC) but spares final CVC, treated as open by virtue of final consonant invisibility.

The representations specific to final consonants discussed in §3 have also been used to derive their metrical invisibility, in particular extraprosodicity (13e) and non-moraic coda consonants (13f). The latter directly accounts for invisibility in the context of stress assignment, if stress depends on syllable weight and weight on moraic structure. Extraprosodicity for metrical purposes is generally kept distinct from phonotactically motivated extrametricality (e.g. Hayes 1995: 106), echoing the remarks above on the non-equivalence between metrically invisible and segmentally immune consonants. Iverson (1990) in fact argues that extraprosodicity should be restricted to stress and excluded from the segmental domain, with cases of segmental immunity and vowel length alternations re-analyzed as involving some of the other devices mentioned in §3: appendices, empty-headed syllables, and sequential cluster constraints (see also Lamontagne 1993).

5.2

Motivation

The lightness of final CVC syllables is motivated by the avoidance of final stress, embodied in OT by the constraint Non-Finality (Prince and Smolensky 2004), which excludes final stressed syllables or head feet (see chapter 43: extrametricality and non-finality). Non-Finality, in conjunction with other constraints, correctly derives stresslessness on final CVC and stress on final CVCC and CVVC in Arabic dialects (Rosenthall and van der Hulst 1999). If Non-Finality generates the facts, the question remains why final stress should be avoided or why final CVC is treated as light.

A number of proposals functionally related to the special status of final CVC have recently been put forward. Ahn (2000), Lunden (2006), Hyde (2009), and Gordon et al. (2010) offer explanations that, although distinct, are all related to final lengthening.10 Ahn suggests that the increased vowel duration resulting from final lengthening jeopardizes the contrast between short and long vowels by making final short vowels comparable in duration to non-final long vowels. Stressing the final vowel would weaken the length contrast even further. Lunden and Gordon et al. develop a duration-based account of syllable weight, according to which syllables count as heavy (and attract stress) if their rime is sufficiently longer than that of light syllables. The relative difference in duration between internal CVC and CV is sufficient for CVC to be categorized as heavy, but this may not be the case finally, where final lengthening significantly reduces the durational ratio of CVC to CV. Gordon et al. (2010) also reveal that the languages that asymmetrically treat final CVC as light lack vowel length contrasts in final syllables. It is proposed that phonetic final lengthening tends to be more pronounced when no length contrasts need to be maintained, making it more likely that CVC will be interpreted as light.

Hyde (2009) focuses on certain properties of final lengthening rather than on duration itself. He notes that, unlike initial lengthening, final lengthening is typically associated with tempo deceleration and declining intensity. These characteristics make final position less compatible with stress, either because diminished intensity makes stress more difficult to perceive, or because the intensity that typically accompanies stress makes it more difficult to decelerate. See Hyman (1977) and Gordon (2001) for related ideas. Gordon invokes intonational factors, final stress being avoided because it would result in the high tone associated with stress and the low final boundary tone being realized on the same syllable. Note that these different factors – duration, final lengthening, length contrasts, tones, deceleration, and intensity – are potentially complementary rather than contradictory in explaining the distinction between final stressless CVC and stressed CVCC/CVVC.

10 Final lengthening may also relate to length alternations that treat final CVC as open, as in Icelandic (26). I leave this issue open.
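To see the arithmetic behind the Lunden/Gordon et al. duration-based account above, consider the toy computation below; the durations and the threshold are invented for illustration, not measured values.

# Toy arithmetic for the duration-based weight account (Lunden 2006;
# Gordon et al. 2010).  All durations (ms) and the threshold are
# invented for illustration; they are not measured values.
RATIO_THRESHOLD = 1.5   # hypothetical cutoff for "sufficiently longer"

def counts_as_heavy(cvc_rime_ms, cv_rime_ms):
    # A CVC rime is heavy if it sufficiently exceeds the CV baseline.
    return cvc_rime_ms / cv_rime_ms >= RATIO_THRESHOLD

# Word-internally, the CVC rime comfortably exceeds the CV rime:
print(counts_as_heavy(180, 100))   # True  -> internal CVC is heavy
# Word-finally, lengthening inflates both rimes, compressing the ratio:
print(counts_as_heavy(230, 180))   # False -> final CVC patterns as light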

6

Conclusions

Final consonants are implicated in a multiplicity of data and analytical approaches, involving a variety of representations, constraints, and parameters. Among the relevant empirical domains, consonant phonotactics has largely dominated the debates, and the stresslessness of final CVC has drawn some attention, while vowel length alternations have been relatively neglected. Analytically, no unified conception of final consonant exceptionality has really emerged, despite attempts based on extraprosodicity, which have been challenged by evidence for the independence of metrical invisibility and segmental immunity. In a changing theoretical landscape, discussions have tended to shift over time from issues of representation to motivations, ranging from abstract parameters (coda or remote licensing) and OT constraints (alignment, positional faithfulness) to the role of morphology and functional explanations (perceptual factors, final lengthening). These approaches are probably to some extent complementary.

Focusing on phonotactic patterns, several issues remain to be clarified, concerning the typology of final immunity effects and their relationship to internal immunity effects, whereby more consonants are allowed in internal codas than in final position, and the prosodic or morphosyntactic nature of the constituents involved (phrases, words, stems). To what extent can patterns that require constituents to end in a consonant be analyzed along the same lines as those that merely allow additional consonants? What is the status of patterns displaying more complex codas inside constituents? Should internal and final immunity effects be accounted for in a unified and symmetrical fashion or do they involve distinct factors? One difficulty in answering such questions stems from the fact that positional asymmetries in the complexity of consonant sequences may relate to many factors other than the internal–final contrast, including stressed vs. unstressed syllables, initial vs. non-initial syllables, morpheme boundaries, and coda–onset linking.

ACKNOWLEDGMENTS

Thanks to two anonymous reviewers and the Companion editors for helpful comments. I also acknowledge the financial support of the Social Sciences and Humanities Research Council of Canada.

REFERENCES

Ahn, Mee-Jin. 2000. Phonetic and functional bases of syllable weight for stress assignment. Ph.D. dissertation, University of Illinois, Urbana.
Auger, Julie. 2000. Phonology, variation, and prosodic structure: Word-final epenthesis in Vimeu Picard. In Josep M. Fontana, Louise McNally, M. Teresa Turell & Enric Vallduví (eds.) Proceedings of the 1st International Conference on Language Variation in Europe, 14–24. Barcelona: Universitat Pompeu Fabra.
Beckman, Jill N. 1998. Positional faithfulness. Ph.D. dissertation, University of Massachusetts, Amherst.
Blevins, Juliette. 1994. A phonological and morphological reanalysis of the Maori passive. Te Reo 37. 29–53.
Blevins, Juliette. 1995. The syllable in phonological theory. In John A. Goldsmith (ed.) The handbook of phonological theory, 206–244. Cambridge, MA & Oxford: Blackwell.
Blevins, Juliette. 2003. The independent nature of phonotactic constraints: An alternative to syllable-based approaches. In Féry & van de Vijver (2003), 375–403.
Bloomfield, Leonard. 1962. The Menomini language. New Haven: Yale University Press.
Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press.
Borowsky, Toni. 1986. Topics in the lexical phonology of English. Ph.D. dissertation, University of Massachusetts, Amherst.
Broselow, Ellen. 1976. The phonology of Egyptian Arabic. Ph.D. dissertation, University of Massachusetts, Amherst.
Broselow, Ellen. 1980. Syllable structure in two Arabic dialects. Studies in the Linguistic Sciences 10. 13–24.
Broselow, Ellen. 2003. Marginal phonology: Phonotactics on the edge. The Linguistic Review 20. 159–193.


Buckley, Eugene. 1998. Alignment in Manam stress. Linguistic Inquiry 29. 475–496.
Burzio, Luigi. 1994. Principles of English stress. Cambridge: Cambridge University Press.
Burzio, Luigi. 2007. Phonology and phonetics of English stress and vowel reduction. Language Sciences 29. 154–176.
Bye, Patrik & Paul de Lacy. 2000. Edge asymmetries in phonology and morphology. Papers from the Annual Meeting of the North East Linguistic Society 30. 121–135.
Charette, Monik. 1984. The appendix in parametric phonology. Studies in African Linguistics. Supplement 9. 49–53.
Cho, Young-mee Yu & Tracy Holloway King. 2003. Semisyllables and universal syllabification. In Féry & van de Vijver (2003), 183–212.
Clements, G. N. 1990. The role of the sonority cycle in core syllabification. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 283–333. Cambridge: Cambridge University Press.
Clements, G. N. 1997. Berber syllabification: Derivations or constraints? In Roca (1997), 289–330.
Côté, Marie-Hélène. 2000. Consonant cluster phonotactics: A perceptual approach. Ph.D. dissertation, MIT.
Côté, Marie-Hélène. 2004. Syntagmatic distinctness in consonant deletion. Phonology 21. 1–41.
Dell, François. 1995. Consonant clusters and phonological syllables in French. Lingua 95. 5–26.
Derbyshire, Desmond C. 1979. Hixkaryana. Amsterdam: North-Holland.
Duanmu, San. 2008. Syllable structure: The limits of variation. Oxford: Oxford University Press.
Evans, Nicholas D. 1995. A grammar of Kayardild. Berlin: Mouton de Gruyter.
Everett, Daniel L. & Lucy Seki. 1985. Reduplication and CV skeleta in Kamaiurá. Linguistic Inquiry 16. 326–330.
Féry, Caroline & Ruben van de Vijver (eds.) 2003. The syllable in Optimality Theory. Cambridge: Cambridge University Press.
Fudge, Erik C. 1969. Syllables. Journal of Linguistics 5. 253–286.
Fujimura, Osamu & Julie B. Lovins. 1982. Syllables as concatenative phonetic units. Bloomington: Indiana University Linguistics Club.
Goad, Heather. 2002. Markedness in right-edge syllabification: Parallels across populations. Canadian Journal of Linguistics 47. 151–186.
Goad, Heather & Kathleen Brannen. 2003. Phonetic evidence for phonological structure in syllabification. In Jeroen van de Weijer, Vincent J. van Heuven & Harry van der Hulst (eds.) The phonological spectrum, vol. 2: Suprasegmental structure, 3–30. Amsterdam & Philadelphia: John Benjamins.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Gordon, Matthew. 2001. The tonal basis of weight criteria in final position. Papers from the Annual Regional Meeting, Chicago Linguistic Society 36. 141–156.
Gordon, Matthew, Carmen Jany, Carlos Nash & Nobutaka Takara. 2010. Syllable structure and extrametricality: A typological and phonetic study. Studies in Language 34. 131–166.
Gussmann, Edmund. 2002. Phonology: Analysis and theory. Cambridge: Cambridge University Press.
Hall, T. A. 2002. Against extrasyllabic consonants in German and English. Phonology 19. 33–75.
Halle, Morris & Jean-Roger Vergnaud. 1980. Three-dimensional phonology. Journal of Linguistic Research 1. 83–105.
Harris, James W. 1983. Syllable structure and stress in Spanish: A nonlinear analysis. Cambridge, MA: MIT Press.
Harris, John & Edmund Gussmann. 2002. Word-final onsets. UCL Working Papers in Linguistics 14. 1–42.
Hay, Jennifer & Andrea Sudbury. 2005. How rhoticity became /r/-sandhi. Language 81. 799–823.


Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT. Published 1985, New York: Garland.
Hayes, Bruce. 1982. Extrametricality and English stress. Linguistic Inquiry 13. 227–276.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hussain, Sarmad. 1997. Phonetic correlates of lexical stress in Urdu. Ph.D. dissertation, Northwestern University.
Hyde, Brett. 2009. The rhythmic foundations of Initial Gridmark and Nonfinality. Papers from the Annual Meeting of the North East Linguistic Society 38(1). 397–410.
Hyman, Larry M. 1977. On the nature of linguistic stress. In Larry M. Hyman (ed.) Studies in stress and accent, 37–82. Los Angeles: Department of Linguistics, University of Southern California.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts, Amherst. Published 1988, New York: Garland.
Iverson, Gregory K. 1990. The stipulation of extraprosodicity in syllabic phonology. Language Research 26. 515–552.
Kaye, Jonathan. 1990. "Coda" licensing. Phonology 7. 301–330.
Kiparsky, Paul. 2003. Syllables and moras in Arabic. In Féry & van de Vijver (2003), 147–182.
Kraehenmann, Astrid. 2001. Swiss German stops: Geminates all over the word. Phonology 18. 109–145.
Krämer, Martin. 2003. What is wrong with the right side? Edge (a)symmetries in phonology and morphology. Unpublished ms., University of Tromsø (ROA-576).
Lamontagne, Greg. 1993. Syllabification and consonant cooccurrence conditions. Ph.D. dissertation, University of Massachusetts, Amherst.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8. 249–336.
Lombardi, Linda. 1995. Laryngeal neutralization and Alignment. University of Massachusetts Occasional Papers in Linguistics 18. 225–247.
Lombardi, Linda. 2002. Coronal epenthesis and markedness. Phonology 19. 219–251.
Lunden, Anya. 2006. Weight, final lengthening and stress: A phonetic and phonological case study of Norwegian. Ph.D. dissertation, University of California, Santa Cruz (ROA-833).
McCarthy, John J. 1979. On stress and syllabification. Linguistic Inquiry 10. 443–465.
McCarthy, John J. 1993. A case of surface constraint violation. Canadian Journal of Linguistics 38. 169–195.
McCarthy, John J. & Alan Prince. 1993. Generalized alignment. Yearbook of Morphology 1993. 79–153.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
Milligan, Marianne. 2005. Menominee prosodic structure. Ph.D. dissertation, University of Wisconsin, Madison.
Mithun, Marianne & Hasan Basri. 1986. The phonology of Selayarese. Oceanic Linguistics 25. 210–254.
Mohanan, K. P. 1982. Lexical Phonology. Ph.D. dissertation, MIT. Distributed by Indiana University Linguistics Club.
Nair, Rami. 1999. Syllables and word-edges. Ph.D. dissertation, Northwestern University.
Nash, David. 1980. Topics in Warlpiri grammar. Ph.D. dissertation, MIT. Published 1986, New York: Garland.
Nelson, Nicole. 2003. Asymmetric anchoring. Ph.D. dissertation, Rutgers University.
Paradis, Carole & Jean-François Prunet. 1993. A note on velar nasals: The case of Uradhi. Canadian Journal of Linguistics 38. 425–439.
Piggott, Glyne L. 1991. Apocope and the licensing of empty-headed syllables. The Linguistic Review 8. 287–318.


Piggott, Glyne L. 1999. At the right edge of words. The Linguistic Review 16. 143–185.
Prince, Alan & Paul Smolensky. 2004. Optimality Theory: Constraint interaction in generative grammar. Malden, MA & Oxford: Blackwell.
Rialland, Annie. 1994. The phonology and phonetics of extrasyllabicity in French. In Patricia Keating (ed.) Phonological structure and phonetic form, 136–159. Cambridge: Cambridge University Press.
Rice, Keren. 1990. Predicting rule domains in the phrasal phonology. In Sharon Inkelas & Draga Zec (eds.) The syntax–phonology connection, 289–312. Chicago: University of Chicago Press.
Rice, Keren. 2003. On the syllabification of right-edge consonants: Evidence from Ahtna (Athapaskan). In Stefan Ploch (ed.) Living on the edge, 427–448. Berlin & New York: Mouton de Gruyter.
Roca, Iggy (ed.) 1997. Derivations and constraints in phonology. Oxford: Clarendon Press.
Rosenthall, Sam & Harry van der Hulst. 1999. Weight-by-position by position. Natural Language and Linguistic Theory 17. 499–540.
Rubach, Jerzy. 1997. Extrasyllabic consonants in Polish: Derivational Optimality Theory. In Roca (1997), 551–581.
Rubach, Jerzy & Geert Booij. 1990. Edge of constituent effects in Polish. Natural Language and Linguistic Theory 8. 427–463.
Scheer, Tobias. 2008. Why the prosodic hierarchy is a diacritic and why the interface must be direct. In Jutta M. Hartman, Veronika Hegedüs & Henk van Riemsdijk (eds.) Sounds of silence: Empty elements in syntax and phonology, 145–192. Amsterdam: Elsevier.
Selkirk, Elisabeth. 1981. Epenthesis and degenerate syllables in Cairene Arabic. MIT Working Papers in Linguistics 3. 209–232.
Selkirk, Elisabeth. 1982. The syllable. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 337–383. Dordrecht: Foris.
Sherer, Timothy. 1994. Prosodic phonotactics. Ph.D. dissertation, University of Massachusetts, Amherst.
Spaelti, Philip. 2002. Weak edges and final geminates in Swiss German. Unpublished ms., University of California, Santa Cruz (ROA-18).
Steriade, Donca. 1982. Greek prosodies and the nature of syllabification. Ph.D. dissertation, MIT.
Steriade, Donca. 1999a. Alternatives to syllable-based accounts of consonantal phonotactics. In Osamu Fujimura, Brian D. Joseph & Bohumil Palek (eds.) Item order in language and speech, 205–245. Prague: Karolinum Press.
Steriade, Donca. 1999b. Phonetics in phonology: The case of laryngeal neutralization. UCLA Working Papers in Linguistics 2: Papers in Phonology 3. 25–146.
Supple, Julia & Celia M. Douglass. 1949. Tojolabal (Mayan): Phonemes and verb morphology. International Journal of American Linguistics 15. 168–174.
Trigo, Loren. 1988. On the phonological behavior and derivation of nasal glides. Ph.D. dissertation, MIT.
Vennemann, Theo. 1972. Rule inversion. Lingua 29. 209–242.
Wiese, Richard. 1996. The phonology of German. Oxford: Clarendon Press.
Wiltshire, Caroline. 1994. Alignment in Cairene Arabic. Proceedings of the West Coast Conference on Formal Linguistics 13. 138–153.
Wiltshire, Caroline. 1998. Extending ALIGN constraints to new domains. Linguistics 36. 423–467.
Wiltshire, Caroline. 2003. Beyond codas: Word and phrase-final alignment. In Féry & van de Vijver (2003), 254–268.
Yip, Moira. 1991. Coronals, consonant clusters, and the coda condition. In Carole Paradis & Jean-François Prunet (eds.) The special status of coronals: Internal and external evidence, 61–78. San Diego: Academic Press.
Zec, Draga. 2007. The syllable. In Paul de Lacy (ed.) The Cambridge handbook of phonology, 161–194. Cambridge: Cambridge University Press.

37

Geminates

Stuart Davis

1

Introduction

The term "geminate" in phonology normally refers to a long or "doubled" consonant that contrasts phonemically with its shorter or "singleton" counterpart (see also chapter 47: initial geminates). Such contrasts are found in languages like Japanese and Italian, as exemplified by the minimal pairs in (1) and (2), respectively.1 Languages such as English and Spanish do not have geminates.

(1) Japanese geminate contrast (Tsujimura 2007)2
    a. [saka]   'hill'
    b. [sakka]  'author'

(2) Italian geminate contrast
    a. [fato]   'fate'
    b. [fatto]  'fact'

The issue of the phonological representation of geminates has engendered much controversy over the past thirty years. The main issue revolves around how to distinguish formally a geminate consonant from its singleton counterpart in a way that captures the cross-linguistic phonological patterning of geminate consonants. The featural representation of geminate consonants posited in Chomsky and Halle (1968) as being a single consonant possessing the distinctive feature [+long] has long been considered insufficient, since, as noted by researchers such as Leben (1980), long consonants can behave like a sequence of two consonants for certain phenomena. Leben posited an autosegmental representation of geminates in which a single phoneme is linked to two slots on a skeletal tier that encodes the prosody of the word. This skeletal tier is also referred to as a CV-tier, an X-tier, or a length tier, depending on the specific conception of the researcher. Important earlier works that incorporate a CV-tier include McCarthy (1979, 1981), Halle and Vergnaud (1980), Clements and Keyser (1983) and Hayes (1986), while Levin (1985) posited that the tier consisted of X-slots (see chapter 54: the skeleton). Geminate representation on this view is exemplified by the geminate [kk] of the Japanese word in (1b), as is illustrated in (3).

1 Languages with geminates vary considerably with respect to the durational difference between the geminate and its singleton counterpart. Idemaru and Guion (2008) report a 3:1 ratio in the duration of geminates to singletons in Japanese but only a 1.8:1 ratio for Italian. They further note that there may be other phonetic cues to geminates besides consonantal duration. These include pitch and intensity differences that may provide secondary acoustic cues to a geminate. However, this chapter will not focus on the phonetic properties of geminates, nor on the issue of which types of consonants are more likely to be geminated (but see Pycha 2007, 2009 and Kawahara 2007 for discussion on these issues). Instead, this article will focus on the phonological behavior of geminates and the matter of their representation in phonology.
2 In this chapter, geminate consonants are transcribed by a sequence of two identical letters; long vowels are represented either as a sequence of two identical vowel symbols or with the IPA length mark.

(3)

a. CV-tier representation

      C   V   C   C   V
      |   |    \ /    |
      s   a     k     a

b. X-tier representation

      X   X   X   X   X
      |   |    \ /    |
      s   a     k     a

As seen in (3), a geminate consonant has one set of features indicated by the single consonant “k” on the phoneme (or melody) tier, whereas it is linked to two slots on a prosodic tier. In (4), we make clear the distinction between a geminate and a singleton using an X-tier that encodes prosody. (4)

Prosodic length analysis of geminates

a.   X   X          b.   X
      \ /                |
       k                 k
   (geminate)        (singleton)

While the proposals for the representation of geminates in (4) go back thirty years, this representation is specifically argued for by Ringen and Vago (2010), who refer to (4) as the segmental length analysis of geminates. A different representation of geminates from that in (4) is the two-root node analysis of geminates posited by Selkirk (1990) shown in (5). The root node in a feature-geometric framework indicates the major class features of a sound (McCarthy 1988) and it dominates the rest of the specified features. Every phoneme has a root node, but a geminate under this view has two root nodes (RN = root node, c = consonant). (5)

Two-root node analysis of geminates

a.   RN  RN         b.   RN
      \  /               |
       c                 c
   (geminate)        (singleton)

There are at least two main differences between the two-root node analysis of geminates in (5) and the segmental length analysis in (4). First, unlike the X-slots in (4), a root node is not considered to be a prosodic unit. Second, the two-root node analysis can more readily capture certain phenomena whereby a single geminate splits into two phonemes, as in the case of Icelandic preaspiration: for instance, underlying /kappi/ 'hero' is realized as [kahpi]. (See Selkirk 1990 for a detailed discussion on how the two-root node theory captures this process, but also Keer 1998 for an optimality-theoretic analysis of Icelandic preaspiration that argues against the two-root node analysis.)

Probably the standard view of geminate representation in current phonological work is the moraic representation of geminates posited by Hayes (1989) and argued for in Davis (1994, 1999a, 2003), as well as by Topintzi (2008) (see also chapter 55: onsets). On this view, geminates are represented as underlyingly moraic or heavy, as shown in (6) (where UR = underlying representation, and μ = mora): a geminate does not have double linking, be it to two slots on the prosodic tier, as in (4a), or to two root nodes, as in (5a).

(6)

Moraic (weight) representation of geminates (Hayes 1989)

a. Geminate in UR        b. Single consonant in UR

       μ
       |
       c (geminate)           c (singleton)

This inherent weight approach to geminates is couched within the theory of moraic phonology as developed in Hayes (1989), which characterizes the prosodic tier as being moraic rather than segmental, as in (3). Specifically, in Hayes's theory of moraic phonology, a short vowel is underlyingly monomoraic while a long vowel is bimoraic; a geminate consonant differs from a short consonant in that the former is underlyingly moraic while the latter is non-moraic. Sample moraic representations are given in (7), where (7a) shows a short vowel, (7b) a long vowel, (7c) a singleton consonant and (7d) a geminate. We refer to the representation in (7d) as the weight analysis of geminates.

(7) Underlying moraic weight representation (Hayes 1989)

    a.  μ            b.  μ   μ          c.               d.  μ
        |                 \ /                                |
        a  = /a/           a  = /aː/        t  = /t/         t  = /tt/
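The competing representations can be caricatured as simple records. In the sketch below (an assumption-laden toy, not Hayes's or Leben's formalism), autosegmental linking is flattened into counts: timing slots for the length view in (4), moras for the weight view in (7).

# A toy contrast of the two representations; linking is flattened into
# counts, an assumption of this sketch rather than either theory's claim.
from dataclasses import dataclass

@dataclass
class LengthSeg:        # length view (4): melody linked to timing slots
    melody: str
    x_slots: int        # geminate = 2, singleton = 1

@dataclass
class WeightSeg:        # weight view (7): melody linked to moras
    melody: str
    moras: int          # geminate = 1, singleton = 0

geminate_k = LengthSeg("k", 2)    # (4a)
singleton_k = LengthSeg("k", 1)   # (4b)
geminate_t = WeightSeg("t", 1)    # (7d) /tt/
singleton_t = WeightSeg("t", 0)   # (7c) /t/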

The moraic weight representation of geminates in (7d), where a single phoneme is linked underlyingly to a mora on the prosodic tier, is quite different from the length representation shown in (3), in which a single phoneme is linked to two C-slots (or X-slots) on the prosodic tier. These two different views of geminate representation make different predictions with regard to the patterning of geminates in phonology. For example, as noted by Ringen and Vago (2010), if epenthesis is triggered by a word-final consonant cluster (i.e. a word ending in two C-slots), epenthesis would be predicted to occur in a word that ends in a final geminate since the word would end in two C-slots under the geminate representation in (3a). Ringen and Vago discuss Hungarian as a language with this epenthesis pattern. On the other hand, if a geminate is represented as moraic, as in (6a) and (7d), epenthesis might not be predicted to occur with a word ending in a geminate, since the consonantal length of a geminate is not segmentally encoded. That is, there would not be two C-slots or two consonantal elements at the end of the word to trigger the epenthesis. Ringen and Vago point out that the Hungarian epenthesis pattern poses a problem for the moraic view. Further, given the weight analysis of geminates in (7d), geminate consonants are predicted to play a role in processes that are sensitive to syllable weight even when singleton (coda) consonants do not. Much of the recent research on geminates has focused on whether geminates display weight properties that are independent of other consonants. This will be discussed shortly.

Over the past twenty years, a wide variety of phonological evidence has been brought to bear on the correct representation of geminates. The issue is still controversial.3 All three views of geminate representation presented in this section, namely the prosodic length view in (4), the two-root node view in (5) and the moraic weight view in (6), have been argued for on the basis of the phonological patterning of geminates. Some composite views have even been proposed that combine aspects of the above representations, such as those of Schmidt (1992), Hume et al. (1997) and Curtis (2003). In §2 we will present specific evidence from a variety of phenomena to argue for the inherent weight representation of geminates. In §3, we will examine the behavior of geminates with respect to stress processes, cross-linguistically. In these sections, we will try to maintain a consistent view for the weight analysis in (6a) even when the data presented seem problematic for such a view. In §4 we will reconsider the representational issue and suggest that a composite view of the representation of geminates under a constraint-based approach can account for the patterning of geminates in the world's languages.

3 The controversy over geminates has fostered a number of dissertations with a focus on the phonology of geminates. Some of the important ones include Sherer (1994), Ham (1998), Keer (1999), Morén (1999), Kraehenmann (2001), Muller (2001), Curtis (2003), and Topintzi (2006). Although space does not allow me to discuss the wide variety of interesting issues and proposals that are raised in these dissertations and the different positions that are taken, some issues raised in these dissertations will be brought up in the course of this chapter.

2

The weight analysis of geminates

The underlying weight analysis of geminate consonants, as proposed in Hayes (1989), views a geminate consonant as being underlyingly moraic, as shown in (6a), whereas a non-geminate consonant is underlyingly non-moraic, as in (6b). The weight representation of geminates in (6a) has a number of implications, which will be discussed in this section. One such implication is that if geminates are inherently moraic, they should count as moraic in considering minimal word effects: that is, the cross-linguistically common requirement that content words be at least bimoraic. In §2.1 we show that this is the case for Trukese. A specific structural aspect of the weight representation in (6a) is that geminates do not entail a double linking to two C-slots as in the length representation. This implies that there should be cases in which geminates do not pattern with a sequence of consonants. §2.2 discusses cases of the asymmetrical patterning of geminates and consonant sequences. A third implication that emerges from the weight representation in (6a) is the prediction that there should be languages that treat syllables closed by a geminate (CVG) as heavy but do not otherwise treat syllables closed by a (coda) consonant (CVC) as heavy. In §2.3, we will provide evidence for this prediction by discussing languages that avoid long vowels in syllables closed by a geminate (CVVG), but do not generally avoid long vowels in closed syllables (CVVC). We hold off until §3 the discussion of geminate behavior in weight-sensitive stress systems.

Trukese initial geminates

One type of evidence for the underlying moraic nature of geminates as in (7d) comes from the bimoraic minimal word requirement in Trukese (also called Chuukese) and the behavior of word-initial geminates with respect to it. Although word-initial geminates are rare, they are attested in a number of languages. (Indeed, the dissertations of Muller 2001 and Topintzi 2006, 2010 are exclusively on initial geminates; see also chapter 47: initial geminates.)4 Muller (2001), whose study incorporates acoustic analyses of word-initial geminates in a variety of languages, including Trukese, concludes that initial geminates are moraic in some languages but not in others, while Topintzi (2006, 2008, 2010), focusing on languages where initial geminates pattern as moraic, argues that such geminates constitute moraic onsets, thus providing examples in which onsets carry weight.5 Trukese provides a clear example of a language where a word-initial geminate patterns as moraic. Consider the data in (8) and (9), which reflect a minimal word constraint on Trukese nouns. The data here are cited from Davis (1999b) and Davis and Torretta (1998), and are mainly taken from Dyen (1965) and Goodenough and Sugita (1980). The relevance of Trukese geminates for moraic phonology has previously been observed by Hart (1991) and Churchyard (1991). 4

It is clear from typological surveys of geminate consonants such as Thurgood (1993) and from the discussion in Pajak (2009) that geminates are most commonly found in intervocalic position and least commonly found when not adjacent to any vowel (e.g. between two consonants). Languages that allow for geminates that are only adjacent to one vowel (e.g. word-initial or word-final geminates), although not common, are not as rare as languages that allow for geminates to occur not adjacent to any vowel. As noted by Pajak (2009), the typological facts correspond to perceptual saliency in that the contrast between a singleton and a geminate consonant is most perceptually salient in intervocalic position and least salient in a position not adjacent to any vowel. 5 Following a suggestion in Hayes (1989), Davis (1999b) proposes that word-initial geminates are moraic but that the mora is not part of the syllable onset. His representation is in (i), while Topintzi’s moraic onset representation is given in (ii) (where the vowel of the syllable is also shown). (i)

(ii)

q

[ c

q

[

[

[

v

c(

v

One difference between (i) and (ii) is that the latter predicts that onset geminates could occur word-internally, not just at the beginning of the word. In support of (ii), Topintzi (2008) provides interesting evidence from Marshallese that word-internal geminates are syllabified as onsets and are not heterosyllabic as commonly assumed in Moraic Theory.

Stuart Davis

6 (8) a. b. c. (9) a. b. c. d. e.

Underlying representation

Output form

/maa/ /tHH/ /oo/

[maa] [tHH] [oo]

Underlying representation

Output form

/etiruu/ /ttoo/ /ŒŒaa/ /ssDD/ /ffHne/

[etiru] [tto] [ŒŒa] [ssD] [ffHn]

Unattested output ‘behavior’ ‘islet’ ‘omen’

*[ma] *[tH] *[o] Suffixed form [n] = relational

‘coconut mat’ ‘clam sp.’ ‘blood’ ‘thwart of a canoe’ ‘advice’

[etiruu-n] [ttoo-n] [ŒŒaa-n] [ssDD-n] [ffHne-n]

Trukese has a general process whereby a word-final long vowel shortens, as seen in (9a)–(9d). This is part of a more general process of final mora deletion, as evidenced in (9e). However, as (8) shows, final mora deletion does not apply if the result would be monomoraic, because Trukese has a minimal word constraint that requires nouns to be at least bimoraic. The fact that the word-final vowel does shorten in (9b)–(9d) strongly suggests that the initial geminate is moraic. That is, an output such as [tto] in (9b) is bimoraic, with a mora being contributed by both the vowel and the geminate. This is supportive of the underlying weight analysis of geminates.6 6

While discussion on Trukese in works such as Muller (2001), Curtis (2003), Topintzi (2006, 2010), and Ringen and Vago (2010) recognizes the inherent weight of Trukese geminates even though they do not all incorporate the underlying weight analysis of geminates (7d), these researchers have often contrasted the moraic behavior of initial geminates in Trukese with the clearly non-moraic behavior of initial geminates in Leti, as originally discussed in Hume et al. (1997). For example, although Leti has initial geminates. it lacks words consisting of an initial geminate followed by a short vowel such as [ppe]. Hume et al. maintain that the lack of such words argues against the moraicity of geminates, given the presence of a bimoraic minimal word condition. Following Davis (1999b, 2003), I maintain in this chapter that Leti is different from Trukese, because the initial geminates of Leti (but not Trukese) are extraprosodic, and that this is supported by the phonotactics of Leti. To see this, it is insightful to compare Leti geminates and word-initial clusters with those of Trukese. In Leti, underlying geminates occur only in word-initial position (Jennifer Muller, personal communication). In Trukese, in contrast, they occur in both word-initial and word-internal positions. Moreover, in Trukese, word-initial clusters other than geminates do not occur (with the exception of a few loanwords). On the other hand, word-initial clusters are pervasive in Leti, allowing for almost any possible sequence of two consonants at the beginning of the lexical word. There are no sonority restrictions on what these two consonants can be. The two consonants in a word-initial sequence can be an obstruent + sonorant, such as [pn pl pr tm tl tr vn vl vr], a sonorant + obstruent, such as [mb ms mv ns rs rv], a sonorant + sonorant, such as [mr nr rm rn rl], or two obstruents, such as [pt tp pk kp tk kt]. Given this patterning, one could realistically analyze the first consonant of a word-initial cluster in Leti as being extraprosodic. The initial consonant of such a cluster is unrestricted and can be identical to the following consonant. This means that the word-initial geminate of Leti consists of a sequence of identical consonants; the first consonant of the sequence would be extraprosodic just like the first consonant of any other word initial cluster. Such an analysis would explain the absence of Leti words like [ppe] or any other word of the shape CCV. With initial extraprosodicity, these forms would not comply with the bimoraic minimum. Given that underlying geminates only occur word-initially in Leti, and given the general phonotactics of word-initial clusters in Leti discussed above, I conclude that Leti presents a very different type of situation from Trukese, where the geminate phonology is tightly integrated with the rest of the phonology (see Davis and Torretta 1998). I suggest that the Leti case has no bearing on the issue of the underlying representation of geminates. Leti allows initial extraprosodic consonants, and the apparent geminate is just a coincidental case where the extraprosodic consonant has the same quality as the following prevocalic consonant.

Geminates

2.2

7

Asymmetrical cases of the patterning of geminates and consonant clusters

In the weight representation of geminates in (6a), repeated below in (10a), a consonant is underlyingly linked to a mora. This is contrasted with the length representation of geminates, in which a consonant is linked to two X-slots or C-slots, as in (10b). (10)

Contrasting representations for geminates a. Geminate in UR: weight representation

b. Geminate in UR: length representation C

c (geminate)

C

c (singleton)

An important difference between the two representations is that the length representation in (10b) tacitly assumes that geminates should pattern similarly to a sequence of two consonants for rules or constraints that reference the CV-tier. Crucially, the weight representation in (10a) does not make such an assumption. There can be cases where geminates will pattern differently than a sequence of two consonants. An example where there is a parallel in patterning between a geminate and a sequence of two consonants is the case of Hungarian epenthesis discussed by Ringen and Vago (2010) and noted earlier in the chapter. In Hungarian, in some verb stems that end in two consonants, an epenthetic vowel occurs after the two consonants when a consonantal suffix is added (e.g. /önt-s/ → [öntes] ‘pour-2sg’). No epenthesis occurs if the verb stem ends in a single consonant when the suffix is added (e.g. /kAp-s/ → [kaps] ‘receive-2sg’). This suggests a constraint for these forms that disallows a word-final sequence of three C-slots. The two representations of geminates in (10) seem to make different predictions for verb stems that end in a geminate when the consonantal suffix is added. On the length view (10b), epenthesis would be predicted since there would be three consecutive C-slots, but such a prediction is not clear, given the weight representation in (10a). As Ringen and Vago observe, epenthesis does occur in a verb stem ending in a geminate (e.g. /fygg-s/ → [fygges] ‘depend-2sg’), which clearly shows that a geminate patterns like a sequence of two consonants with respect to the CV-tier. While this is not conclusive evidence for a length representation, since one could express the epenthesis rule as being sensitive to mora structure, given certain other assumptions about Hungarian, the analysis is more straightforward under the length representation of geminates. Nonetheless, there exist striking cases where geminates do not pattern like a sequence of C-slots. One such example comes from Trukese geminates, shown in (9) above. As seen before, Trukese has word-initial geminates, but does not have word-initial consonant clusters. One would think that if a language allows for a word-initial geminate, it should also allow for a word-initial sequence of two consonants under the length representation of geminates in (10b). Moreover, Trukese has word-internal geminates that are intervocalic (e.g. [tikka] ‘coconut oil’), but Trukese does not generally allow for intervocalic consonant clusters. These

8

Stuart Davis

observations would be hard to account for under the length representation of geminates. One would expect that if two C-slots could occur at the beginning of the word or intervocalically, those two C-slots should not be restricted to just geminates. Furthermore, Trukese geminates do not pattern exactly like single consonants either. This is clearly seen in the observation that words in this language can end in a single consonant (as in the suffixed forms in (9)), but cannot end in a geminate. This contrastive behavior of geminates and consonant clusters on the one hand and geminates with singleton consonants on the other is consistent with and reflective of the weight analysis of geminates in (10a), especially in light of the fact that a word like [tto] in (9b), comprising an initial geminate followed by a short vowel, meets the bimoraic word-minimality requirement. The presence of word-final singleton consonants and the absence of word-final geminates can be seen as a reflection of a high-ranked constraint that disallows words to end in a moraic consonant. A final singleton consonant would not be considered moraic in Trukese. In this regard, it is important to note that Trukese lacks CVC words. This is expected, given bimoraic word minimality, if a word-final consonant is not moraic. Thus the patterning of Trukese geminates provides evidence for the weight representation in (10a) and against the length representation in (10b). (For further details and arguments for the weight representation of geminates in Trukese, see Davis and Torretta 1998.) Another example in which geminates pattern differently from consonant clusters concerns final geminates in Arabic. In some Arabic dialects, such as the Hadhrami dialect as spoken in the town of Ghayl Bawazir near the south coast of Yemen (Bamakhramah 2009, personal communication), consonant clusters are avoided in word-final position (e.g. [’girid] ‘a monkey’ from underlying /gird/, [’binit] ‘a girl’ from underlying /bint/); yet word-final geminates are allowed (e.g. [’rabb] ‘Lord’, [?a’xaff] ‘lighter’). Moreover, word-final geminates in Hadhrami Arabic are different from singleton consonants in that a word-final geminate attracts stress onto the last syllable of the word, but a word that ends in a singleton does not have such impact on stress (e.g. [?a’xaff] ‘lighter’ vs. [’?akbar] ‘greater’) (see also chapter 124: word stress in arabic). Under the length representation of geminates as in (10b) it would be difficult to explain why word-final geminates are allowed when word-final consonantal sequences are avoided. Moreover, the attraction of stress onto the final syllable of a word ending in a geminate is consistent with the weight representation, given that, as observed by Bamakhramah (2009), primary stress typically falls on the rightmost bimoraic syllable. Consequently, the patterning of geminates in languages like Trukese and Hadhrami Arabic calls the length representation into question.7 7

7 One matter that is historically important to note, but not focused on in the current chapter, concerns properties of geminates discussed in the literature of the 1980s, in works such as Hayes (1986) and Schein and Steriade (1986), regarding geminate integrity and geminate inalterability. Geminate integrity is the observation that rules of epenthesis tend not to split up a geminate consonant (at least not a “true” geminate, i.e. one that is monomorphemic and non-derived); geminate inalterability refers to the tendency for geminates to resist certain rules of segmental phonology (e.g. spirantization in Tiberian Hebrew) that a priori should apply to them. Hayes (1986), in particular, argued that the assumption of a CV-tier can account for integrity and inalterability effects. Integrity effects were accounted for by a length representation of geminates as in (10b), repeated below in (i), because the insertion of a specific vowel into a geminate, as in (ii), would violate the prohibition on crossing association lines.
(i) Geminate in the UR, length representation: two C-slots on the CV-tier linked to a single melody c.
(ii) Epenthesis into a geminate: a V-slot with melody v inserted between the two C-slots, yielding a structure in which association lines cross.
Geminate inalterability effects were handled by a condition on interpretation in segmental rules: association lines in the structural description of a rule had to be interpreted as exhaustive. A rule like spirantization in Tiberian Hebrew, which applied only to singleton consonants and not to geminates, would include in its rule environment a single C-slot linked to the phoneme. Since the rule environment did not explicitly show double linking, as in (i), the rule would fail to apply to geminates. Kenstowicz (1994: 410–416) summarizes important criticisms of the CV account of both geminate integrity and inalterability. He points out that geminate integrity could be called into question if epenthesis is viewed as a two-stage process of inserting a V-slot followed by a late default spell-out rule. However, it is worth noting that geminate integrity effects follow automatically from the weight representation of geminates in (10a). With respect to geminate inalterability, Kenstowicz specifically calls attention to work by Selkirk (1991), who noted that rules of inalterability tended always to involve spirantization processes, suggesting a more general explanation that does not involve the length representation of geminates. Along these lines, Kirchner (2000) approaches the issue of geminate inalterability from a general theory of lenition within a functionally based optimality-theoretic framework.
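The two predictions can be made concrete with a short illustrative Python sketch (mine, not part of the original analysis; the segment encoding and function names are assumptions for exposition only). Under the length view a geminate occupies two C-slots, so the Hungarian ban on three word-final C-slots forces epenthesis in /fygg-s/; under the weight view a geminate projects a mora, so Trukese [tto] already satisfies the bimoraic word minimum.

# Toy encoding: a segment is (symbol, is_vowel, is_double), where is_double
# marks a long vowel or a geminate consonant.
FYGG_S = [('f', False, False), ('y', True, False),
          ('g', False, True), ('s', False, False)]   # Hungarian /fygg + s/
TTO = [('t', False, True), ('o', True, False)]       # Trukese [tto]

def final_c_slots(word):
    """Length view (10b): count C-slots in the word-final consonant run;
    a geminate occupies two slots on the CV-tier."""
    run = 0
    for _, is_vowel, is_double in reversed(word):
        if is_vowel:
            break
        run += 2 if is_double else 1
    return run

def moras(word):
    """Weight view (10a): geminates are underlyingly moraic; short vowels
    project one mora, long vowels two (Weight-by-Position ignored here)."""
    mu = 0
    for _, is_vowel, is_double in word:
        if is_vowel:
            mu += 2 if is_double else 1
        elif is_double:
            mu += 1                      # geminate: underlyingly moraic
    return mu

assert final_c_slots(FYGG_S) == 3   # *CCC]word on the CV-tier -> [fygges]
assert moras(TTO) == 2              # meets the bimoraic word minimum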

2.3 Avoidance of CVVG syllables

Hayes’s theory of underlying moraic representation was presented in (7): geminate consonants (7d) differ from singleton consonants (7c) in being underlyingly moraic. Furthermore, Hayes distinguishes long vowels from short vowels by representing the former as underlyingly bimoraic and the latter as monomoraic. Moraic Theory has an implication for the patterning of geminates with respect to weight-sensitive processes – an implication not discussed by Hayes (1989), but taken up by other researchers such as Selkirk (1990), Tranel (1991), and Davis (1999a, 2003). That is, there should be languages in which syllables with a long vowel (CVV) and those closed by a geminate consonant (CVG) count as heavy, since they would be bimoraic, while CV syllables and CVC syllables (i.e. syllables closed by a non-geminate consonant) count as light, or monomoraic. This weight distinction is shown in (11) (G = geminate consonant, C = non-geminate consonant).

(11) Syllable weight distinction based on geminates being underlyingly moraic
     heavy: CVV, CVG
     light: CV, CVC

The system in (11) is predicted to occur under Hayes’s theory in any language that has long vowels and geminate consonants but does not treat coda consonants as moraic. The moraic representations of syllables with the structure of (11) are given in (12).


(12) Surface syllabification of the division in (11)
     Heavy (bimoraic):   a. [taː]   b. [tat.ta] (the first syllable is bimoraic)
     Light (monomoraic): c. [ta]    d. [tat]

As seen in (12b), if a geminate is underlyingly moraic, the syllable closed by a geminate (i.e. the first syllable of (12b)) will be bimoraic, just like a syllable containing a long vowel (12a). This should be contrasted with a syllable closed by a non-geminate, as in (12d). The syllable in (12d) reflects the syllabification in a language where the rule (or constraint) Weight-by-Position does not apply. Weight-by-Position is the language-specific rule posited by Hayes (1989) that makes a surface coda consonant moraic (see chapter 57: quantity-sensitivity). If the rule applies in a language, any closed syllable (CVC) in that language will behave as bimoraic; if it does not apply, closed syllables should pattern as light, or monomoraic. However, because geminates are underlyingly moraic within Hayes’s theory in (7), a syllable closed by a geminate (CVG) will always be bimoraic, as in (12b), even if Weight-by-Position does not generally apply to codas.

If the syllable weight distinction in (11) exists, as Hayes’s theory predicts, we would expect to find languages where a long vowel in a syllable closed by a geminate (CVVG, where G = the first part of a geminate) is avoided, while a long vowel in a syllable closed by a non-geminate is not (i.e. CVVC is allowed). Such avoidance of CVVG syllables can be seen as a particular instance of the cross-linguistically common avoidance of trimoraic syllables (Prince 1990). In languages that treat coda consonants as moraic, trimoraic syllables are often avoided, as is the case with the dialectal Arabic examples in (13). (See Broselow 1992 and Kiparsky 2003 for overviews of Arabic syllables.)

(13) Avoidance of trimoraic syllables in Arabic dialects
a. Cairene Arabic: /baab + na/ → [ˈbab.na] ‘our gate’
b. Meccan Arabic: /baab + na/ → [ˈbaa.ba.na] ‘our gate’

Both Cairene and Meccan Arabic avoid the potentially trimoraic parse of the first syllable of /baab + na/ as [ˈbaab.na]. The dialects differ, however, in how they avoid it: while Cairene Arabic favors closed-syllable shortening, Meccan Arabic preserves the underlying vowel length by applying vowel epenthesis between the two consonants to create open syllables, thereby avoiding any trimoraic syllable. In these Arabic dialects, the rule (or constraint) of Weight-by-Position applies, resulting in a moraic coda.

However, if we consider the weight division in (11), in which Weight-by-Position does not apply, we would expect to find a language where CVVG syllables, but not CVVC syllables, are avoided. A potential CVVG syllable would not surface, since it is trimoraic, while a CVVC could still occur, as it would only be bimoraic. Kiparsky (2008a) discusses Swedish dialects where vowel shortening occurs before a geminate but not before a single coda consonant. For example, in West Swedish (Kiparsky 2008a: 191), /ruu-dde/ ‘rowed’ surfaces as [rudde], with its underlying long vowel shortened, but no shortening occurs when a long vowel precedes a singleton coda consonant. The shortening before a geminate turns a potentially trimoraic CVVG syllable into bimoraic CVG; there is no need for shortening in a CVVC syllable, since it is only bimoraic, given that Weight-by-Position does not apply. Kiparsky specifically uses the Swedish data to argue for the moraic representation of geminate consonants.

Another language displaying this pattern of shortening is the Dravidian language Koya, brought up by Sherer (1994), based on Tyler (1969), and discussed in Davis (1999a, forthcoming). Koya has long vowels, coda consonants, and geminate consonants. Sherer notes that there are words in Koya like those in (14a)–(14c), with a long vowel before a coda consonant. Crucially, as Tyler (1969: 6) observes, there are no words that contain a long vowel before a geminate; vowels before geminates are always short, as in (14d). All Koya data are cited from Tyler (1969), with the page numbers provided. (The transcription of the vowel quality is phonemicized and does not reflect the precise allophonic variant.)

(14) a. leːŋga ‘calf’ (p. 11)    b. aːɳɖa ‘female’ (p. 8)
     c. neːrs ‘learn’ (p. 76)    d. ett ‘lift’ (p. 76)

Sherer additionally notes cases where a stem-final long vowel shortens before a geminate-initial suffix, as the examples in (15) show.

(15) a. keː + tt + oːɳɖu → [kettoːɳɖu] ‘he told’ (p. 39)
     b. oː + tt + oːɳɖu → [ottoːɳɖu] ‘he bought’ (p. 38)

This shortening can be viewed as a way of avoiding trimoraic syllables. Shortening does not occur before a non-geminate consonant, as the examples in (16) illustrate.

(16) a. naːl + ke → [naːlke] ‘tongue’ (p. 47)
     b. tuŋg + anaː + n + ki → [tuŋganaːŋki] ‘for the doing’ (p. 90)

In (16), a long vowel surfaces before a syllable-final singleton coda consonant. Since vowel shortening occurs before a geminate in (15), the Koya data in (14)–(16) are consistent with the weight system in (11), in which CVV and CVG syllables are bimoraic whereas CVC syllables are light.8

8 Curtis (2003: 169–170) suggests that the lack of word-internal CVVG syllables in Koya may be due to a shortening effect that geminate consonants have on preceding vowels, since the perceptual cues for vowel length can be blurred in CVVG syllables; thus, Curtis maintains that vowel shortening before geminates is independent of the issue of the moraic status of geminates. However, this does not explain cases like Fula in (19), where avoidance of CVVG is achieved by degemination rather than vowel shortening.
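The mora arithmetic behind these repairs can be sketched as follows (an illustrative toy of mine, assuming, as in the text, that geminates are moraic and Weight-by-Position is off; the syllable-shape labels are assumptions). Only CVVG reaches three moras, which is why it is the one shape that gets repaired.

def syllable_moras(vowel_long, coda, weight_by_position=False):
    """coda: None, 'C' (plain coda), or 'G' (first half of a geminate)."""
    mu = 2 if vowel_long else 1      # short V = 1 mora, long V = 2
    if coda == 'G':
        mu += 1                      # geminates are underlyingly moraic
    elif coda == 'C' and weight_by_position:
        mu += 1                      # plain codas only via Weight-by-Position
    return mu

SHAPES = {'CV': (False, None), 'CVC': (False, 'C'), 'CVG': (False, 'G'),
          'CVVC': (True, 'C'), 'CVVG': (True, 'G')}
for shape, (long_v, coda) in SHAPES.items():
    mu = syllable_moras(long_v, coda)
    note = '  <- trimoraic: shorten the vowel or degeminate' if mu > 2 else ''
    print(f'{shape}: {mu} mora(s){note}')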


While the above examples from Koya and Swedish are cases where vowel shortening occurs in syllables closed by a geminate, there are other languages where vowel-lengthening processes are blocked in CVG syllables but not in CVC syllables. This suggests that in such languages geminates are underlyingly moraic, although coda consonants in general are not: vowel lengthening does not apply before a geminate, since that would create a trimoraic syllable. This is illustrated by Seto (Southeastern Estonian), discussed by Kiparsky (2008b). According to Kiparsky, Seto has feet that are required to be trimoraic, a requirement normally implemented by foot-final vowel lengthening. As a result, a foot with the underlying sequence CV.CVC surfaces as CV.CVVC. However, given an input structure where the final consonant of the foot is part of a geminate, i.e. CV.CVG, no vowel lengthening occurs. This provides evidence that the geminate is underlyingly moraic: foot-final vowel lengthening need not occur in CV.CVG, since the foot is already trimoraic.

A different strategy for avoiding surface CVVG syllables is found in the West African language Fula, as discussed by Paradis (1988) and Sherer (1994). Fula avoids CVVG syllables by degeminating the consonant but, importantly, it allows CVVC syllables, as seen in (17).

(17) CVVC syllables in Fula (Sherer 1994: 176)
a. kaakt-e ‘spittle’
b. caak-ri ‘couscous’

This language has a suffixation process that triggers the gemination of a root-final consonant. Consider the singular/plural alternations in (18). Because of an active constraint that requires geminates to be [−continuant] in Fula, a root-final continuant segment changes to a stop when it geminates. (I thank Abbie Hantgan for help with the Fula data.)

(18) Fula morphological gemination (Paradis 1988: 78)
     stem (sg)   suffixed form (pl)
a.   lew         lebb-i               ‘month’
b.   lef         lepp-i               ‘ribbon’

Of relevance here is that when a long vowel precedes the stem-final consonant, gemination fails to occur, but the stem-final consonant nonetheless is realized as a stop. This is illustrated by the singular/plural alternations in (19).

(19) Lack of gemination after a long vowel (Paradis 1988: 80)
     stem (sg)   suffixed form (pl)   expected form
a.   laaw        laab-i               *laabb-i        ‘road’
b.   lees        leec-e               *leecc-e        ‘bed’

Given that gemination is part of this suffixing process, the expected forms in (19), in which the initial syllable would be CVVG, fail to surface as such. Rather, the attested suffixed forms in (19) look as though degemination has applied. This can be understood as the avoidance of a trimoraic CVVG syllable.


Since CVVC syllables are allowed, as in (17), Fula seems to instantiate a language with the weight system of (11), where CVG syllables are heavy but other CVC syllables are not. Thus, we see that a variety of languages have strategies to avoid CVVG syllables but not CVVC syllables. These languages can be understood as providing support for the inherent weight analysis of geminates.
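The Fula pattern lends itself to a small schematic sketch (a simplification of my own, reduced to the alternations in (18)–(19); the hardening map is partial and the helper name is hypothetical):

# The plural suffix geminates the stem-final consonant; geminates must be
# [-continuant] (so continuants harden), and gemination is suppressed after
# a long vowel, since a CVVG syllable would be trimoraic.
HARDEN = {'w': 'b', 'f': 'p', 's': 'c'}   # partial, for the data in (18)-(19)

def plural(body, final_c, long_vowel, suffix):
    hard = HARDEN.get(final_c, final_c)   # geminates must be stops
    if long_vowel:                        # *CVVG: degemination
        return f'{body}{hard}-{suffix}'
    return f'{body}{hard}{hard}-{suffix}'

assert plural('le', 'w', False, 'i') == 'lebb-i'    # (18a) lew 'month'
assert plural('laa', 'w', True, 'i') == 'laab-i'    # (19a) laaw 'road'
assert plural('lee', 's', True, 'e') == 'leec-e'    # (19b) lees 'bed'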

3 The patterning of geminates in stress systems

One criticism of the weight analysis of geminates proposed by Hayes (1989), discussed by Tranel (1991), comes from the observation that there do not seem to be quantity-sensitive stress systems that support the weight division in (11), repeated below as (20), in which stress would be attracted onto a syllable with a long vowel or closed by a geminate consonant, but not onto a syllable closed by a non-geminate.

(20) Syllable weight distinction based on geminates being underlyingly moraic
     heavy: CVV, CVG
     light: CV, CVC

The system in (20) is predicted to occur under Hayes’s theory in any language that allows long vowels and geminate consonants, but in which Weight-by-Position does not generally apply to coda consonants. According to the division in (20), CVV and CVG syllables would syllabify as bimoraic, while CV and CVC syllables would syllabify as monomoraic, as was shown in (12). Since quantity-sensitive stress systems single out bimoraic syllables, it would be expected that at least some quantity-sensitive stress systems would reflect the weight division in (20) if the moraic representation of geminates were correct. Tranel suggests that weight systems like (20) do not exist and instead proposes a principle of equal weight for codas: in languages in which codas pattern as moraic, geminates will be moraic, but in languages in which codas are not moraic, geminates will not be moraic either. While our observation in §2.3 above – that in a variety of languages CVVG syllables are avoided but CVVC syllables are not – can be taken as evidence for the weight division in (20), Tranel’s observation is of importance.

In this section, we will overview the behavior of geminates in quantity-sensitive stress systems. In §3.1, we will provide stress data from various languages that do support the division in (20), whereby CVV and CVG syllables pattern together, thus supporting the moraic weight analysis of geminates. These are languages whose stress patterns Tranel predicts not to occur. In §3.2, I will review the type of case mentioned by Tranel, in which quantity-sensitive stress treats all closed syllables in the same manner, whether CVG or CVC. These are the languages that motivated Tranel’s principle of equal weight for codas, and they can be considered somewhat problematic for the weight analysis of geminates. In §3.3, I will present the case of the Australian language Ngalakgan (variably spelled Ngalagkan and Ngalakan; Baker 1997, 2008), in which CVC syllables can attract stress but CVG syllables apparently cannot. This seems to suggest that geminates are somehow resistant to carrying a mora. Thus, this section will identify three different types of geminate behavior with respect to quantity-sensitive stress systems.


In §4, we will consider how this variability of behavior can be reconciled with the representation of geminates.
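As a preview, the competing predictions can be stated as a toy classifier (a sketch of my own; the parameter names are assumptions, not Hayes’s or Tranel’s terms): Hayes’s theory makes CVG heavy independently of Weight-by-Position, while Tranel’s equal-weight principle ties the weight of CVG to that of CVC.

def is_heavy(vowel_long, coda, geminates_moraic=True, weight_by_position=False):
    """coda: None, 'C', or 'G'. Returns True if the syllable is bimoraic."""
    if vowel_long:
        return True
    if coda == 'G':
        return geminates_moraic or weight_by_position
    return coda == 'C' and weight_by_position

# Hayes's prediction, the division in (20): CVV and CVG heavy; CV, CVC light.
assert [is_heavy(True, None), is_heavy(False, 'G'),
        is_heavy(False, None), is_heavy(False, 'C')] == [True, True, False, False]

# Tranel's equal weight for codas: CVG always matches CVC.
for wbp in (True, False):
    assert (is_heavy(False, 'G', geminates_moraic=False, weight_by_position=wbp)
            == is_heavy(False, 'C', weight_by_position=wbp))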

3.1 Languages that uniquely treat CVG closed syllables as heavy with respect to stress

In this subsection I discuss two languages cited in the literature in which syllables closed by a geminate (CVG) function as heavy, like syllables with a long vowel, in attracting stress, even when other closed syllables (CVC) do not act as heavy. Such languages provide stress evidence for the syllable weight division in (20). The two languages to be discussed are the Uto-Aztecan language Cahuilla and the San’ani (Yemen) dialect of Arabic. Hayes (1995) noted that stress assignment in Cahuilla distinguishes CVG syllables from CVC syllables: CVG syllables behave like CVV syllables in attracting stress. Consider the data in (21), taken from Hayes (1995; see also the sources cited therein).

(21) Cahuilla stress
a. ˈta.ka.ˌli.ʧem ‘one-eyed ones’
b. ˈʧe.xi.ˌwen ‘it is clear’
c. ˈtax.mu.ʔat ‘song’
d. ˈhe.ʔi.ˈka.kaw.ˌlaː.ˌqa ‘his legs are bow-shaped’
e. ˈqaːn.ˌki.ʧem ‘palo verde (pl)’
f. ˈʧex.ˌxi.wen ‘it is very clear’

The data items in (21a)–(21d) help to establish Cahuilla as having iterative left-to-right stress on alternating syllables (i.e. trochaic foot structure), although CVC syllables are not distinguished from CV syllables.9 (See also chapter 40: the foot and chapter 44: the iambic-trochaic law for more general discussion of trochaic foot structure.) The quantity-insensitive nature of CVC syllables is most clearly seen in the third and fourth syllables of (21d), which form the trochaic foot (ˈka.kaw): the second syllable of this foot is treated as light despite the presence of a coda consonant, indicating the monomoraic nature of CVC syllables. On the other hand, the quantity-sensitive nature of the stress system can be seen not only in the stress-attracting nature of syllables with long vowels, but also in the observation that the syllable immediately after a long vowel receives stress. This is witnessed by the last two syllables in (21d), both of which bear stress and each of which comprises a foot, (laː)(qa); this can be understood if Cahuilla feet are maximally bimoraic. Consequently, the last two syllables in (21d) cannot form a single foot, since such a foot would be trimoraic. The syllable with the long vowel (laː) comprises a bimoraic foot on its own; the final syllable (qa) is forced to comprise a (monomoraic) foot on its own, due to a constraint in the language that requires exhaustive footing. The data item in (21e) is similar to the last two syllables of (21d) in that the syllable with a long vowel forms a foot on its own and the syllable immediately after it begins a new trochaic foot, thereby receiving stress. We see, then, that bimoraic CVV syllables in Cahuilla are distinguished from monomoraic syllables not only in bearing stress, but also by the presence of stress on the syllable immediately after them. CV and CVC syllables lack these two characteristics and function as monomoraic. It is interesting in this light to observe the patterning of syllables closed by a geminate, as in (21f), where the first syllable is CVG. This CVG syllable functions as bimoraic: it has stress, as would be expected of any initial syllable, but, crucially, it patterns exactly like a CVV syllable in that the syllable immediately after it also carries stress. This provides evidence that the CVG syllable comprises a bimoraic (trochaic) foot on its own, and contrasts with the initial CVC syllable in (21c), which forms a trochaic foot with the following syllable, suggesting its monomoraic nature. Cahuilla thus serves as a clear illustration of the weight distinction in (20), in which stress treats syllables with long vowels and those closed by geminates as bimoraic, but not other types of syllables, be they CV or CVC.

9 Hayes (1995) notes that Cahuilla CVC syllables closed by glottal stops, but not other CVC syllables, are also treated as bimoraic. I shall not discuss this, other than to note that in the Ngalakgan stress pattern presented in (25), CVC syllables closed by a glottal stop exceptionally act as monomoraic. The variable behavior of coda glottal consonants with respect to syllable weight is known but infrequently discussed (but see Churma and Shi 1996).
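The footing logic just described can be rendered as a small parser (my own simplification; syllables are reduced to their mora counts, and the function name is hypothetical):

def cahuilla_stresses(moras_per_syllable):
    """Left-to-right moraic trochees, maximally bimoraic, exhaustive footing;
    the first syllable of each foot is stressed."""
    stressed, i = [], 0
    while i < len(moras_per_syllable):
        stressed.append(i)                       # foot-initial = stressed
        if moras_per_syllable[i] == 2:
            i += 1                               # bimoraic syllable: own foot
        elif i + 1 < len(moras_per_syllable) and moras_per_syllable[i + 1] == 1:
            i += 2                               # two lights form one foot
        else:
            i += 1                               # stray light: degenerate foot
    return stressed

# (21d): (he.?i)(ka.kaw)(laa)(qa) -- the CVV syllable foots alone, so the
# final syllable is also stressed.
assert cahuilla_stresses([1, 1, 1, 1, 2, 1]) == [0, 2, 4, 5]
# (21f): the initial CVG syllable is bimoraic and foots alone, so the second
# syllable is stressed too, exactly as after the CVV syllable of (21e).
assert cahuilla_stresses([2, 1, 1]) == [0, 1]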

San’ani Arabic (Yemen) presents a very interesting case, in which CVV and CVG syllables pattern together with respect to stress. Watson (2002: 81–82) specifically notes that they pattern together, as opposed to CVC syllables (see also chapter 124: word stress in arabic). Consider the data in (22), which illustrate the stress pattern in words without geminates.

(22) San’ani Arabic (Watson 2002: 81–82)
a. mak.ˈtuːb ‘office’
b. da.ˈrast ‘I/you (masc sg) learnt’
c. ˈsaː.fa.rat ‘she travelled’
d. miɣ.ˈsaː.lih ‘launderette’
e. mi.ˈgam.bar ‘sitting’
f. ˈmad.ra.sih ‘school’
g. mak.ˈta.ba.tiː ‘my library’
h. ˈli.bi.sat ‘she wore/put on’
i. ˈka.tab ‘he wrote’
j. ˈra.ga.ba.tih ‘his neck’

Stress normally falls on one of the last three syllables of the word: it falls on a final superheavy syllable (CVVC or CVCC) if there is one, as in (22a)–(22b); it falls on the rightmost non-final heavy syllable (CVC or CVV) up to the antepenultimate, as in (22c)–(22f); otherwise, stress falls on the leftmost CV syllable, as in (22g)–(22j). The data in general show that the word-final segment does not play a role in the computation of weight, so that the final syllable can only be stressed if it is superheavy. The word in (22g) illustrates two important aspects of the stress system. It shows that a word-final syllable ending in a long vowel does not attract the stress; it also indicates that a CVC syllable in pre-antepenultimate position fails to attract stress. The latter point is significant, since it suggests that Weight-by-Position, which assigns a mora to a coda consonant, is restricted to one of the last three syllables of the word. Now let us consider the data in (23), with words possessing geminate consonants.

(23) San’ani Arabic (Watson 2002: 81–82): stress on words with geminate consonants
a. ji.ˈħib.bu ‘they (masc) love/like’
b. mit.ˈʔax.xi.raːt ‘late (fem pl)’
c. mu.ˈsadʒ.dʒi.la.ti ‘my recorder’
d. ˈhaː.ka.ða.haː ‘like this’
e. ˈdaw.wart ‘I/you (masc sg) looked for’
f. ˈsaː.fart ‘I/you (masc sg) travelled’

The comparison between (22) and (23) indicates the priority of CVG and CVV syllables for stress assignment, in that CVG and (non-final) CVV syllables always attract stress even in pre-antepenultimate position, as in (23c) and (23d). The word in (22g), in contrast, shows that a CVC syllable does not receive stress in pre-antepenultimate position. The difference between CVG and CVC syllables can readily be understood on the inherent weight analysis of geminates: if a geminate is underlyingly moraic, it contributes weight to the syllable regardless of its location in the word. Recall that Weight-by-Position does not apply here, because it is restricted to one of the last three syllables in San’ani Arabic; in pre-antepenultimate position, only CVV and CVG act as bimoraic. Moreover, (23e) shows that CVG syllables take priority for stress over a final superheavy syllable, and should be compared with (22a), where a regular CVC syllable lacks such priority. It could be argued that Weight-by-Position in San’ani Arabic applies only in words that would not otherwise have bimoraic syllables (CVV or CVG); that is, there is no necessity for Weight-by-Position to apply in (23e) or (23f). While we do not pursue a full analysis here (but see Watson 2002), the priority given to both CVV and CVG syllables in stress assignment, especially as seen in the comparison of (23c) and (23d) with (22g), provides an interesting argument for the underlying moraic weight analysis of geminates and, in turn, against Tranel’s (1991) claim of equal weight for codas.

We have detailed above two cases where CVV and CVG syllables pattern together with respect to stress, as predicted by the inherent weight analysis of geminates. Further support for the weight analysis of geminates is found in other languages. For example, Gupta (1987) discusses a Hindi dialect in which stress is attracted to the leftmost heaviest syllable in the word. The dialect treats both CVV and CVG syllables as bimoraic, while CVC syllables behave as light, although, as noted by Curtis (2003), such a pattern appears unusual among Hindi dialects. Additional support may come from the stress system of Pattani Malay, discussed by Topintzi (2006, 2008, 2010) and references cited therein. Pattani Malay has geminates that are restricted to word-initial position, and the language lacks long vowels. Although primary stress typically falls on the final syllable of a word, stress occurs on the initial syllable in words that begin with a geminate consonant. This can be taken as evidence for the moraification in (10a), where a geminate is underlyingly moraic: stress is attracted onto a syllable that is bimoraic.

Despite the range of examples presented in this section, it remains rare to find languages that display the weight system in (20), grouping CVV and CVG syllables together as heavy. It is possible, on the other hand, that this rarity is due to the infrequent occurrence of the specific set of properties required for CVV and CVG to pattern together in stress assignment: namely, the language would have to have quantity-sensitive stress, long vowels, coda consonants, and geminates.


Perhaps when such languages, San’ani Arabic for one, are examined closely, more instances of the special properties of CVG syllables will emerge. In this connection, it is worth noting that in most Arabic dialects CVG syllables are special: they always attract stress when in word-final position. This property separates them from other CVC syllables, which do not attract stress in word-final position. The difference thus finds a logical explanation in the underlying weight analysis of geminate consonants (especially in those dialects, such as Hadhrami Arabic, discussed earlier, which disallow final consonant clusters).10

10 In many Arabic dialects, word-final CVC syllables behave as extrametrical. Ham (1998) puts forward the very intriguing observation that final CVC syllables are always extrametrical in languages that possess word-final geminates. This is because a word-final geminate is moraic and would need to be distinguished in final position from a potential moraic coda. With the underlying moraic weight representation of geminates in (10a), final extrametricality of CVC syllables is able to preserve the contrast between an underlying final geminate and the corresponding final singleton consonant: the geminate of a final CVG syllable surfaces as moraic, while the singleton coda of a final CVC syllable is non-moraic. This difference is found in Arabic dialects where a final CVG syllable (i.e. a bimoraic one) attracts stress, making it distinct from a final CVC syllable, which is light (monomoraic) and does not attract the stress. In a variety of other languages having word-final geminates examined by Ham (1998), the same distinction is made between final CVG and CVC syllables. If Ham’s observation holds up to further scrutiny, it constitutes an interesting argument for the underlying moraification of geminate consonants. (See also Topintzi 2008: 175 for discussion of this point.)
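Watson’s generalizations, together with the special status of CVG, can also be rendered procedurally (a sketch of my own, not Watson’s formal analysis; the shape labels are assumptions). CVV/CVG are checked first, anywhere in the word, which captures (23c)–(23f); CVC is consulted only within the three-syllable window.

def sanani_stress(sylls):
    """sylls: syllable shapes, e.g. ['CV', 'CVG', 'CV', 'CV', 'CV'].
    Returns the index of the syllable bearing primary stress."""
    last = len(sylls) - 1
    for i in range(last - 1, -1, -1):            # CVV/CVG: inherently bimoraic,
        if sylls[i] in ('CVV', 'CVG'):           # any distance from the edge
            return i
    if sylls[last] in ('CVVC', 'CVCC'):          # final superheavy
        return last
    for i in range(last - 1, -1, -1):            # CVC: only within the
        if sylls[i] == 'CVC' and i >= len(sylls) - 3:   # last three syllables
            return i
    return sylls.index('CV') if 'CV' in sylls else 0   # else leftmost CV

# (23c): a pre-antepenultimate CVG attracts stress ...
assert sanani_stress(['CV', 'CVG', 'CV', 'CV', 'CV']) == 1
# ... but in (22g) the pre-antepenultimate CVC does not; stress falls on
# the leftmost CV instead.
assert sanani_stress(['CVC', 'CV', 'CV', 'CVV']) == 1
# (23e): CVG even outranks a final superheavy, unlike the CV of (22b).
assert sanani_stress(['CVG', 'CVCC']) == 0
assert sanani_stress(['CV', 'CVCC']) == 1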

3.2 Languages in which stress treats all codas equally

There are two types of languages in which stress assignment treats all codas equally. In the first type, stress is quantity-sensitive and is attracted to a heavy syllable, be it CVV, CVC, or CVG. Latin belongs to this group: any coda consonant makes a syllable heavy, so both CVC and CVG syllables are bimoraic. In the second type, which is more relevant for the representation of geminate consonants, both CVC and CVG syllables behave as light in a quantity-sensitive stress system: stress is attracted to a CVV syllable, while CVG patterns as monomoraic, just like other CVC syllables. As an illustration, consider the stress data from the Uralic language Selkup in (24). The data in (24a)–(24f) come from Halle and Clements (1983); the items in (24g)–(24h) are reported in Ringen and Vago (2010) from the Selkup language scholar Eugene Helimski, and reflect the Taz Selkup dialect, which seems to have the same stress pattern as that in Halle and Clements (1983).

(24) Selkup stress
a. quˈmoːqi ‘two human beings’
b. ˈuːcɨqo ‘to work’
c. uːˈcɔːmɨt ‘we work’
d. ˈqumɨnɨk ‘human being (dat)’
e. ˈamɨrna ‘eats’
f. ˈuːcɨkkak ‘I am working’
g. ˈesɨkka ‘(it) happens (occasionally)’
h. esˈsɨːqo ‘to happen (already)’



In Selkup, primary stress falls on the rightmost syllable with a long vowel (24a)–(24c), or on the initial syllable if there are no long vowels (24d). A CVC syllable does not count as heavy (24e), even if the syllable is closed by a geminate, as seen in (24f) and (24g). As noted by Tranel (1991), if stress targets bimoraic syllables and geminates are underlyingly moraic, the second syllable in (24f) and (24g) should be the rightmost bimoraic syllable: both the vowel and the geminate would contribute a mora to it. The fact that (24f) and (24g) do not receive stress on the second syllable thus seems to provide evidence against geminates being underlyingly moraic, favoring a representation of geminates different from that in (10a). The stress pattern of Selkup does not appear to be unique in ignoring geminate consonants: Davis (1999a: 41) notes that the Altaic language Chuvash (Krueger 1961) exhibits an almost identical stress pattern, in which stress is attracted to the rightmost syllable with a full vowel (interpreted as bimoraic) but CVG syllables are ignored. Thus, in both Chuvash and Selkup, CVG syllables do not function as bimoraic CVV syllables, but instead act like monomoraic CV and CVC syllables.

Data from languages like Selkup have been used by Tranel (1991) and Ringen and Vago (2010) to argue against the underlying moraic weight representation of geminates. Ringen and Vago note that such languages are consistent with the length analysis of geminates in (10b): stress is sensitive to the presence of a long vowel and ignores a coda consonant, whether or not the coda is part of a geminate. It is not, however, that proponents of the weight representation are unaware of languages like Selkup. Topintzi (2008, 2010), who for the most part maintains the underlying moraic weight view of geminates, suggests that weightless geminates are represented as double consonants with two root nodes, rather than as a single root node linked to a mora as in (10a). Such a proposal, though, implies that there is language-specific variation in the representation of geminate consonants. Davis (2003), on the other hand, suggests that the stress pattern of languages like Selkup does not necessarily argue against the underlying moraic representation of geminates: viewed from an optimality-theoretic perspective, the pattern can be a consequence of certain high-ranking stress constraints that have the effect of ignoring the bimoraicity of any CVC syllable. As suggested by Steriade (1990: 275), there may be reasons in some languages to restrict the set of stress-bearing segments to those that are also tone-bearing, “for reasons that are clearly related to the fact that pitch is one of the main realizations of metrical prominence.” Steriade’s suggestion can be incorporated into an optimality-theoretic approach as a constraint that restricts pitch realization to vocalic elements: the constraint prefers to place stress on any CVV syllable over any syllable closed by a consonant, even if that consonant is part of a geminate. Thus, the lack of second-syllable stress in (24f) and (24g) in the Selkup data need not reflect on the underlying moraicity of geminate consonants.
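Davis’s suggestion amounts to a filter on which moras count for stress, which can be sketched as follows (a toy rendering of mine; the shape-to-mora table is an assumption): the geminate’s mora is present underlyingly, but the stress rule consults only vocalic moras.

VOCALIC_MORAS = {'CV': 1, 'CVC': 1, 'CVG': 1, 'CVV': 2, 'CVVC': 2}

def selkup_stress(sylls):
    """Rightmost syllable whose *vowel* is bimoraic; else the initial one."""
    for i in range(len(sylls) - 1, -1, -1):
        if VOCALIC_MORAS[sylls[i]] == 2:
            return i
    return 0

assert selkup_stress(['CV', 'CVV', 'CV']) == 1      # (24a): long vowel stressed
assert selkup_stress(['CVV', 'CVG', 'CVC']) == 0    # (24f): geminate ignored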

3.3 Languages in which geminates repel stress

A third type of geminate behavior is witnessed in languages where stress is attracted to a closed syllable, but not to one closed by a geminate. The Australian language Ngalakgan, discussed by Baker (1997, 2008), serves as a major example.


Consider the pattern of primary stress in (25), taken from Baker (2008), who notes that other geographically proximate languages have a similar stress pattern.

(25) Ngalakgan
a. puˈruÍci ‘water python’
b. kiˈpiÍkuluc ‘frogmouth (bird)’
c. miˈÈarppuʔ ‘crab’
d. puˈÍolkoʔ ‘brolga (bird)’
e. ˌmacaˈpurka ‘plant sp.’
f. ˈÎaÈkurca ‘vine sp.’
g. ˈcalpurkic ‘fish sp.’
h. ˈciwi ‘liver’
i. ˈceraÍa ‘women’s ceremony’
j. ˈpaÈaˌmunu ‘sand goanna’
k. ˈcalapir ‘red ant (species)’
l. ˈkupuj ‘sweat (n)’
m. kaKˈÍalppuru ‘plains kangaroo’
n. ˈcakanta ‘macropod sp.’
o. ˈŋuruKÍuc ‘emu’
p. ˈŋoloŋkoʔ ‘eucalyptus’
q. ˈŋamucˌculo ‘subsection term’
r. ˈcapatta ‘tortoise sp.’
s. ˈmoÎoppoÎ ‘catfish sp.’
t. ˈŋaKaʔpaj ‘and moreover’

(25a)–(25l) show that primary stress in Ngalakgan falls on the leftmost (non-final) heavy syllable; if there is no heavy syllable, it falls on the initial syllable. From these data, it can be surmised that a coda consonant is moraic, making a syllable heavy, whether it is an obstruent (as in (25a) and (25b)) or a sonorant (as in (25c)–(25g)). The data in (25m)–(25t) show that the leftmost closed syllable fails to attract primary stress. Note that the coda of the leftmost closed syllable in (25m)–(25t) belongs to one of three types: in (25m)–(25p) the coda is a nasal homorganic with the following onset, in (25q)–(25s) it is the first part of a geminate consonant, and in (25t) the coda is a glottal stop. Key to our discussion is the fact that the CVG syllables in (25q)–(25s) resist stress. However, comparing the stress-resistant CVG syllables with the other stress-resistant closed syllables in (25m)–(25p) and (25t) points to a property they share: their codas do not possess their own place features. Either the place features are shared with the following onset or, in the case of the glottal stop in (25t), place features are lacking altogether. Thus, Ngalakgan seems to divide closed syllables into two types: those in (25a)–(25l), in which the coda has independent place features and attracts stress, and those in which the coda does not have its own independent place features and fails to attract stress. This suggests that Ngalakgan is best analyzed as having a requirement that moraic elements have independent place features (i.e. not shared with a following onset), as advocated in Baker (1997). It follows that the stressed closed syllables in (25a)–(25l) would be bimoraic and attract stress, whereas the closed syllables in (25m)–(25t) would be monomoraic and not attract stress.

Languages like Ngalakgan seem to present a challenge for the underlying weight representation of geminates in (10a), since not only do syllables closed by geminates fail to attract stress, they are not even equal in weight to the CVC syllables in (25a)–(25l), which do attract stress. Baker (2008) offers an articulatory, gestural analysis of the difference. He observes that an apparent CVC syllable attracts stress only if the postvocalic coda consonant has an articulatory gesture distinct from that of the following onset. That is, in a CVCCV sequence, the first syllable counts as heavy only if the two intervocalic consonants have distinct articulatory gestures. When the intervocalic sequence involves a geminate or a nasal homorganic with a following consonant (or a glottal stop, which lacks a distinct articulatory gesture), there is only one articulatory gesture, and stress is not attracted onto CVG (or CVN, where N is homorganic with a following consonant).


Baker (2009) adopts a composite view of geminate representation that incorporates a gestural tier along with root nodes and moras. Under this view, stress in Ngalakgan is characterized as sensitive to the gestural tier. Ringen and Vago (2010), in contrast, take the Ngalakgan data as supporting the length representation of geminates, where the stress rule treats linked structures like those in (10b) as light. Davis (2003), who maintains the underlying weight representation of geminates, suggests that the Ngalakgan stress pattern in (25) does not necessarily argue against the underlying moraic representation of geminates as in (10a); rather, the language has a high-ranked constraint that requires moraic elements to have their own place features. Thus, while geminates may be underlyingly moraic, they do not surface as moraic.11

To conclude this section, we have surveyed languages demonstrating three types of behavior of CVG syllables in stress systems: (i) cases where CVG and CVV pattern together; (ii) cases where CVG patterns with other CVC syllables; and (iii) cases where CVG syllables are specifically resistant to stress. We have tried to maintain the underlying moraic weight representation of geminates despite apparent evidence to the contrary. In the concluding section we will further discuss representational issues.

11 It should be noted that Baker (2008) actually considers the intervocalic geminates in (25q)–(25s) and the intervocalic homorganic nasal clusters in (25m)–(25p) to be syllabified entirely as onsets rather than as heterosyllabic. This differs from his earlier work (Baker 1997), where a heterosyllabic parse of geminates and homorganic nasal clusters is maintained. One shortcoming of Baker’s (2008) onset analysis is that geminates and homorganic nasal clusters do not occur word-initially. Nonetheless, even if geminates could be analyzed as syllabifying as onsets, they would not add weight to a syllable and would thus differ from the initial geminates of Trukese discussed earlier.
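The place condition invoked by Davis (2003) can be sketched as a filter on surface moraicity (my own encoding of the coda properties; illustrative only):

def surface_coda_mora(coda):
    """coda: None or a dict of flags. A coda projects a surface mora only if
    it has place features of its own -- i.e. it is not the first half of a
    geminate, not a nasal homorganic with the following onset, and not a
    glottal stop (which lacks place altogether)."""
    if coda is None:
        return 0
    no_own_place = (coda['geminate'] or coda['homorganic_nasal']
                    or coda['glottal'])
    return 0 if no_own_place else 1

# (25a)-(25l): a coda with independent place makes the syllable heavy.
assert surface_coda_mora({'geminate': False, 'homorganic_nasal': False,
                          'glottal': False}) == 1
# (25q)-(25s): a geminate coda surfaces as non-moraic, so CVG stays light.
assert surface_coda_mora({'geminate': True, 'homorganic_nasal': False,
                          'glottal': False}) == 0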

4 Representational issues and conclusion

In this overview, we have focused on the cross-linguistic patterning of geminate consonants while trying to maintain the representational view of geminates in (10a), in which geminates are marked as underlyingly moraic, over the length representation in (10b). In §2, we provided evidence for the underlying moraic representation of geminate consonants by considering a variety of phonological patterns pertinent to geminates, including the moraic analysis of initial geminates in Trukese in §2.1 and the cross-linguistic avoidance of CVVG syllables in §2.3. We made clear in §2.2 that geminates do not always behave like a sequence of two C-slots in prosodic patterning, thereby contradicting the length representation of geminates in (10b). In §3, we surveyed geminate patterning in stress systems, identifying three types of behavior; despite the differences, we still argued for the underlying moraic view of geminate consonants.

The issue of the representation of geminate consonants has been a controversial matter and will most likely remain so in future investigations. This is because geminates do not display uniform behavior, as we have illustrated. It seems that the very nature of the data under examination determines which type of representation appears most appropriate. For example, the parallel patterning of final CC sequences and final geminates in Hungarian seems to support the length representation, while the difference between final CC sequences and final geminates in Arabic dialects such as Hadhrami Arabic appears to be better accounted for by the weight analysis of geminates.



Such differences lead to three possibilities for a representational view of geminates, in a general sense. The first possibility is that geminate representation is language-specific: in a language of the Trukese type, geminates are represented as underlyingly moraic (10a), as argued for by Davis and Torretta (1998) and Davis (1999b), but in a language of the Hungarian type, geminates have a length representation (10b), as argued for by Ringen and Vago (2010).12 The second possibility – the one maintained in the body of this chapter – is that there is one specific universal representation of geminates (i.e. the underlying moraic weight representation), and apparent counterexamples to it are explained by constraints that act on surface forms. This strategy was emphasized especially in §§3.2 and 3.3. For example, in the Selkup-type stress pattern in (24), which ignores all CVC syllables, including those closed by a geminate, there is an independent constraint that restricts pitch realization to vocalic elements. The constraint would choose to place stress on any CVV syllable over any syllable closed by a consonant, even if that consonant were part of a geminate; the lack of stress-attracting CVG syllables in Selkup thus need not reflect on the underlying moraicity of geminate consonants. Given that geminates show varied degrees of sensitivity to stress, as seen in §3, any single universal analysis of geminates would have to make use of such flexible strategies to account for the apparently problematic cases.

A third possibility is to maintain that there is a universal representation of geminates, but one that combines different aspects of the various representations discussed in §1 and elsewhere in this chapter. Hume et al. (1997) combine a length representation with moraic theory, and Curtis (2003) integrates the two-root-node representation presented in (5) with a moraic view. (See Curtis 2003 for a detailed discussion and comparison of different representational views of geminate consonants.) A more recent composite view is implicit in Baker (2009), in which a geminate is represented on both a timing (length) tier and a gestural tier; it is also viewed as having a moraic representation if it functions as heavy. Baker’s (2009) representation of a geminate [k] is shown in (26).

(26) Composite representation of a geminate [k] (after Baker 2009): a single gesture [k] on the gestural tier, linked to two X-slots on the timing (length) tier (and to a mora when the geminate functions as heavy).
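The composite view is easy to picture as a data structure (a sketch of my own, not Baker’s formalism): one geminate object carries a gestural tier, a timing tier, and an underlying mora, and different languages’ constraints consult different tiers.

from dataclasses import dataclass

@dataclass
class Geminate:
    gesture: str           # one articulatory gesture on the gestural tier
    x_slots: int = 2       # two slots on the timing (length) tier
    moraic: bool = True    # underlying weight, may be suppressed on the surface

kk = Geminate('k')

# Trukese-type phonology reads the mora (word minimality, weight);
# Hungarian-type phonology reads the X-slots (CV-tier phonotactics);
# Ngalakgan-type phonology reads the gestural tier (one gesture -> light).
print(kk.moraic, kk.x_slots, kk.gesture)   # True 2 k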

Baker’s analysis is quite attractive as a universal, especially if we assume the underlying moraic nature of geminates. Languages display variable patterning for geminates because the phonology (i.e. the constraints or rules) makes reference to different aspects of the representation. For example, in Trukese, the moraic aspect of the geminate representation is crucial, while the geminate may or may not surface with two X-slots, depending on the constraints. In Hungarian, given the parallel behavior of word-final clusters and geminates, the X-tier representation is effective, while the geminate may or may not surface as moraic, depending on the constraints. In Ngalakgan, the behavior of geminates with respect to stress can be understood through the gestural tier, as proposed by Baker (2008, 2009): geminates and homorganic nasal + consonant clusters have a single gesture. Consequently, they can pattern as single consonants despite having two X-slots, as long as there is a high-ranked constraint that requires a moraic element not to share place features. This provides an explanation for the resistance of geminates to stress. It follows that while geminates may have one underlying universal representation (as in (26)), their surface realization may vary cross-linguistically, e.g. as non-moraic in Ngalakgan, but moraic in Trukese.

The composite view, or some version of it, may ultimately be the best universal representation for an underlying geminate. For example, one criticism of a purely weight-based account of geminates is that it cannot distinguish a geminate that syllabifies entirely in a coda from a single coda consonant in a language in which Weight-by-Position applies. This matter comes up in the Palestinian Arabic dialect described by Abu-Salim (1980) and mentioned in Rose (2000), where a coda singleton in a word like [bit.na] ‘our house’ is representationally indistinguishable from a coda geminate in a word like [sitt.na] ‘our grandmother’ on a strictly moraic view of geminates as in (10a). Unless a length tier (or two root nodes) is assumed, there is no obvious way to distinguish the two cases. Although such examples are probably rare, the occurrence of this type of contrast indeed favors a composite analysis, especially given the language-specific evidence for Arabic moraic structure presented in various parts of this chapter. That said, it may still be possible to argue for the underlying moraic weight representation as universal, but with the understanding that the surface realization of geminates may vary across languages because of the interaction of relevant constraints.

In conclusion, there is much about the phonology of geminates that remains to be investigated. Geminates do not all pattern the same way across languages. Consequently, geminate phonology will remain an area of theoretical controversy for the foreseeable future.

12 José and Auger (2005) argue that even within a single language not all geminates need have the same representation. According to them, in Vimeu Picard, (phrase-)initial geminates differ as to whether they have a single set of features or two sets of identical features linked to two root nodes. From phonological patterning they argue that initial [ll] has the former representation, while initial [nn] has the latter.

ACKNOWLEDGMENTS

I thank Brett Baker, Ken de Jong, Daniel Dinnsen, Tracy Alan Hall, Abbie Hantgan, Brian José, Michael Marlo, Paul Newman, Anne Pycha, Nathan Sanders, Robert Vago, and Islam Youssef for discussion of various aspects of this paper. I especially thank Natsuko Tsujimura for her detailed comments. The usual disclaimers apply.

REFERENCES

Abu-Salim, Issam M. 1980. Epenthesis and geminate consonants in Palestinian Arabic. Studies in the Linguistic Sciences 10(2). 1–11.
Baker, Brett. 1997. Edge crispness: Segment to mora isomorphism. Proceedings of the West Coast Conference on Formal Linguistics 16. 33–47.


Baker, Brett. 2008. Word structure in Ngalakgan. Stanford: CSLI.
Baker, Brett. 2009. Monogestural clusters as onsets: The Australian evidence. Paper presented at the 83rd Annual Meeting of the Linguistic Society of America, San Francisco.
Bamakhramah, Majdi. 2009. Syllable structure in Arabic varieties with a focus on superheavy syllables. Ph.D. dissertation, Indiana University.
Broselow, Ellen. 1992. Parametric variation in Arabic dialect phonology. In Ellen Broselow, Mushira Eid & John J. McCarthy (eds.) Perspectives on Arabic linguistics IV, 7–45. Amsterdam & Philadelphia: John Benjamins.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Churchyard, Henry. 1991. Compensatory lengthening and “gemination throwback” in Trukese and Puluwat as evidence for Rime in moraic phonology. Unpublished ms., University of Texas.
Churma, Donald & Yili Shi. 1996. Glottal consonants and the “sonority” hierarchy. Proceedings of the Eastern States Conference on Linguistics (ESCOL) 1995. 25–37.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Curtis, Emily. 2003. Geminate weight: Case studies and formal models. Ph.D. dissertation, University of Washington.
Davis, Stuart. 1994. Geminate consonants in moraic phonology. Proceedings of the West Coast Conference on Formal Linguistics 13. 32–45.
Davis, Stuart. 1999a. On the moraic representation of underlying geminates: Evidence from prosodic morphology. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 39–61. Cambridge: Cambridge University Press.
Davis, Stuart. 1999b. On the representation of initial geminates. Phonology 16. 93–104.
Davis, Stuart. 2003. The controversy over geminates and syllable weight. In Féry & van de Vijver (2003), 77–98.
Davis, Stuart. Forthcoming. Quantity. In John A. Goldsmith, Jason Riggle & Alan C. L. Yu (eds.) The handbook of phonological theory, 2nd edn. Malden, MA & Oxford: Wiley-Blackwell.
Davis, Stuart & Gina Torretta. 1998. An optimality-theoretic account of compensatory lengthening and geminate throwback in Trukese. Papers from the Annual Meeting of the North East Linguistic Society 28. 111–125.
Dyen, Isadore. 1965. A sketch of Trukese grammar. New Haven: American Oriental Society.
Féry, Caroline & Ruben van de Vijver (eds.) 2003. The syllable in Optimality Theory. Cambridge: Cambridge University Press.
Goodenough, Ward & Hiroshi Sugita. 1980. Trukese–English dictionary. Philadelphia: American Philosophical Society.
Gupta, Abha. 1987. Hindi word stress and the obligatory branching parameter. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23. 134–148.
Halle, Morris & G. N. Clements. 1983. Problem book in phonology. Cambridge, MA: MIT Press.
Halle, Morris & Jean-Roger Vergnaud. 1980. Three-dimensional phonology. Journal of Linguistic Research 1. 83–105.
Ham, William. 1998. Phonetic and phonological aspects of geminate timing. Ph.D. dissertation, Cornell University.
Hart, Michele. 1991. The moraic status of initial geminates in Trukese. Proceedings of the Annual Meeting, Berkeley Linguistics Society 17. 107–120.
Hayes, Bruce. 1986. Inalterability in CV phonology. Language 62. 321–351.
Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.


Hume, Elizabeth, Jennifer Muller & Aone van Engelenhoven. 1997. Non-moraic geminates in Leti. Phonology 14. 371–402.
Idemaru, Kaori & Susan Guion. 2008. Acoustic covariants of length contrast in Japanese stops. Journal of the International Phonetic Association 38. 167–186.
José, Brian & Julie Auger. 2005. Geminates and Picard pronominal clitic allomorphy. Catalan Journal of Linguistics 4. 127–154.
Kawahara, Shigeto. 2007. Sonorancy and geminacy. University of Massachusetts Occasional Papers 32. 145–186.
Keer, Edward. 1998. Icelandic preaspiration and the moraic theory of geminates. Proceedings of the 10th Conference of Nordic and General Linguistics, Reykjavik. (ROA-312.)
Keer, Edward. 1999. Geminates, the OCP and the nature of Con. Ph.D. dissertation, Rutgers University.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Kiparsky, Paul. 2003. Syllables and moras in Arabic. In Féry & van de Vijver (2003), 147–182.
Kiparsky, Paul. 2008a. Fenno-Swedish quantity: Contrast in Stratal OT. In Bert Vaux & Andrew Nevins (eds.) Rules, constraints, and phonological phenomena, 185–219. Oxford: Oxford University Press.
Kiparsky, Paul. 2008b. Weight and length. Paper presented at the CUNY Conference on the Syllable, New York City.
Kirchner, Robert. 2000. Geminate inalterability and lenition. Language 76. 509–545.
Kraehenmann, Astrid. 2001. Quantity and prosody asymmetries in Alemannic: Synchronic and diachronic perspectives. Ph.D. dissertation, University of Konstanz. Published 2003, Berlin & New York: Mouton de Gruyter.
Krueger, John. 1961. Chuvash manual: Introduction, grammar, reader, and vocabulary. Bloomington: Indiana University Press.
Leben, William R. 1980. A metrical analysis of length. Linguistic Inquiry 11. 497–509.
Levin, Juliette. 1985. A metrical theory of syllabicity. Ph.D. dissertation, MIT.
McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT.
McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12. 373–418.
McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108.
Morén, Bruce. 1999. Distinctiveness, coercion and sonority: A unified theory of weight. Ph.D. dissertation, University of Maryland at College Park.
Muller, Jennifer. 2001. The phonology and phonetics of word-initial geminates. Ph.D. dissertation, Ohio State University.
Pajak, Bozena. Forthcoming. Contextual constraints on geminates: The case of Polish. Proceedings of the Annual Meeting, Berkeley Linguistics Society 35. (ROA-105.)
Paradis, Carole. 1988. On constraints and repair strategies. Linguistic Review 6. 71–97.
Prince, Alan. 1990. Quantitative consequences of rhythmic organization. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26(2). 355–398.
Pycha, Anne. 2007. Phonetic vs. phonological lengthening in affricates. In Jürgen Trouvain & William J. Barry (eds.) Proceedings of the 16th International Congress of Phonetic Sciences, 1757–1760. Saarbrücken: Saarland University.
Pycha, Anne. 2009. Lengthened affricates as a test case for the phonetics–phonology interface. Journal of the International Phonetic Association 39. 1–31.
Ringen, Catherine & Robert M. Vago. 2010. Geminates: Heavy or long? In Charles Cairns & Eric Raimy (eds.) Handbook of the syllable. Leiden: Brill.
Rose, Sharon. 2000. Rethinking geminates, long-distance geminates, and the OCP. Linguistic Inquiry 31. 85–122.
Schein, Barry & Donca Steriade. 1986. On geminates. Linguistic Inquiry 17. 691–744.


Schmidt, Deborah. 1992. Compensatory lengthening in a segmental moraic theory of representation. Linguistics 30. 513–534.
Selkirk, Elisabeth. 1990. A two-root theory of length. University of Massachusetts Occasional Papers 14. 123–171.
Selkirk, Elisabeth. 1991. On the inalterability of geminates. In Pier Marco Bertinetto, Michael Kenstowicz & Michele Loporcaro (eds.) Certamen Phonologicum II: Papers from the 1990 Cortona Phonology Meeting, 187–209. Turin: Rosenberg & Sellier.
Sherer, Timothy. 1994. Prosodic phonotactics. Ph.D. dissertation, University of Massachusetts, Amherst.
Steriade, Donca. 1990. Moras and other slots. Proceedings of the Formal Linguistics Society of Midamerica 1. 254–280.
Thurgood, Graham. 1993. Geminates: A cross-linguistic examination. In Joel Ashmore Nevis, Gerald McMenamin & Graham Thurgood (eds.) Papers in honor of Frederick H. Brengelman on the occasion of the twenty-fifth anniversary of the Department of Linguistics, CSU Fresno, 129–139. Fresno: Department of Linguistics, California State University, Fresno.
Topintzi, Nina. 2006. Moraic onsets. Ph.D. dissertation, University College London.
Topintzi, Nina. 2008. On the existence of moraic onsets. Natural Language and Linguistic Theory 26. 147–184.
Topintzi, Nina. 2010. Onsets: Suprasegmental and prosodic behaviour. Cambridge: Cambridge University Press.
Tranel, Bernard. 1991. CVC light syllables, geminates and moraic theory. Phonology 8. 291–302.
Tsujimura, Natsuko. 2007. An introduction to Japanese linguistics. 2nd edn. Cambridge, MA: Blackwell.
Tyler, Stephen. 1969. Koya: An outline grammar. Berkeley: University of California Press.
Watson, Janet C. E. 2002. The phonology and morphology of Arabic. Oxford: Oxford University Press.

38 The Representation of sC Clusters

Heather Goad

1 Introduction

Clusters of the shape s + consonant (sC), exemplified by stack, have posed a challenge for theories of syllabification, as they defy many of the constraints holding of true branching onsets, as found in, for example, track. Accordingly, some researchers have proposed that s is organized outside the onset constituent that contains the following consonant. Others have proposed that this sort of analysis holds only for a subset of sC clusters: those that rise in sonority, for example in slack, are represented in the same fashion as branching onsets. Yet others have argued that some sC clusters, those of the shape s + stop, form complex segments. This chapter will critique each of these proposals. An element shared by all of them is that phonological units are highly articulated: the burden of explanation is placed precisely on the structural relationships that adjacent segments enter into. Although this approach captures many peculiarities of sC clusters, there is little attempt to explain why the consonant that displays unorthodox behavior is typically s. Under the view that segments are ordered to maximize their perceptibility, the behavior of s becomes less puzzling: strident fricatives have robust internal cues, ensuring their perceptibility even in non-optimal contexts. If the acoustic properties of s are of central importance, it behoves us to ask whether the differences between sC clusters and branching onsets can be explained solely by perceptual considerations. This, of course, would challenge the view that a structural approach to cluster well-formedness is necessary. In the final section of the paper I will argue that this position is too strong. I will conclude that an adequate understanding of sC clusters requires consideration of both perceptual and structural factors.

Much of the paper compares sC clusters with branching onsets. We will observe that they differ in several respects: phonotactic constraints, word-internal syllabification, allomorph selection, patterns of reduplication, options for cluster repair, etc. Although the general observations we will detail are likely to be accepted by most phonologists, there is little agreement on how these differences should be formally represented. In this context, there are three topics that will be addressed. The first involves critical assessment of various proposals in the literature concerning the representation of sC clusters – as a single class – in contrast to obstruent + sonorant clusters.


obstruent + sonorant clusters. Henceforth, I will use the term “obstruent” to refer to obstruents other than s. “s” is itself a cover term for the sibilant(s) appearing in sC clusters; although this sibilant is usually /s/, in some languages, other sibilants pattern as s (e.g. German s is usually /ʃ/; in Russian, /s z ʃ ʒ/ all pattern as s). Concerning obstruent + sonorant clusters, there will be nothing particularly special to say about their representation; they form branching onsets, and there is little controversy on this matter among those who accept a hierarchically organized syllable. For sC clusters, in contrast, several options will be considered. Most of these share the idea that s is an appendix, a segment which is not organized by any sub-syllabic constituent; one views s as a coda. We will see, in addition, that some researchers propose a single representation for sC in all languages; others argue for different representations across languages.

The second topic that must be addressed is whether, in a given language, all sC clusters are represented in the same fashion. s + sonorant clusters are phonotactically ambiguous: like obstruent + sonorant clusters, they rise in sonority, yet like s + obstruent clusters, they do not respect the place constraints holding of obstruent + sonorant clusters. Depending on the weight assigned to each of these, different conclusions will be arrived at concerning the analysis of s + sonorant. For those researchers who place most weight on sonority profile, s + sonorant clusters form branching onsets. This research itself falls into two categories. One body of work aims to show that s + sonorant patterns with branching onsets while s + obstruent patterns differently, and is organized with some type of appendix. Another body of work considers s + sonorant clusters to be branching onsets, but focuses on arguing that s + stop clusters form complex segments.

The proposals sketched above assume that the syllable is hierarchically organized. However, there is a growing literature that de-emphasizes the role of constituency and aims to provide phonetically grounded explanations for phonological behavior. The third topic therefore considers whether differences in the behavior of obstruent + sonorant, s + sonorant, and s + obstruent can be explained by perceptual considerations alone. This topic will be the focus of the final section of the paper. Until then, a structural approach will be assumed.

2 Cluster phonotactics

We begin by detailing the phonotactic constraints most commonly held of obstruent + sonorant clusters on the place and sonority dimensions, in turn, examining sC clusters on these same dimensions (see also chapter 33: syllable-internal structure; chapter 55: onsets). Our focus will be on the left word edge; other types of phonological behavior will be discussed in later sections, when we examine alternative representations for sC clusters. Consider the inventories of two-member clusters found in word-initial position in English and Dutch in (1) and (2).1 The data are organized by the place and manner values of C1 for obstruent + sonorant clusters and of C2 for sC clusters. (On cluster phonotactics for English, see Fudge 1969; Selkirk 1982; Clements and

1 We restrict discussion of obstruents in clusters to those that are voiceless; some languages display fewer options for voiced obstruents (e.g. */vl vr/ in English). We avoid consonant + glide clusters altogether, as there are more representational options available for glides than we have space to consider.


Keyser 1983; Goldsmith 1990; Harris 1994; for Dutch, see Trommelen 1984; van der Hulst 1984; Fikkert 1994; Booij 1995; van der Torre 2003.)

(1) English
a. Obstruent + sonorant: pl *tl kl; pr tr kr; fl *θl *ʃl; fr θr ʃr
b. sC clusters: sp st sk; sm sn; sl *sr

(2) Dutch
a. Obstruent + sonorant: *tn kn; pl *tl kl; pr tr kr; fl xl; fr xr
b. sC clusters: sp st sx; sm sn; sl *sr

We first consider place identity, which forbids the consonants in obstruent + sonorant clusters from having the same place (chapter 22: consonantal place of articulation). This captures the ill-formedness of */tl θl ʃl/ in English (1a), and */tn tl/ in Dutch (2a).2 Turning to (1b) and (2b), the well-formedness of /st sn sl/ indicates that sC clusters do not respect place identity, suggesting that sC clusters do not have the same representation as obstruent + sonorant clusters. However, before we can conclude this with certainty, we must consider place-sharing */sr/, which is ill-formed in English and Dutch.3 Importantly, */sr/ is illicit even in Dutch dialects with dorsal /r/, suggesting that the ill-formedness of this cluster has nothing to do with place identity. We return to */sr/ later in the chapter.

A second, less commonly discussed, constraint on place concerns asymmetries that hold between C1 and C2 in obstruent + sonorant clusters (when C2 ≠ glide). English is not very revealing, because C2 is restricted to liquids, which are coronal. Dutch is potentially more illuminating, because it contains stop + nasal clusters, and nasals contrast for place.4 As (2a) shows, when a nasal is in C2 in an obstruent + sonorant cluster, it must be coronal: /kn/ is well-formed; */km/ is out. The broader generalization is thus that when C2 is a contoid, it must be coronal (except /r/; see note 2).

2 Place identity is not respected with /r/. In English and Dutch dialects with coronal /r/, coronal + /r/ clusters are well-formed, as are dorsal + /r/ clusters in Dutch dialects with dorsal /r/. Even in languages where /t/ and /r/ are articulated near-identically, the constraint is not respected (see Arvaniti 2007 on Greek). This may suggest that /r/ permanently lacks place (Rice 1992; Goad and Rose 2004; see also chapter 30: the representation of rhotics).
3 Concerning Dutch */sr/, some speakers realize /sxr/ as [sr] (Waals 1999). If this represents a re-analysis of /sxr/ (van der Torre 2003), then /sr/ is well-formed for these speakers. Concerning English /ʃr/, we have placed this cluster in the obstruent + sonorant category (Goad and Rose 2004), rather than treating it as an assimilated form of /sr/ (Clements and Keyser 1983; Goldsmith 1990).
4 I say “potentially,” because, as is undoubtedly evident, obstruent + sonorant clusters will be analyzed as branching onsets below. There is, however, dispute about the status of /kn/ in Dutch, as branching onset (Fikkert 1994; Booij 1995) or appendix-initial (van der Hulst 1984; Trommelen 1984; Kager and Zonneveld 1986). Notably, intervocalic /kn/ is syllabified as coda + onset, contra the branching onset analysis.


C2 in sC clusters has a different profile: it can have any place of articulation. Indeed, in s + obstruent and s + nasal clusters, C2 displays the same range of place contrasts attested for singleton onsets (e.g. /sp st sk/ alongside /p t k/ in English). Directly comparing nasal-final clusters in Dutch, we find the following: */km/, /kn/; /sm/, /sn/. The absence of */km/ alongside the presence of /sm/ is unexpected, if these clusters are represented identically.

Because of the disputed status of Dutch /kn/ (note 4), we turn to Modern Greek to better examine differences in place profile between C2 in obstruent + sonorant vs. sC clusters. (On Modern Greek cluster phonotactics, see Joseph and Philippaki-Warburton 1987; Drachman 1990; Klepousniotou 1998; Morelli 1999; Tzakosta and Vis 2009; on Attic Greek, see Steriade 1982.) The data in (3) reveal that, although C2 in an obstruent + sonorant cluster can have any manner, leading to a wider range of cluster profiles than in Dutch or English, C2 must still be coronal (Klepousniotou 1998).5 (3b) shows that sC clusters are more restricted on the manner dimension than they are in Dutch, but among clusters with obstruents in C2, it can nevertheless be seen that C2 can have any place.

(3) Modern Greek
a. Obstruent + sonorant:
   pt (pn) pl pr; ft fθ *fn fl fr; (mn)
   *tn *tl tr; *θt (θn) (θl) θr
   kt (kn) kl kr; xt (xθ) xn xl xr
b. sC clusters:
   sp sf (sm)
   st (sθ) (sn) *sl *sr
   sk sx

In sum, we have observed that C2 in an obstruent + sonorant cluster does not parallel C2 in an sC cluster on the place dimension. On the contrary, a closer parallel is observed between C1 in an obstruent + sonorant cluster and C2 in an sC cluster, which we return to below. We consider finally the sonority constraints that hold between C1 and C2 in initial clusters.6 Greek is not very revealing here, as both obstruent + sonorant and sC clusters can have a falling, flat, or rising sonority profile (although (3b) indicates that the productivity of the latter for sC clusters is questionable; we return to this below). We therefore focus on English and Dutch. (1) and (2) show that sC clusters need not rise in sonority, in contrast to obstruent + sonorant clusters. Although C1 in an sC cluster is an obstruent and so the potential exists for these 5

5 Stop + stop and fricative + stop are often considered to be archaic in spoken Modern Greek (Joseph and Philippaki-Warburton 1987; Morelli 1999). They do occur in higher registers, which is why they are included here. /ps ts ks/ are absent from (3a); I assume they are complex segments (following Tzakosta and Vis 2009). Clusters in parentheses (as well as /tm/) are not productive, although the number of /sm/-initial roots is somewhat larger than for the others. Thanks to Jenny Dalalakis and Katerina Klepousniotou for help with the Greek data.
6 I assume the following sonority scale, which is roughly based on relative intensity: stop < fricative < nasal < liquid < vocoid. See also chapter 49: sonority.


clusters to be limited to those that rise in sonority, this is not what is observed. Both languages contain falling sonority s + stop, and Dutch has flat sonority /sx/.
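The three diagnostics just reviewed (place identity, C2 coronality, rising sonority) can be made concrete in a short sketch. Everything in it is an illustrative assumption layered on top of the chapter's generalizations rather than part of the original text: the feature table, the ASCII spellings ("T" for /θ/, "S" for /ʃ/), and the function name. The sonority values follow note 6, and the placeless treatment of /r/ follows note 2.

    # A minimal sketch (Python) of the branching onset diagnostics discussed
    # above: place identity, C2 coronality, and rising sonority. Encoding assumed.

    SONORITY = {"stop": 1, "fricative": 2, "nasal": 3, "liquid": 4}  # note 6

    # Toy feature table for some English onset consonants; "T" = /θ/, "S" = /ʃ/.
    # /r/ is treated as placeless, following note 2.
    FEATURES = {
        "p": ("labial", "stop"), "t": ("coronal", "stop"), "k": ("dorsal", "stop"),
        "f": ("labial", "fricative"), "T": ("coronal", "fricative"),
        "S": ("coronal", "fricative"), "s": ("coronal", "fricative"),
        "m": ("labial", "nasal"), "n": ("coronal", "nasal"),
        "l": ("coronal", "liquid"), "r": (None, "liquid"),
    }

    def branching_onset_ok(c1, c2):
        """True if c1 + c2 satisfies the constraints on true branching onsets."""
        p1, m1 = FEATURES[c1]
        p2, m2 = FEATURES[c2]
        rising = SONORITY[m2] > SONORITY[m1]      # sonority must rise
        distinct_place = p2 is None or p1 != p2   # place identity ban (modulo /r/)
        coronal_c2 = p2 in ("coronal", None)      # C2 contoids must be coronal
        return rising and distinct_place and coronal_c2

    for c1, c2 in [("p", "l"), ("t", "l"), ("T", "r"), ("s", "t"), ("s", "p"), ("s", "l")]:
        print(c1 + c2, branching_onset_ok(c1, c2))

Run on the English data in (1), the checker accepts pl and θr but rejects *tl; crucially, it also rejects well-formed st, sp, and even sl, which is precisely the point of this section: sC clusters are licit while flouting the constraints that define branching onsets, so they must be organized differently.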

3 sC clusters ≠ branching onsets

Our examination of phonotactic constraints has revealed that obstruent + sonorant and sC clusters pattern differently on three dimensions: place identity, C2 place profile, and C1C2 sonority profile. Hereafter, we turn to other types of phonological behavior. Our first goal is to motivate the position that sC clusters are represented differently from obstruent + sonorant clusters. We then delve into the various options for sC clusters. For both of these topics, we will focus on phenomena where sC clusters pattern as a single class. The following section will then examine evidence that s + sonorant and s + obstruent clusters may be organized differently, s + sonorant clusters as branching onsets and/or s + stop as complex segments.

In the preceding section, we observed that obstruent + sonorant clusters have a profile consistent with the prototypical syllable, one that rises in sonority toward the peak. This indicates that clusters of this shape (modulo language-specific constraints on place identity and sonority distance) are organized as branching onsets; see (4).

(4) Branching onset: [O p l], a single onset constituent O dominating both p and l

Following Kaye et al. (1990), I assume that onsets are universally left-headed. Heads can host a range of segmental contrasts; dependents are segmentally restricted. Considering place contrasts, earlier discussion of C2 place profile revealed that there is more in common between C1 in a branching onset and C2 in an sC cluster. This suggests that sC clusters are right-headed. If onsets are left-headed, this de facto places s outside of this constituent, as reflected in the representations in (5). In (5), skeletal slots have been suppressed to facilitate comparison across theories; PWd abbreviates Prosodic Word (see chapter 51: the phonological word).

(5) sC cluster
a. Extraprosodic (e.g. Steriade 1982): s is licensed but unaffiliated to any higher structure; only the following consonant is organized by the onset: s [O p]
b. Licensed by PWd (e.g. Goldsmith 1990): s is attached directly to the Prosodic Word: [PWd s [σ [O p] …]]
c. Licensed by σ (e.g. van der Hulst 1984): s is attached directly to the syllable: [σ s [O p] …]
d. Coda (e.g. Kaye 1992): s is the rhymal dependent of a preceding empty-headed syllable: [O Ø] [R [N Ø] s] [O p]

In (5a), s is extraprosodic. I am using this term in its narrowest sense, to refer only to the situation where an element is licensed but not organized into higher structure; compare (5b) and (5c). These two share with (5a) the idea that s is an appendix: s is not organized by any sub-syllabic constituent, in contrast to (5d), where s is a coda (technically in Kaye’s model a rhymal dependent). Two predictions follow from the difference in representation and headedness in (4) vs. (5). First, there should be languages that permit dependents in branching onsets but not sC clusters, and vice versa. This prediction holds true. Spanish is a language with branching onsets, but lacking initial sC clusters (Harris 1983). Acoma, spoken in New Mexico, has the opposite profile: initial clusters are restricted to sC (Miller 1965). Relatedly, Fikkert’s (1994) study on the acquisition of Dutch reveals that some children acquire branching onsets first, parallel to Spanish, while others acquire sC clusters first, parallel to Acoma. Second, languages permitting both types of structures should not prevent them from being combined. To my knowledge, this prediction always holds. In languages that have branching onsets and sC clusters, three-member clusters of the shape s + branching onset are also well-formed. However, these clusters may be restricted to a subset of what would be expected from a free combination of sC clusters and branching onsets in the particular language (e.g. Greek /sx/, /xr/, */sxr/; English /sk/, /kl/, */skl/ (loans aside)). Further, an explanation emerges under (5) for why the constraints against place identity and for rising sonority do not hold of sC clusters. It is not enough for two consonants to be adjacent; they must be sisters, as in (4). Finally, we demonstrate that the difference in headedness between branching onsets and sC clusters can account for a commonly attested pattern of cluster reduction in acquisition, illustrated in (6) from two learners of German and English respectively, Annalena (Elsen 1991) and Amahl (Smith 1973).7 (6)

a.

Annalena (age 1;4–1;9) obs + son s + obs s + son

7

output [daQbe] [f>kH] [p>gHl] [da>nH] [m>sHn] [lA(fH]

target [tr]aube [fl]iege [œp]iegel [œt]ein [œm]eißen [œl]afen

‘grape’ ‘fly’ ‘mirror’ ‘stone’ ‘to throw’ ‘to sleep’

[d Ò ı] indicate voiceless unaspirated lenis stops in Amahl’s data (Smith 1973).

Heather Goad

7 b.

Amahl (age 2;2–2;6) output obs + son [de(t] [ıi(m] s + obs [daidH] [ı>p] s + son [ni(d] [lZg]

‘plate’ ‘cream’ ‘spider’ ‘skipping’ ‘sneezed’ ‘slug’

In (6), the head of the cluster survives, regardless of its position in the string or its relative sonority. This suggests that some learners recognize that branching onsets are left-headed and sC clusters right-headed, even though left-edge clusters are altogether absent from their outputs (Goad and Rose 2004).
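Because the prediction is mechanical, it can be stated as a two-line rule. The sketch below is an illustrative encoding of the head-survival generalization only; the function name and segment spellings are my assumptions, not the authors' notation. Branching onsets are left-headed, sC clusters right-headed, and reduction preserves exactly the head.

    # A sketch (Python) of head survival in cluster reduction: sC clusters are
    # right-headed, true branching onsets left-headed (encoding assumed).

    def surviving_segment(cluster):
        """Predict which member of a word-initial cluster survives reduction."""
        if cluster[0] == "s":   # sC: s is an appendix/coda, so C2 is the head
            return cluster[1]
        return cluster[0]       # branching onset: C1 is the head

    # Amahl-style reductions (6b): /pl/ -> p, /kr/ -> k, /sp/ -> p, /sn/ -> n
    for cl in ["pl", "kr", "sp", "sk", "sn", "sl"]:
        print(f"/{cl}/ reduces to [{surviving_segment(cl)}]")

Note that a purely sonority-based reducer would instead keep the less sonorous member across the board, wrongly predicting [s] for /sn/ and /sl/; the head-based rule matches the data in (6).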

4 Alternative representations for sC clusters

Thus far, we have motivated different representations for branching onsets and sC clusters, based on evidence from phonotactics and cluster reduction in acquisition. We turn now to critique the various options for sC clusters in (5).

In (5a), s is extraprosodic.8 Although it is unaffiliated to any syllable constituent (at least at early levels of the phonology), it is licensed and thus not subject to stray erasure. This structure is motivated by, for example, Steriade (1982) with data from Attic Greek, Sanskrit, Latin, and English; Vennemann (1982) and Wiese (1988) from German; Kager and Zonneveld (1986) from Dutch; Chierchia (1986) and Davis (1990) from Italian.

Although (5a) captures the unique behavior of sC clusters, it cannot easily express the observation that some languages have different types of extraprosodic segments. In English, for example, the same status should not be assigned to /s/ in sC clusters as to inflectional /s/ in (7a), yet both fall outside of core syllabification. Further, on the face of it, some languages permit adjacent extraprosodic segments. English allows an extra morpheme-internal position at the right edge of words that is not permitted word-internally, which some researchers have analyzed as extrarhymal (e.g. Myers 1987). Since such words can also be inflected, (7b), the result would appear to be two adjacent extraprosodic positions (see also chapter 36: final consonants).9

(7) English
a. [stræps] straps
b. [tækst] taxed, [træmps] tramps

If the final two positions in (7b) are both licensed as extraprosodic at the same point, the result would violate the Peripherality Condition: extraprosodic status can only be assigned to elements in peripheral positions (Hayes 1980; Harris 1983).

8 (5a) disregards the fact that scholars working with this type of representation hold different views on whether the syllable is internally structured.
9 More troubling are words like [nekst] next, with two morpheme-internal extraprosodic positions; we return to such cases when we examine the possibility that s + stop clusters form complex segments.


The alternative that is typically adopted, therefore, is that these two types of extraprosodic segments are extraprosodic at different points in the derivation. Extraprosodic /s/ in taxed and /p/ in tramps are incorporated as codas in the post-lexical phonology before inflectional /t/ and /s/ are added (cf. Borowsky 1986). Initial /s/ in (7a) similarly loses extraprosodic status in the post-lexical phonology, where lexical constraints on sonority profile are no longer assumed to hold. The problem with this approach, however, is that it fails to show that extraprosodic elements ever function as members of the constituents that ultimately come to organize them, the onset in (7a) and the coda in (7b) (Piggott 1991).

This problem is resolved once the Strict Layer Hypothesis is abandoned – the requirement that elements be dominated by the immediately higher category in the prosodic hierarchy (Nespor and Vogel 1986). Extraprosodic segments on this view are technically not extraprosodic (unaffiliated); instead, they are organized by some higher constituent in the prosodic hierarchy. One possible representation for /stæmps/, consistent with this approach, is in (8): initial /s/ is organized by the PWd, extrarhymal /p/ is organized by the syllable, and inflectional /s/ is adjoined to the PWd. This representation is consistent with the Peripherality Condition: extrarhymal /p/ and inflectional /s/ are each at the right edge of a separate PWd (Goad and White 2006).

(8) [PWd [PWd s [σ [O t] [R [N æ] [C m]] p]] s]

In (8), appendixal /s/ attaches to the inner PWd, /t/ heads the onset, /æ/ and /m/ occupy the nucleus and coda of the rhyme, extrarhymal /p/ is organized directly by the syllable, and inflectional /s/ is adjoined to the outer PWd.

Returning specifically to sC clusters, the representation for /st/ in (8) involves s linking directly to the PWd (see e.g. Goldsmith 1990, drawing on evidence from English; Trommelen 1984 and Fikkert 1994 from Dutch; Goad and Rose 2004 from German). An alternative involves s linking directly to the syllable, the inverse of extrarhymal /p/ in (8) (see e.g. van der Hulst 1984 with evidence from Dutch; Levin 1985 and Kenstowicz 1994 from English; Barlow 1997 and Gierut 1999 from English in phonologically delayed children; Drachman 1990 and Tzakosta and Vis 2009 from Greek; Tzakosta 2009 from child Greek).10 These two proposals were provided earlier as (5b) and (5c).11 10

10 Note that Levin (1985) adjoins, rather than directly links, s to the syllable. To the list in the text we can add Giegerich (1992), Hall (1992), and Booij (1995), who analyze s as an onset-internal appendix, using data from English, German, and Dutch, respectively; and Ewen and Botma (2009), who organize s into the specifier position of the onset for Germanic.
11 Other less commonly proposed licensers for s will not be considered due to space constraints: e.g. the Foot (Green 2003 on Munster Irish) and the Phonological Phrase (Vaux 1998 on Armenian).


The alternatives in (5b) and (5c) make different predictions concerning the distribution of sC clusters. I show that both options (or their equivalents) are needed, indicating that sC clusters cannot be represented identically across languages (Goad and Rose 2004; Vaux 2004; Ewen and Botma 2009). In languages with (5b), sC is only licensed PWd-initially. German has this profile (Goad and Rose 2004). (9) shows that sC clusters only occur stem-initially; word-internal tautosyllabic sC clusters are actually stem-initial, (9b). If stem-initial corresponds to PWd-initial, this restriction on sC distribution can be captured through (5b).

(9) German
a. [ʃpɪnə]PWd ‘spider’, [ʃteːən]PWd ‘to stand’
b. [bə[ʃteːən]PWd]PWd ‘to insist’, [gə[ʃteːən]PWd]PWd ‘to confess’; *[CV.ʃCV]PWd

Hall (1992) argues against (5b) for German on grounds that it incorrectly predicts aspiration in s + stop clusters, as the stop is syllable-initial in this representation. However, Iverson and Salmons (1995), following Kim (1970), offer an explanation for the absence of aspiration in s + stop that holds independently of how s is organized: because s is voiceless, the peak of glottal width that characterizes aspiration is internal to s, not the following stop. Thus, we do not see the absence of aspiration in s + stop as reason to reject (5b).

In contrast to German, sC clusters in Dutch and English have a wider distribution, requiring (5c). Both languages contain monomorphemic examples where the rhyme preceding sC appears unable to accommodate s (e.g. Dutch [ɛkstər] ‘magpie’, English [ɛkstrə] extra (van der Hulst 1984; Levin 1985)). (5c), however, freely permits violations of the Peripherality Condition. Accordingly, before we definitively conclude that it is required for morpheme-internal sC, we must examine the following alternative: PWd-initial sC clusters involve appendices organized as in (5b); in word-medial clusters, s is a coda. If this analysis could be supported, we could dispense with (5c).

To show that (5c) is truly needed, we examine word-medial sC clusters in English in detail. Harris (1994) discusses the constraints governing three-position rhymes shaped VVC in this language. As (10a) reveals, coda sonorants in these superheavy syllables are confined to coronals which share place with the following onset (*[ʃowlbər], *[mawmpən]). PWd-internal VCC rhymes are not considered by Harris (they are not well-formed in Government Phonology, the framework in which he works). (10b) reveals that the onset is similarly constrained to coronal and the preceding consonants must be homorganic nasal + stop (*[vɪltnər], *[dʒʌlkʃən]).

(10) English
a. VVC rhymes: [ʃowldər] shoulder, [mawntən] mountain, [kawnsəl] council
b. VCC rhymes: [æntlər] antler, [vɪntnər] vintner, [dʒʌŋ(k)ʃən] junction


With these constraints in mind, we turn to cases where the consonant following VV/VC is s. Parallel to (10a), (11a) shows that only coronal s is permitted after VV (*[iːftər]) and the following consonant must be coronal (*[iːspər]).12 This leads Harris to conclude that s in (11a) is syllabified as the coda of a three-position rhyme. The forms in (11b) are similarly parallel to those in (10b), seemingly leading to the same conclusion.

(11)
a. VVs rhymes: [iːstər] Easter, [ɔjstər] oyster
b. VCs rhymes: [mɑnstər] monster, [mɪnstrəl] minstrel

Problems arise, however, in (12). (12a) shows that VCs does not always respect the constraints holding of (11b): the consonant preceding s is not restricted to place-sharing sonorants. (12b) reveals that the onset following s can be other than coronal, in contrast to (11a) and (11b). One could object on grounds that, extra aside, the words in (12) involve Latinate prefixes. However, these prefixes are not synchronically productive: they fall within the stress domain, and must therefore be contained inside the lower PWd ([ˈɑbstəkəl]PWd, *[ˈɑb[stəkəl]PWd]PWd).

(12) Appendix s
a. Non-coronal codas: [ˈekstrə] extra, [ˈɑbstəkəl] obstacle
b. Non-coronal onsets: [ˌekspəˈzɪʃən] exposition, [ˈkɑnskrɪpt] conscript

If the forms in (12) truly involve appendixal s, we expect to find s occurring after three-position rhymes of the shape in (10). (13) shows that such words are well-formed, albeit rare. Bolster and holster are monomorphemic, and while -ster in upholster is historically a class 2 suffix (seventeenth-century uphold-ster ‘small furniture dealer’), this analysis no longer holds, as revealed by the fact that upholster is now a verb and -y can attach outside (upholstery).13

(13) VVC rhymes followed by s
[bowlstər] bolster, [howlstər] holster, [ʌp(h)owlstər] upholster

In short, while some instances of medial sC in English may involve s as coda, appendixal s is required to capture the data in (12) and (13). This thereby supports the postulation of (5c), where s is licensed by the syllable. Although languages like English appear to require (5c), this analysis cannot straightforwardly capture the fact that these same languages syllabify s as a coda 12

12 Neither constraint holds in dialects with lengthened [ɑː]/[æː] in e.g. after, basket (Harris 1994).
13 Hayes (2009: 210–211) considers the vowel in such words to be monophthongized (so presumably short) in some dialects. However, this does not hold of all dialects; witness, for example, RP [bəwlstə].


after short stressed vowels. If the appendix representation is available word-medially, why do native speakers judge sC as heterosyllabic (ˈpes.ter) rather than as appendix + onset (ˈpe.ster)? It cannot be due to sonority profile, as will be seen shortly for Dutch. Heterosyllabicity of medial sC is handled more elegantly in Kaye’s (1992) Government Phonology approach to sC clusters, which we consider now.

Kaye (1992) proposes that sC clusters are syllabified as coda + onset sequences, shown earlier in (5d).14 He provides support from Italian, Ancient Greek, European Portuguese, and British English; see also Brockhaus (1999) on German and Cyran and Gussmann (1999) on Polish. We consider first Italian, where the coda + onset pattern observed for ˈpester-type words is illustrated more concretely. In Italian, rhymes of stressed syllables must branch (Chierchia 1986). When a stressed syllable lacks a coda, the vowel lengthens; see (14a) and (14b). (14c) shows that sC clusters do not trigger lengthening as do branching onsets; instead, they pattern with coda + onset sequences (14d), revealing that medial s is a coda.

(14) Medial sC in Italian
a. [ˈfaːto] ‘fate’
b. [ˈkaːpra] ‘goat’
c. [ˈpasta] ‘pasta’
d. [ˈparko] ‘park’

Turning to word-initial position, Kaye proposes that s in this position is also a coda; the difference between the initial and medial environments is that, in the former, s is the coda of an empty-headed syllable. Word-initial coda s follows from the Uniformity Principle in Government Phonology, which requires syllabification to be constant for a given string of segments, within and across languages. Kaye provides empirical support for Uniformity from Italian masculine definite article allomorphy and raddoppiamento sintattico. In the former, vowel- and sC-initial words pattern together, in contrast to words beginning with (branching) onsets; compare (15a) and (15b) with (15c). Since the representation for s in an sC cluster in (5d) includes a preceding nucleus, there is a structural parallel with vowel-initial words.

(15) Masculine definite article allomorphy (Davis 1990)
a. l’est /lo est/ ‘the east’
b. lo studente ‘the student’
c. il burro ‘the butter’, il clima ‘the climate’

In raddoppiamento sintattico, the first consonant in an onset geminates when the preceding word ends in a stressed vowel, while the first consonant in an sC cluster resists gemination; see (16). The pattern in (16b) follows directly from the view that s is a coda: as coda + onset, sC already has precisely the structure that holds of geminates.

14 A precursor to this appears in Vennemann (1988). Vennemann proposes that initial s is “quasi-nuclear” in some languages, a type of degenerate syllable. He argues that this analysis of Latin s explains its development into a regular Vs syllable in some Romance languages, notably Spanish.

(16) Raddoppiamento sintattico (Chierchia 1986)
a. paltò pulito [palˈtoppuˈlito] ‘clean coat’
   città triste [tʃitˈtatˈtriste] ‘sad city’
b. città straniera [tʃitˈtastraˈniera], *[tʃitˈtasstraˈniera] ‘foreign city’

Although it is evident from (15) and (16) that sC clusters in Italian cannot be analyzed in the same fashion as branching onsets, the proposal that they contain an initial appendix can also handle these data (see Chierchia 1986; Davis 1990). Of the cases Kaye considers, the construction that poses a particular challenge for appendixal s is European Portuguese vowel nasalization. We turn to this case now.

In European Portuguese, nasal consonants cannot close syllables. While /n/ is realized intact before vowel-initial bases (17a), before onset-initial bases, nasality surfaces on the preceding vowel (17b), (17c). Interestingly, sC-initial bases pattern as vowel-initial (17d).

(17) European Portuguese
a. [in]admissível ‘inadmissible’
b. [ĩ]pureza ‘impurity’, [ĩ]satisfeito ‘dissatisfied’
c. [ĩ]tratável ‘unsociable’
d. [inʃk]apável ‘inescapable’

(17d) can be straightforwardly expressed under Kaye’s view that sC clusters are coda + onset, because, for independent reasons, all syllables in Government Phonology contain an onset constituent. Consider the representations below.15 In (18a), /n/ associates to the onset of the first syllable in the base. In (18b), this position is occupied, so nasality is preserved on the preceding vowel. The right result obtains in (18c) precisely because the syllable containing s includes an empty onset.

(18) Coda analysis
a. [O Ø][R [N i]] + [O n][R [N a]] … : /n/ associates to the empty onset of the vowel-initial base
b. [O Ø][R [N ĩ]] + [O p][R [N u]] … : the base-initial onset is filled, so nasality surfaces on the preceding vowel
c. [O Ø][R [N i]] + [O n][R [N Ø] ʃ][O k][R [N a]] … : the empty-headed syllable containing coda ʃ provides an empty onset, which hosts /n/

15 No representations are provided by Kaye. (18) reflects my best guess (minus X-slots), based on his discussion.


An appendix analysis of s, it seems, cannot formally capture (17d). See (19c), where bases with initial sC are incorrectly predicted to pattern with onset-initial bases because there is no empty constituent to host /n/.

(19) Appendix analysis
a. [σ [R [N i]]] + [σ [O n][R [N a]]] … : /n/ associates to the base’s empty onset
b. [σ [R [N ĩ]]] + [σ [O p][R [N u]]] … : nasality surfaces on the preceding vowel
c. *[σ [R [N ĩ]]] + [σ ʃ [O k][R [N a]]] … : s is attached directly to σ, so no empty onset is available to host /n/

Kaye’s paper compares the coda + onset analysis of sC clusters to the alternative that they form branching onsets; the option that s is analyzed as an appendix is not discussed. We have seen that appendixal s cannot straightforwardly capture European Portuguese. It also goes against the Uniformity Principle: Italian s is an appendix in straˈniera but a coda in ˈpasta; this would likely be considered a weakness by proponents of Government Phonology. However, there are contexts where word-internal s maintains its appendix status, in contrast to the pattern observed for Italian; sC can follow a rhyme that is already full, so s cannot be accommodated as an ordinary coda. We have already observed this for English, but we have also seen that English permits word-internal three-position rhymes under limited circumstances and most of the problematic sC data are in words that historically involve prefixes.

To ensure that there is nothing unusual about English concerning the distribution of sC clusters, let us turn to Acoma (Miller 1965). Two-position rhymes in Acoma are limited to VV and seemingly Vs (loans aside); see (20a) and (20b). If word-internal s were always a regular coda, as it appears to be in (20b), we would expect it to be restricted to occurring after short vowels, as in Italian. (20c) reveals that this is not the case.

(20) Acoma
a. [spúuná] ‘pottery’, [jaʔái] ‘sand’
b. [sustʼá] ‘I took water’, [ʔéská] ‘rawhide’
c. [ʔúuscúutsʰi] ‘drum’, [kuʔiscʰaṣa] ‘knot’, [wʼiʔispʼi] ‘cigarette’

It appears that s is an appendix to the syllable in Acoma, rather than a regular coda. Before we accept this, however, an alternative analysis must be considered, that sC clusters are not actually clusters in Acoma but are, instead, adjacent onsets interrupted by an empty nucleus. This analysis would, of course, allow s to occur after long vowels, and it could be motivated by the observation that the plain–aspirated contrast is maintained after s ([ʔúu.sØ.cúu.tsʰi] vs. [ku.ʔi.sØ.cʰa.ṣa]). Evidence from the right word edge, however, reveals a potential challenge: word-final consonants, which are always onsets of empty-headed syllables in Government Phonology (Kaye 1990), are not permitted in Acoma: *[… CV.sØ].

In sum, we have critiqued four representations for sC clusters. While we can likely dispense with the extraprosodic representation for s in (5a) in favor of the alternatives in (5b)–(5d), choosing between these alternatives is no easy task. On the one hand, there are languages like Acoma and English, which are most compatible with the position that s is an appendix to the syllable; on the other, there are languages like European Portuguese and to a lesser extent Italian, which seem to require a coda analysis for s. When German is compared with English and Dutch, it becomes clear that languages also differ in where sC can occur – stem-initially only or also morpheme-internally – revealing that appendixal s must be licensed by the PWd in the former case and by the syllable in the latter.
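Before moving on, it may help to restate this parametric picture programmatically. The sketch below is an illustrative assumption (the parameter names and the two-valued licenser are mine, not the chapter's formalism); it encodes only the summary just given: a PWd licenser restricts appendixal s to stem/PWd-initial position (German), while a syllable licenser also admits it morpheme-internally (English, Dutch).

    # A sketch (Python) of the licensing parameter for appendixal s,
    # encoding the summary above; names and values are assumptions.

    LICENSER = {"German": "PWd", "English": "syllable", "Dutch": "syllable"}

    def sC_allowed(language, position):
        """position: 'PWd-initial' or 'word-internal'."""
        if LICENSER[language] == "PWd":
            return position == "PWd-initial"   # (5b): only at a PWd left edge
        return True                            # (5c): syllables recur word-internally

    for lang in ["German", "English"]:
        for pos in ["PWd-initial", "word-internal"]:
            print(lang, pos, sC_allowed(lang, pos))

On this encoding, German [bəʃteːən]-type words are unproblematic because the sC cluster is PWd-initial inside the derived word, whereas English extra requires the syllable-level licenser.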

5 Are all sC clusters represented in the same manner in a given language?

Thus far, we have treated all sC clusters as a single class. Given that these clusters fall into two groups concerning their sonority profile, several researchers have proposed different representations for each group within a given language. One body of work considers rising sonority sC clusters to be structured as branching onsets, while s + obstruent clusters are appendix-initial (see e.g. Hall 1992 with data from German; Fikkert 1994 and Booij 1995 from Dutch; Gierut 1999 from English in phonologically delayed children). Another body of work motivates a different analysis for s + stop clusters: they are not actually clusters but instead form complex segments (see for example van de Weijer 1996, drawing on data from several languages; Fudge 1969, Ewen 1982, Fujimura and Lovins 1978, and Selkirk 1982 from English; Broselow 1983 from L2 English; Wiese 1996 from German; Barlow 1997 from English in phonologically delayed children). Since all of this research draws a boundary between rising sonority clusters and s + stop clusters, much of the data effectively shows evidence for either position.

The section on cluster phonotactics revealed that, when place of articulation is examined, end-state grammars challenge the view that s + sonorant clusters form branching onsets. Evidence from acquisition (chapter 101: the interpretation of phonological patterns in first language acquisition), however, suggests that some learners disregard place, relying solely on sonority in their analysis of left-edge clusters: all rising sonority clusters – both obstruent + sonorant and s + sonorant – pattern together. Consider the Dutch children in Fikkert’s (1994) study: all follow the same developmental path for fricative + liquid and s + lateral clusters. The stages through which learners pass are exemplified in Table 38.1 with data from Jarmo.

Table 38.1 Jarmo’s cluster development

Stage  Pattern                Examples                                  Age
1      stop                   /ˈvlɪndər/ → [ˈkɪnr] ‘butterfly’          2;2.6
                              /ˈslaːpə(n)/ → [ˈtaːpə] ‘to sleep’        2;0.28
2      liquid                 /ˈvlɪndər/ → [ˈlɪŋr] ‘butterfly’          2;2.27
                              /ˈslaːpə(n)/ → [ˈlaːpə] ‘to sleep’        2;3.9
3      fricative              /ˈxleiˌbaːn/ → [ˈxeixaːn] ‘slide’         2;4.1
                              /ˈslaːpə(n)/ → [ˈsaːpə] ‘to sleep’        2;3.9
4      stop + liquid          (skipped by Jarmo)
5      fricative/s + liquid   /ˈflɛsjə/ → [ˈslɛsjə] ‘bottle (dim)’      2;4.1
                              /slɑk/ → [flɑk]~[slɑk] ‘snail’            2;4.1

Further, all children in Fikkert’s study master s + stop clusters at a different point in time from rising sonority clusters. As all rising sonority clusters pattern together in Dutch acquisition, in contrast to s + obstruent, Fikkert concludes that s + sonorant clusters are represented in the same manner as branching onsets. For her, s + obstruent clusters involve s licensed by the PWd, as in (5b) above.

While sonority plays a decisive role in Fikkert’s data, this is not the case for all children. Indeed, we observed in (6) that when only one member of a cluster is produced by Annalena and Amahl, it is the cluster head that survives, regardless of its relative sonority. To show that this pattern extends past the deletion stage, consider the developmental path for Amahl in Table 38.2.

Table 38.2 Amahl’s cluster development

Stage   obstruent + liquid        /sl/               /sm sn/            /sp sk/            /st/               Age
1–8     reduction to head         reduction to head  reduction to head  reduction to head  reduction to head  2.60–2.175
13–14   branching onset acquired                                                                              2.233–2.256
15–19                             fusion             fusion             vacuous fusion     vacuous fusion     2.261–2.333
20–22                                                                                      fusion             2.345–3.38
24                                appendix acquired                                                           3.78–3.96
25                                                   appendix acquired                                        3.104–3.128
26–29                                                                   appendix acquired  appendix acquired  3.133–3.355

All clusters reduce to the head through stage 8. Branching onsets are acquired first, emerging at stage 9 and being fully mastered at stage 13. sC clusters are reduced to the head until stage 15, at which point the two consonants undergo fusion. Fusion is overt for s + sonorant (e.g. /sn/ → [n]), but not initially for s + stop since both consonants are voiceless (e.g. /st/ → [t]). At stage 20, when stridency is acquired, fusion becomes overt for /st/ (/st/ → [ts~s~ʒ]). Importantly, at this stage, no target s + sonorant clusters are realized as [s]; thus, earlier reduction to the head cannot have been due to the unavailability of [s] in Amahl’s productions. Appendices are not acquired until 210 days later than branching onsets, first for /sl/. The remaining sC clusters are then acquired over a period of 55 days.

The developmental profiles for Jarmo and Amahl are completely different for s + sonorant clusters. It truly appears that these clusters are analyzed as branching onsets in Jarmo’s grammar, and with an initial appendix/coda in Amahl’s grammar. We briefly consider the consequences of this. In the section on phonotactics we observed that place identity and C2 place profile suggest that, in end-state Dutch, all sC clusters are represented differently from branching onsets. If learners like Jarmo initially only consider sonority in assigning clusters to categories, then some unlearning must take place to arrive at the target representations. These children must discover that: (i) place identity is respected in branching onsets (*/tl/), but not in s + sonorant clusters (/sl/); (ii) s + sonorant is represented in the same fashion as s + obstruent. The implication is that, prior to this, sonority pattern learners do not attend to place identity. Fikkert provides some data consistent with this (e.g. Jarmo’s /ˈdrɪŋkə(n)/ → [ˈtlɪŋkɑ] ‘to drink’ (2;4.1)).

Positive evidence for (i) and (ii) comes from medial clusters following stressed vowels: as (21) shows, /tl/, /sl/, and /st/ are syllabified as coda + onset, while /tr/ forms a branching onset (Trommelen 1984; Kager and Zonneveld 1986).16 (The initial vowel in (21d) is not underlyingly long, which could have prevented heterosyllabification of medial /tr/; it is lengthened because rhymes are minimally bipositional in Dutch.)

(21) Dutch
a. [ˈɑt.lɑs] ‘atlas’
b. [ˈɔs.loː] ‘Oslo’
c. [ˈpɑs.taː] ‘paste’
d. [ˈmaː.trɪks] ‘matrix’

Although data such as these reveal that s + sonorant clusters do not pattern as branching onsets, words of the profile in (21a) and (21b) are largely restricted to borrowings and proper names (van der Torre 2003) and are likely infrequent in child-directed speech (Fikkert 1994). Thus, the analysis that s + sonorant clusters are not actually branching onsets may be unlearnable under the scenario that children begin assigning clusters to classes based solely on sonority. Does this mean that adult Dutch is a language where s + sonorant clusters are analyzed as branching onsets? This is Fikkert’s (1994) position, but it leaves unexplained the 16

16 Thanks to Janet Grijzenhout and Stephanie Schreven for help with the phonetic detail in (21).


differences in place profile of true branching onsets and s + sonorant clusters, as well as the syllabification of s + sonorant with s + obstruent in word-medial position.

We have just observed that some researchers consider sonority decisive in assigning clusters to categories with the result that rising sonority s-initial clusters are analyzed as branching onsets. Consequently, it is s + obstruent that is singled out as formally different. Another approach to the different behavior of s + stop has been to treat these strings as complex segments. This view is autosegmentally expressed in (22).

(22) s + stop as complex segment: [X s p], a single skeletal slot X dominating both s and p

The Representation of sC Clusters (23)

18

Gothic reduplication a.

haita hwopa b. fraisa c. sle(pan d. skaida

‘I am called’ ‘I boast’ ‘I try’ ‘to sleep’ ‘I sever’

hai-hait hwai-hwop fai-frais sai-sle(p skai-skaih

‘I ‘I ‘I ‘I ‘I

was called’ boasted’ tried’ slept’ severed’

The pattern in (23d) is entirely as expected if s + stop forms a complex segment; indeed, accounting for it through an appendix/coda + onset representation is challenging at best. Somehow, the analysis would have to copy the segments up to and including the cluster head. We turn now to examine some challenges for the complex segment approach to s + stop. Although we have provided distributional evidence for this proposal, another look reveals problems on this front. Van der Hulst (1984) rejects the analysis for Dutch, on the grounds that s + stop does not have the same distribution as singleton obstruents. Putative complex segments can follow a vowel but not a consonant, unlike non-complex obstruents: [sesp] ‘wasp’ but *[selsp], cf. [selp] ‘lion cub’. The same problem holds for English: [wAsp] wasp, [wArp] warp, *[wArsp]. If s + stop forms a two-consonant string, this observation follows straightforwardly. Consider next patterns of epenthesis in cluster repair in L2 acquisition and loanword adaptation (chapter 95: loanword phonology). (24) presents four patterns of repair for learners/borrowers whose native grammars lack both branching onsets and sC clusters (V = epenthetic vowel). Although there are languages where the position of the epenthetic vowel is constant (24a) and (24b), in languages where both prothesis and anaptyxis are observed, two general patterns are found (24c) and (24d): s + stop is never interrupted, obstruent + sonorant always is, and s + sonorant is variably interrupted.17 (24)

Patterns of cluster repair a. b. c. d.

Japanese (Lovins 1974) Iraqi Arabic (Broselow 1983) Egyptian Arabic (Broselow 1983) Farsi (Karimi 1987)

s-V-stop V-s-stop V-s-stop V-s-stop

obstr-V-son V-obstr-son obstr-V-son obstr-V-son

s-V-son V-s-son s-V-son V-s-son

Broselow (1983) proposes that the reluctance of L2 learners to epenthesize into s + stop motivates their analysis as complex segments (see also van de Weijer 1996). While this analysis helps to explain the patterns in (24b)–(24d), it cannot straightforwardly account for the Japanese pattern. Clearly, absence of strings of the shape VsTV (T = stop) in Japanese makes prothesis an impossible repair. Given this, under the view that s + stop forms a single segment, the expected pattern for Japanese is simplification of the complex segment, as in van de Weijer’s (1996) analysis of Sanskrit reduplication: C1 is normally copied ([snih]–[si-ṣnih] ‘to be sticky’ (root-perf), [dru]–[du-druv] ‘to run’), but complex segments are reduced to their head (the stop): [skand]–[ka-ˈskand-a], *[ska-ˈskand-a] ‘to leap’ (see chapter 119: reduplication in sanskrit).

17 As Fleischhacker (2001) shows, more divisions among s + sonorant are actually observed. We return to this below.
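The typology in (24) is regular enough to state as a lookup. The sketch below is an illustrative assumption (the table keys and the crude cluster classifier are mine); it simply returns the repair shape each language in (24) assigns to a given cluster type.

    # A sketch (Python) of the repair typology in (24); encoding assumed.

    REPAIRS = {
        # (s + stop, obstruent + sonorant, s + sonorant)
        "Japanese":        ("s-V-stop", "obstr-V-son", "s-V-son"),
        "Iraqi Arabic":    ("V-s-stop", "V-obstr-son", "V-s-son"),
        "Egyptian Arabic": ("V-s-stop", "obstr-V-son", "s-V-son"),
        "Farsi":           ("V-s-stop", "obstr-V-son", "V-s-son"),
    }

    def cluster_type(cluster):
        """Crude classifier: 0 = s + stop, 1 = obstruent + sonorant, 2 = s + sonorant."""
        sonorants = set("mnlrwj")
        if cluster[0] == "s":
            return 2 if cluster[1] in sonorants else 0
        return 1

    def repair(language, cluster):
        return REPAIRS[language][cluster_type(cluster)]

    print(repair("Egyptian Arabic", "st"))  # V-s-stop: prothesis leaves sC intact
    print(repair("Egyptian Arabic", "sl"))  # s-V-son: anaptyxis splits the cluster

As the text notes, the one cell such a table cannot explain on the complex-segment view is Japanese s-V-stop, where prothesis is independently ruled out.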


Finally, the single segment hypothesis for s + stop is challenged by Byrd (1994) on articulatory grounds. Byrd experimentally examines /sk/ strings in English in three contexts: word-initially, cross-word, and word-finally. She observes that #sk involves less overlap than s#k (as well as sk#), and sk# less overlap than gd#. If s + stop forms a single segment, we would expect to find a higher degree of overlap than is observed in strings of consonants that clearly do not form single segments (s#k and gd#).

To sum up, the goal of this section was to examine whether all sC clusters pattern as a class, in contrast to branching onsets. On the one hand, we observed that s + sonorant sometimes patterns with obstruent + sonorant, suggesting that they may have the same representation. Supporting evidence was provided from the developmental path for left-edge clusters in Dutch. On the other hand, we examined evidence in favor of the position that s + stop clusters form complex segments. The strongest evidence for this comes from Gothic reduplication. However, both positions were challenged as well, the analysis for child Dutch by the observation that end-state Dutch treats s + sonorant together with s + obstruent. Perhaps the most damaging evidence against the complex segment analysis is that the articulatory evidence available does not support it.

6 Perceptual considerations

Throughout the chapter, it has been assumed that a structural difference holds between true branching onsets and (at least some) sC clusters. We have observed, however, that languages do not always draw a clear line between these two cluster types: s + sonorant sometimes patterns with s + stop and at other times with obstruent + sonorant. When we consider Fleischhacker’s (2001) survey on epenthesis in L2 and loanword phonology, the problem becomes even more acute: some languages draw the boundary between cluster type internal to the s + sonorant class, as shown in (25), in contrast to what was observed in (24).18

(25)  more prothesis                                        more anaptyxis
      s + stop < s + m < s + n < s + l < s + r < s + glide < stop + son
       (Wolof)   Egyptian  Hindi   Kazakh  Farsi2  (Catalan)
                 Arabic

Each language is placed at the division it draws between clusters repaired by prothesis (to its left) and clusters repaired by anaptyxis (to its right).

Results like these lead Fleischhacker to abandon a structural approach to cluster representation, putting her in the company of others who advocate eliminating the syllable (e.g. Steriade 1999).19 She argues instead for a perceptually motivated 18

18 Farsi2 refers to the data collected by Fleischhacker. Although Karimi (1987) reports that all s + sonorant clusters pattern together, as per (24d), she provides no s + rhotic or s + glide examples. The data collected by Fleischhacker show a different pattern for these clusters, as revealed in (25). Since this may reflect a dialect difference, I have labeled the language from which Fleischhacker collected data Farsi2. In addition, Catalan is in parentheses because, although it draws a division between s + rhotic and s + glide, only prothesis is attested; s + glide and stop + sonorant do not undergo epenthesis.
19 Steriade’s arguments include the absence of clear evidence for word-internal syllable boundaries. Some of the evidence concerns the syllabification of s + stop clusters in English when the following vowel is stressed (e.g. mysterious). Speakers’ judgments vary on where to draw the boundary in such words, because neither parse is sanctioned at word edge: V1.stV2 is problematic because V1 is illicit word-finally; V1s.tV2 is problematic because [t] is not aspirated, as it would be word-initially.


approach to cluster behavior. The epenthesis site, in particular, is chosen to maximize perceptual similarity between the target (non-epenthesized) form and the output. In view of this, we consider in this section whether the differences that hold between sC clusters and true branching onsets can be explained by perceptual considerations alone; this, of course, would challenge the claim that a structural approach to cluster behavior is necessary.

We do not have space to examine Fleischhacker’s proposal in detail, but the predictions she motivates are as follows: (i) anaptyxis is preferred to prothesis in stop + sonorant; (ii) prothesis is preferred to anaptyxis in s + stop; (iii) among s + sonorant, more anaptyxis is expected as C2 increases in sonority; and (iv) more anaptyxis is expected in stop + sonorant than in fricative + sonorant. Concerning (iv), note that Fleischhacker’s account does not distinguish s from other fricatives; that is, no explanation is provided for the observation that fricatives other than s pattern with stops in preferring anaptyxis to prothesis ((24) above). We return to this shortly.

First, let us examine the role of perception in sC well-formedness in more detail (chapter 98: speech perception and phonology). As alluded to earlier, the acoustic properties of s, unlike other obstruents, enable it to appear in positions where it is not followed by a sonorant: strident fricatives have robust internal cues for both place and manner, ensuring their perceptibility in all contexts, even before stops (Wright 1996, 2004). Clearly, then, the view that segments are ordered to yield a rise in sonority toward the peak does not extend to s. Indeed, in spite of the sonority reversal, (strident) fricative + stop is superior to both stop + stop and stop + fricative,20 even though the latter two contain a sonority plateau and minimal rise respectively, because (strident) fricatives are less dependent on formant transitions for their identification than stops (Wright 1996, 2004).21

However, while the acoustic properties of s explain why appendices are so often limited to s on the one hand and why these segments can be followed by stops on the other, they cannot, as far as I can tell, explain cross-linguistic preferences on sC profile. Table 38.3 shows that sC clusters have a rather unusual distribution across languages when viewed from the perspective of perceptual robustness. We focus on word-initial position.

Table 38.3 sC cluster profiles across languages

               Spanish   French, Acoma   Greek   English   Dutch   German   Russian
s + stop       *         ✓               ✓       ✓         ✓       ✓        ✓
s + fricative  *         *               ✓       *         ✓       *        ✓
s + nasal      *         *               (*)     ✓         ✓       ✓        ✓
s + lateral    *         *               *       ✓         ✓       ✓        ✓
s + rhotic     *         *               *       *         (*)     ✓        ✓

20 Evidence that this observation is not restricted to strident fricative + stop comes from Greek: the stop + stop clusters in (3a) are often replaced by fricative + stop.
21 See Morelli (1999) for an alternative explanation of obstruent cluster well-formedness that appeals to markedness constraints on segment sequencing.

Since the perceptibility of all consonants in C2 position in an initial sC cluster will be partly compromised by the preceding s,


we would expect consonants that are most perceptible to be positioned after s. Masking should not be too severe in this context; as mentioned earlier, Byrd (1994) observes that #sk clusters involve less overlap than s#k and sk#. The problem may rather be one of duration: Byrd finds that /s/ is longer in #sk than in both s#k and sk#, while /k/ is shorter in #sk than in both s#k and sk#. If the relatively short duration of C2 can be generalized to other #sC clusters, we would expect segments with robust internal cues to be favored in this position. Liquids should be optimal, since they have clear formant structure throughout. Nasals should be favored over stops, since their manner (and to a lesser extent their place) properties are present in the nasal spectrum. Stops, which have weak internal cues, should be the least optimal.

What we observe in Table 38.3, by contrast, is that s + stop is favored. French and Acoma do not permit s + sonorant clusters at all (French has s + sonorant in loanwords), and depending on the status of marginal s + nasal clusters, Greek may fall into this category as well. Otherwise, it permits s + sonorant clusters of lower sonority than those of higher sonority. Although a larger typology of languages is required before firm conclusions can be drawn, Table 38.3 suggests that s + stop › s + nasal › s + lateral › s + rhotic (› = is more harmonic than).

The favored profile in sC clusters is thus the opposite of that observed for branching onsets. This is not unexpected on a structural account if all sC clusters are head-final, in contrast to branching onsets. In sC clusters, C2 is the onset head; thus it should respect the preferences holding of singleton onsets. Since obstruents are the optimal onsets (e.g. Clements 1990), a parallel should be observed between obstruents in C1 position in branching onsets and stops in C2 position in sC clusters (not fricatives more generally, because of the preceding s; see Wright 2004: 51).

While the C1C2 asymmetry in branching onsets vs. sC clusters follows from the status of s as an appendix, it is best captured, I suggest, under Kaye’s proposal that s is a coda. Recall from (14) and (21) that medial sC clusters in Italian and Dutch are heterosyllabic. If sC clusters are always syllabified as coda + onset clusters, then their profile should respect cross-linguistic preferences for optimal syllable contact. Syllable contact will favor C2 with lower sonority: Vs.TV › Vs.NV › Vs.lV › Vs.rV. As C2 increases in sonority, the cluster prefers to be syllabified as a branching onset, but if this option is never available for sC clusters, then higher-sonority sC clusters will be forbidden, regardless of their position in the word.

The profile in Table 38.3 closely parallels Fleischhacker’s typology in (25) for preferred epenthesis sites in sC clusters. Prothesis occurs more commonly when C2 has lower sonority. As the sonority of C2 increases, prothesis will result in poor syllable contact. Note as well that the proposed syllable contact account of sC well-formedness leads to a distinction between s + sonorant and fricative + sonorant, as only the latter can form branching onsets. Thus, the fact that fricative + sonorant patterns with stop + sonorant in epenthesis follows, in contrast to under Fleischhacker’s account (see (iv) above).

In sum, I contend that both perceptual and structural considerations must be factored into our understanding of cluster well-formedness.
While perceptual considerations can explain why appendices are so often limited to s and why s + stop is well-formed in spite of its sonority profile, it is the structural differences between sC clusters and branching onsets that explain the preference for sC profile on the sonority dimension as well as some observed differences in epenthesis site.
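The syllable contact account lends itself to a simple numerical restatement. In the sketch below, everything is an illustrative assumption: the scale extends note 6 by splitting the liquids (rhotics ranked above laterals, which the chapter's scale does not itself do), and contact quality is scored as the sonority drop from coda s to the following onset, higher being better.

    # A sketch (Python) of the syllable contact ranking Vs.TV › Vs.NV › Vs.lV › Vs.rV.
    # Scale values assumed, extending note 6; rhotics ranked above laterals.

    SONORITY = {"stop": 1, "fricative": 2, "nasal": 3, "lateral": 4, "rhotic": 4.5}

    def contact_score(coda_manner, onset_manner):
        """Sonority drop across the syllable boundary; larger is better contact."""
        return SONORITY[coda_manner] - SONORITY[onset_manner]

    for onset in ["stop", "nasal", "lateral", "rhotic"]:
        print(f"Vs.{onset}V: {contact_score('fricative', onset):+.1f}")

The scores fall out as +1.0 › -1.0 › -2.0 › -2.5, reproducing both the harmony scale for sC profiles read off Table 38.3 and the direction of Fleischhacker's epenthesis typology in (25).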

7 Conclusion

In this chapter, we have examined several alternative analyses for sC clusters. On the empirical front, we have seen that s + stop clusters reliably pattern differently from true branching onsets. Not surprisingly, then, the proposals we have examined for s + stop all share an important property: s + stop clusters are head-final, whether s is unaffiliated, an appendix organized by some prosodic constituent above the onset, the coda, or the first member of a complex segment. Branching onsets, by contrast, are head-initial. This difference in headedness helps to explain parallels on the place dimension between C2 in an s + stop cluster and C1 in a branching onset, as well as syllabification preferences in word-medial contexts. Beyond that, however, the details of the proposals differ, and, consequently, each proposal is both supported and challenged by the available evidence. There are languages like Acoma, English, and Dutch where s can appear medially after rhymes that are "full," thereby providing support for the analysis that s is linked to the syllable and potentially challenging the coda analysis. However, in some of these same languages, namely English and Dutch, as well as in languages like Italian, the observation that sC clusters are heterosyllabic after stressed vowels supports the coda analysis and thereby questions the proposal that s is licensed by the syllable or that s + stop forms a complex segment. The complex segment analysis, in turn, is supported by the reduplication pattern in Gothic, which both the appendix and coda analyses fail to capture elegantly.

At present, then, it seems that multiple representations for sC clusters may be required. If the number of parametric options is limited and there is robust evidence available for learners to determine the appropriate representation for the language being acquired, this is far from problematic. For example, the fact that sC clusters have a more limited distribution in some languages (German) than in others (English, Dutch) can be captured if licensing by the PWd represents the least marked option and therefore the starting point for learners. There will then be positive evidence available to signal to learners of some languages that sC clusters are licensed lower down, by the syllable. However, we have also seen that this type of scenario may not always work. If Dutch learners initially assume that s + sonorant clusters form branching onsets, the evidence available to undo this analysis in favor of one where all sC clusters pattern as a class is far from robust and may present a learnability challenge.

The problem with s + sonorant clusters more generally is that they are phonotactically ambiguous and, following from this, they pattern ambiguously across languages. While this may not be surprising, on the grounds that these clusters are both s-initial and rising in sonority, exactly how their ambiguous behavior should be formally expressed is far from clear. They appear to be analyzed as branching onsets in some languages (e.g. Jarmo's Dutch grammar) and as appendix/coda-initial in others (e.g. Amahl's English grammar), but in this particular case, the finding is surprising, in view of the otherwise high degree of similarity between the two target languages. The solution that languages employ different analyses for s + sonorant is far from optimal and may lead some researchers to abandon a structural approach to the syllable altogether in favor of a perceptually grounded account of segmental contact.


Indeed, the latter may find support in the observation that even within the class of s + sonorant, languages show different patterns of behavior; we have seen that the division between prothesis and anaptyxis can be drawn anywhere internal to this class. At the same time, however, a purely perceptually based account seems to be challenged by the finding that preferences for sC cluster profile are virtually the inverse of those observed for obstruent + sonorant clusters. While an appeal to syllable contact was made to capture both of these observations, the analysis follows most straightforwardly from the proposal that sC clusters are always syllabified as coda + onset strings. We have already seen that this proposal may be challenged by languages such as English, Dutch, and Acoma.

In spite of the quantity of research that has been undertaken on sC clusters, it is perhaps most evident that more needs to be done before the issue of their representation can be resolved (if ever). A sampling of questions at the two extremes includes the following. At one end of the spectrum, can a more detailed examination of perceptual factors capture differences in the behavior of fricative + sonorant and s + sonorant clusters, thereby further questioning the need for a structurally based approach to segmental contact and syllabification behavior? At the other end, if a structural account of behavior based on syllable contact proves fruitful to pursue, with judicious use of abstract representations, can the coda + onset analysis be motivated for all languages? I leave these and many other questions in between to future research.

ACKNOWLEDGMENTS

Many thanks to two anonymous reviewers and the editors for helpful comments. This work was supported by grants from SSHRC and FQRSC.

REFERENCES

Arvaniti, Amalia. 2007. Greek phonetics: The state of the art. Journal of Greek Linguistics 8. 97–208.
Barlow, Jessica. 1997. A constraint-based account of syllable onsets: Evidence from developing systems. Ph.D. dissertation, Indiana University.
Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press.
Borowsky, Toni. 1986. Topics in the lexical phonology of English. Ph.D. dissertation, University of Massachusetts, Amherst.
Brockhaus, Wiebke. 1999. The syllable in German: Exploring an alternative. In Harry van der Hulst & Nancy Ritter (eds.) The syllable: Views and facts, 169–218. Berlin: Mouton de Gruyter.
Broselow, Ellen. 1983. Nonobvious transfer: On predicting epenthesis errors. In Susan Gass & Larry Selinker (eds.) Language transfer in language learning, 269–280. Rowley, MA: Newbury House.
Byrd, Dani. 1994. Articulatory timing in English consonant sequences. Ph.D. dissertation, University of California, Los Angeles.
Cairns, Charles & Mark Feinstein. 1982. Markedness and the theory of syllable structure. Linguistic Inquiry 13. 193–225.
Chierchia, Gennaro. 1986. Length, syllabification and the phonological cycle in Italian. Journal of Italian Linguistics 8. 5–33.
Clements, G. N. 1990. The role of the sonority cycle in core syllabification. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 283–333. Cambridge: Cambridge University Press.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Cyran, Eugeniusz & Edmund Gussmann. 1999. Consonantal clusters and governing relations: Polish initial consonant sequences. In Harry van der Hulst & Nancy Ritter (eds.) The syllable: Views and facts, 219–247. Berlin & New York: Mouton de Gruyter.
Davis, Stuart. 1990. Italian onset structure and the distribution of il and lo. Linguistics 28. 43–55.
Drachman, Gaberell. 1990. A remark on Greek clusters. In Joan Mascaró & Marina Nespor (eds.) Grammar in progress: GLOW essays for Henk van Riemsdijk. Dordrecht: Foris.
Elsen, Hilke. 1991. Erstspracherwerb: Der Erwerb des deutschen Lautsystems. Wiesbaden: Deutscher Universitäts-Verlag.
Ewen, Colin J. 1982. The internal structure of complex segments. In van der Hulst & Smith (1982: Part II), 27–67.
Ewen, Colin J. & Bert Botma. 2009. Against rhymal adjuncts: The syllabic affiliation of English postvocalic consonants. In Kuniya Nasukawa & Phillip Backley (eds.) Strength relations in phonology, 221–250. Berlin & New York: Mouton de Gruyter.
Fikkert, Paula. 1994. On the acquisition of prosodic structure. Ph.D. dissertation, University of Leiden.
Fleischhacker, Heidi. 2001. Cluster-dependent epenthesis asymmetries. UCLA Working Papers in Linguistics 7: Papers in Phonology 5. 71–116.
Fudge, Erik C. 1969. Syllables. Journal of Linguistics 5. 253–286.
Fujimura, Osamu & Julie B. Lovins. 1978. Syllables as concatenative phonetic units. In Alan Bell & Joan B. Hooper (eds.) Syllables and segments, 107–120. Amsterdam: North-Holland.
Giegerich, Heinz J. 1992. English phonology: An introduction. Cambridge: Cambridge University Press.
Gierut, Judith. 1999. Syllable onsets: Clusters and adjuncts in acquisition. Journal of Speech, Language, and Hearing Research 42. 708–726.
Goad, Heather & Yvan Rose. 2004. Input elaboration, head faithfulness and evidence for representation in the acquisition of left-edge clusters in West Germanic. In René Kager, Joe Pater & Wim Zonneveld (eds.) Constraints in phonological acquisition, 109–157. Cambridge: Cambridge University Press.
Goad, Heather & Lydia White. 2006. Ultimate attainment in interlanguage grammars: A prosodic approach. Second Language Research 22. 243–268.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Green, Antony Dubach. 2003. Extrasyllabic consonants and onset well-formedness. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 238–253. Cambridge: Cambridge University Press.
Hall, T. A. 1992. Syllable structure and syllable-related processes in German. Tübingen: Niemeyer.
Harris, James W. 1983. Syllable structure and stress in Spanish: A nonlinear analysis. Cambridge, MA: MIT Press.
Harris, John. 1994. English sound structure. Oxford: Blackwell.
Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT.
Hayes, Bruce. 2009. Introductory phonology. Malden, MA & Oxford: Wiley-Blackwell.
Hulst, Harry van der. 1984. Syllable structure and stress in Dutch. Dordrecht: Foris.
Hulst, Harry van der & Norval Smith (eds.) 1982. The structure of phonological representations. 2 parts. Dordrecht: Foris.
Iverson, Gregory K. & Joseph C. Salmons. 1995. Aspiration and laryngeal representation in Germanic. Phonology 12. 369–396.


Joseph, Brian D. & Irene Philippaki-Warburton. 1987. Modern Greek. London: Croom Helm.
Kager, René & Wim Zonneveld. 1986. Schwa, syllables, and extrametricality in Dutch. The Linguistic Review 5. 197–221.
Karimi, Simin. 1987. Farsi speakers and the initial consonant cluster in English. In Georgette Ioup & Steven Weinberger (eds.) Interlanguage phonology: The acquisition of a second language sound system, 305–318. Cambridge, MA: Newbury House.
Kaye, Jonathan. 1990. "Coda" licensing. Phonology 7. 301–330.
Kaye, Jonathan. 1992. Do you believe in magic? The story of s+C sequences. SOAS Working Papers in Linguistics and Phonetics 2. 293–313. Reprinted 1996 in Henryk Kardela & Bogdan Szymanek (eds.) A Festschrift for Edmund Gussmann, 155–176. Lublin: University Press of the Catholic University of Lublin.
Kaye, Jonathan, Jean Lowenstamm & Jean-Roger Vergnaud. 1990. Constituent structure and government in phonology. Phonology 7. 193–231.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Kim, Chin-Wu. 1970. A theory of aspiration. Phonetica 21. 107–116.
Klepousniotou, Katerina. 1998. Onset clusters in Modern Greek. Unpublished ms., McGill University.
Levin, Juliette. 1985. A metrical theory of syllabicity. Ph.D. dissertation, MIT.
Lovins, Julie. 1974. Why loan phonology is natural phonology. Papers from the Annual Regional Meeting, Chicago Linguistic Society: Parasession on natural phonology, 240–250.
Miller, Wick R. 1965. Acoma grammar and texts. Berkeley & Los Angeles: University of California Press.
Morelli, Frida. 1999. The phonotactics and phonology of obstruent clusters in Optimality Theory. Ph.D. dissertation, University of Maryland at College Park.
Myers, Scott. 1987. Vowel shortening in English. Natural Language and Linguistic Theory 5. 485–518.
Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Piggott, Glyne L. 1991. Apocope and the licensing of empty-headed syllables. The Linguistic Review 8. 287–318.
Rice, Keren. 1992. On deriving sonority: A structural account of sonority relationships. Phonology 9. 61–99.
Selkirk, Elisabeth. 1982. The syllable. In van der Hulst & Smith (1982: Part II), 337–383.
Sigurd, Bengt. 1965. Phonotactic structures in Swedish. Lund: Uniskol.
Smith, Neil V. 1973. The acquisition of phonology: A case study. Cambridge: Cambridge University Press.
Steriade, Donca. 1982. Greek prosodies and the nature of syllabification. Ph.D. dissertation, MIT.
Steriade, Donca. 1999. Alternatives to syllable-based accounts of consonantal phonotactics. In Osamu Fujimura, Brian D. Joseph & Bohumil Palek (eds.) Item order in language and speech, 205–245. Prague: Karolinum Press.
Torre, Erik Jan van der. 2003. Dutch sonorants: The role of place of articulation in phonotactics. Ph.D. dissertation, University of Leiden.
Trommelen, Mieke. 1984. The syllable in Dutch: With special reference to diminutive formation. Dordrecht: Foris.
Tzakosta, Marina. 2009. Asymmetries in /s/ cluster production and their implications for language learning and language teaching. In Anastasios Tsangalidis (ed.) Selected Papers from the 18th International Symposium of Theoretical and Applied Linguistics, 365–373. Thessaloniki: Monochromia.
Tzakosta, Marina & Jeroen Vis. 2009. Phonological representations of consonant sequences: The case of affricates vs. "true" clusters. In Giorgos K. Giannakis, Mary Baltazani, Giorgos I. Xydopoulos & Anastasios Tsangalidis (eds.) Proceedings of the 8th International Conference on Greek Linguistics, 558–573. Ioannina: Department of Linguistics, University of Ioannina.
Vaux, Bert. 1998. The phonology of Armenian. Oxford: Clarendon Press.
Vaux, Bert. 2004. The appendix. Paper presented at the Symposium on Phonological Theory: Representations and Architecture, City University of New York.
Vennemann, Theo. 1982. Zur Silbenstruktur der deutschen Standardsprache. In Theo Vennemann (ed.) Silben, Segmente, Akzente, 261–305. Tübingen: Niemeyer.
Vennemann, Theo. 1988. Preference laws for syllable structure and the explanation of sound change: With special reference to German, Germanic, Italian, and Latin. Berlin: Mouton de Gruyter.
Vogt, Hans. 1942. The structure of the Norwegian monosyllable. Norsk Tidsskrift for Sprogvidenskap 12. 5–29.
Waals, Juliette. 1999. An experimental view of the Dutch syllable. Ph.D. dissertation, Utrecht University.
Weijer, Jeroen van de. 1996. Segmental structure and complex segments. Tübingen: Niemeyer.
Wiese, Richard. 1988. Silbische und lexikalische Phonologie: Studien zum Chinesischen und Deutschen. Tübingen: Niemeyer.
Wiese, Richard. 1996. The phonology of German. Oxford: Clarendon Press.
Wright, Richard. 1996. Consonant clusters and cue preservation in Tsou. Ph.D. dissertation, University of California, Los Angeles.
Wright, Richard. 2004. A review of perceptual cues and cue robustness. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 34–57. Cambridge: Cambridge University Press.

39 Stress: Phonotactic and Phonetic Evidence

Matthew Gordon

1 Introduction

Stress can be signaled through a number of different acoustic properties, including increased duration, greater intensity, and higher fundamental frequency. Stress may also affect segmental and syllable structure. Typically, stressed syllables trigger qualitative fortition and/or lengthening, whereas unstressed syllables are associated with lenition and/or shortening. To take an example of a stress-driven fortition process affecting syllable structure, Dutch (Booij 1995) inserts an intervocalic glottal stop as an onset to stressed vowels; epenthesis does not interrupt vowel sequences in which the second vowel is unstressed. We thus have pairs such as [ˈxa.ɔs] 'chaos' and [a.ˈʔɔr.ta] 'aorta', in which the presence of glottal stop is predictable from stress. American English provides well-described cases of lenition in unstressed syllables. For example, post-vocalic coronal stops weaken to taps before unstressed syllabic sounds, e.g. /ˈsɪti/ → [ˈsɪɾi] 'city'. Furthermore, most unstressed vowels reduce to schwa, e.g. [ˈkanˌtekst] context vs. [kənˈtekstʃuəl] contextual, or may delete in certain contexts, e.g. [ˈtmeɪɾoʊ] ~ [təˈmeɪɾoʊ] tomato, [ˈksændɹə] ~ [kəˈsændɹə] Cassandra.

While most segmental effects of metrical structure can be transparently linked to stress, there are others that are not predictable from stress, despite displaying properties typically associated with stress-induced alternations. For example, Nganasan, a Uralic language (Tereshchenko 1979; Helimski 1998; Vaysman 2009), has an alternation between strong and weak intervocalic consonants, termed "consonant gradation," whereby strong consonants, generally voiceless or prenasalized obstruents, alternate with weak consonants, typically voiced or not prenasalized. The appearance of strong and weak consonants is predictable from syllable count (1). In the onset of even-numbered non-initial syllables, the strong grade appears, while the weak grade appears in the onset of odd-numbered non-initial syllables. Long vowels interrupt the alternating syllable count and, as long as they are not word-initial, are always preceded by weak consonants.


(1) Nganasan consonant gradation (Vaysman 2009: 43)
ˌjamaˈða-tu 'his/her/its animal'
ˌŋoruˈmu-tu 'his/her/its copper'
suːˈðəː-ðu 'his/her/its lung'
ŋuˈhu-ðu 'his/her/its mitten'

As Vaysman shows, this pattern is explained if one assumes that words are parsed into binary feet starting at the left edge of words, with long vowels forming monosyllabic feet and degenerate feet allowed word-finally. Strong consonants occur foot-medially, and weak consonants occur in foot-initial syllables that are not also word-initial (2).

(2) Nganasan consonant gradation as a reflex of foot structure (Vaysman 2009: 43)
(ˌjama)(ˈða-tu) 'his/her/its animal'
(ˌŋoru)(ˈmu-tu) 'his/her/its copper'
(suː)(ˈðəː)-(ðu) 'his/her/its lung'
(ŋuˈhu)-(ðu) 'his/her/its mitten'
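The footing logic behind (2) is mechanical enough to state as a procedure. The following sketch (in Python; the syllable encoding, function names, and the treatment of medial stray syllables are illustrative assumptions, not Vaysman's formalism) parses syllables left to right into binary feet, lets long vowels project monosyllabic feet, and predicts the weak grade exactly in foot-initial, non-word-initial onsets:

# A minimal sketch of Nganasan footing and grade prediction, assuming
# syllables are given as (syllable, has_long_vowel) pairs.

def parse_feet(syllables):
    """Left-to-right binary footing; long vowels form monosyllabic feet;
    a stray syllable is parsed as a (degenerate) monosyllabic foot."""
    feet, i = [], 0
    while i < len(syllables):
        if syllables[i][1]:                      # long vowel: monosyllabic foot
            feet.append((i,))
            i += 1
        elif i + 1 < len(syllables) and not syllables[i + 1][1]:
            feet.append((i, i + 1))              # binary foot over short syllables
            i += 2
        else:                                    # stray syllable: degenerate foot
            feet.append((i,))
            i += 1
    return feet

def predict_grades(syllables):
    """Weak grade in foot-initial onsets that are not word-initial;
    strong grade foot-medially."""
    foot_initial = {foot[0] for foot in parse_feet(syllables)}
    return {i: 'weak' if i in foot_initial else 'strong'
            for i in range(1, len(syllables))}

# (ŋuˈhu)-(ðu) 'his/her/its mitten': the final degenerate foot makes the onset
# of syllable 3 weak (ð), while the onset of syllable 2 is foot-medial (strong).
print(parse_feet([('ŋu', False), ('hu', False), ('ðu', False)]))     # [(0, 1), (2,)]
print(predict_grades([('ŋu', False), ('hu', False), ('ðu', False)]))  # {1: 'strong', 2: 'weak'}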

The interesting feature of the Nganasan data is that stress does not always fall on syllables predicted to be stressed by the metrical structure diagnosed by consonant gradation. Primary stress in Nganasan falls on the penultimate syllable in all the words in (2), with a secondary stress occurring on initial syllables that are not adjacent to the primary stress. The monosyllabic foot in the last two words is thus completely unstressed, as is the first foot in the penultimate word.

This chapter provides a typological overview of the phonetic correlates of stress and the various types of effects of stress and metrical structure on segment-level features, exploring how these effects can offer insight into the nature of stress and metrical structure and their formal representation (see chapter 40: the foot, chapter 41: the representation of word stress and chapter 57: quantity-sensitivity for related issues). The structure of the chapter is as follows. §2 examines suprasegmental correlates of stress, including the phonetic parameters of duration, fundamental frequency, and intensity. §3 examines segmental alternations conditioned directly by the presence or absence of stress. §4 focuses on the role of foot structure in predicting fortition and lenition of vowels and consonants. §5 addresses segmental changes triggered by foot structures that conflict with metrical constituency as diagnosed by the stress system. §6 explores the role of history in shaping these mismatches between stress and the foot structure relevant for segmental alternations. Finally, §7 summarizes the chapter.

2 Suprasegmental phonetic correlates of stress

Fry (1955, 1958) pioneered research on the acoustic correlates of stress in his examination of the effect of stress in English on duration, intensity, and fundamental frequency. Focusing on the vowels in noun–verb minimal pairs such as ˈconvert (noun) vs. conˈvert (verb) and ˈimport (noun) vs. imˈport (verb), Fry found that stressed vowels were associated with greater duration, greater intensity, and higher fundamental frequency than their unstressed counterparts, with the last of these properties being most reliable as a cue to stress.


Since Fry's work, phoneticians have considerably broadened the typological database on stress correlates by examining other potential correlates of stress and by targeting a diverse set of languages for phonetic study. This research program has yielded many important results. For example, beyond the acoustic domain, stress is also associated with hyperarticulation of segments, which has ramifications for the segmental alternations discussed in §3. Furthermore, other potential acoustic correlates of stress have come to light, such as measurements of stress that are sensitive to spectral tilt (Sluijter and van Heuven 1996a, 1996b) or that integrate intensity over time (Lieberman 1960; M. Beckman 1986). Finally, typological study has shown that many languages are similar to English in using duration, intensity, and/or fundamental frequency to signal stress, e.g. Polish (Jassem et al. 1968), Tagalog (Gonzalez 1970), Mari (Baitschura 1976), Indonesian (Adisasmito-Smith and Cohn 1996), Pirahã (Everett 1998), Aleut (Taff et al. 2001), Chickasaw (Gordon 2004), Turkish (Levi 2005), and Kabardian (Gordon and Applebaum 2010).

It has also become increasingly clear that the phonetic study of stress is a complicated matter for several reasons. Languages differ in their relative reliance on different cues to stress, and the relevance of certain properties is functionally constrained in many languages by the extent to which potential stress correlates are used to mark phonemic contrasts other than stress. For example, lexical tone languages – e.g. Thai (Potisuk et al. 1996) and Pirahã (Everett 1998) – are less reliant on fundamental frequency to cue stress, and languages with phonemic length contrasts, e.g. Finnish, may have phonetically longer unstressed vowels than stressed vowels.

There are also languages in which potential phonetic markers of stress do not converge on a single syllable but rather are shared between multiple, often (though not always) adjacent syllables. For example, in Welsh (Williams 1985) an unstressed final syllable often has higher fundamental frequency and longer vowel duration than the stressed penultimate syllable in the same word. In such cases, lengthening of the consonant immediately following the stressed vowel seems to be the most reliable correlate of stress. A similar situation arises in Estonian, where the primary stressed initial syllable, if it contains a phonemic short vowel, will be shorter than the immediately following syllable and often have less intensity and lower fundamental frequency (Lehiste 1965; Eek 1975; Gordon 1995). Lengthening of the consonant in the onset of the stressed syllable serves as the most reliable cue to stress in Estonian (Lehiste 1966; Gordon 1997).

Hyman (1989) discusses cases in Bantu of different diagnostics leading to different conclusions about the location of stress. For example, certain Eastern and Southern Bantu languages display evidence for metrical prominence on the penultimate syllable, such as vowel lengthening, attraction of high tone, and even phonetic stress. However, these properties may conflict with other properties that suggest stress on another syllable, e.g. high tone on the antepenult in Zulu, even though the penult conditions vowel lengthening.
A similar pattern of high tone on the antepenult preceding a stressed penult is found in the Northern Iroquoian language Onondaga (Chafe 1970, 1977; Michelson 1988), the Polynesian language Tongan (Schütz 1985), and several Micronesian languages (Rehg 1993). In “split-cue” stress systems such as those described in this paragraph, determining the location of stress is potentially problematic.


The separation of high tone and stress ties in with another problematic issue in the phonetic realization of stress. Since Fry's work on stress correlates in English, it has become apparent that word-level stress must be distinguished from phrase-level intonational prominence, which is characteristically associated with a prominent fundamental frequency event or "pitch accent" (see also chapter 116: sentential prominence in english). Because a word uttered in isolation constitutes an entire phrase, stress in isolated words is confounded with phrasal pitch accent. Many studies intending to examine correlates of word-level stress but targeting words in isolation are thus more accurately regarded as studies of phrasal rather than word-level stress, although there is potentially overlap between the two levels of prominence in their acoustic manifestations. Sluijter and van Heuven (1996a, 1996b) are important acoustic studies showing that a frequency-dependent measure of intensity skewed toward higher frequencies, rather than either an overall measure of intensity or a measure of fundamental frequency, acts as a reliable predictor of word-level stress disambiguated from phrasal pitch accent in Dutch and English. Unfortunately, cross-linguistic phonetic research aimed at disentangling word-level stress from higher-level pitch accent is in its relative infancy.

A related issue involving the relationship between prominence at different prosodic levels is the interplay between phrasal tones falling at or near boundaries of intonational constituents larger than the word but smaller than the domain characterized by pitch accents. In certain languages, such as Korean (Jun 1993) and French (Jun and Fougeron 1995), the prominence traditionally regarded as stress has turned out to be attributable to fundamental frequency peaks assigned by a phrase-level intonational constituent termed an Accentual Phrase. For example, in French, the prominence associated with phrase-final syllables is due to a high tone aligned with the right edge of an Accentual Phrase (Jun and Fougeron 1995). It is conceivable that the stress in many, if not most, languages described as having a phrasal rather than a word-level distribution will turn out to be a tonal property attributed to the intonational system, as in French.

Another important issue lurking in the typology of stress correlates is the relationship between the phonetic manifestations of stress and the taxonomy of prominence systems (see also chapter 45: the representation of tone and chapter 42: pitch accent systems). The prototypical stress language possesses a number of characteristics that differentiate it from a tone language, one of these differences lying in the phonetic realization of prominence: primarily a fundamental frequency phenomenon in a tone language, but potentially distributed over multiple phonetic parameters – e.g. duration, intensity, and spectral tilt – in a stress language. M. Beckman (1986) demonstrates the phonetic validity of this distinction in her comparative study of English and Japanese, in which she shows that a measure of intensity integrated over time is an important correlate of prominence in English but not in Japanese, a language employing tone-based lexical contrasts.
In practice, however, a purely phonetic characterization of prominence is unlikely to yield a perfect dichotomy of languages into prosodic prototypes, especially for languages possessing some traits of a stress system but other traits of an – albeit limited – tone system; these languages are often regarded as having a “pitch accent” system (Hyman 2006).

3 The taxonomy of segmental correlates of metrical structure

Fortition and lenition effects associated with metrical structure may be broadly classified into three groups according to the property triggering these segmental alternations. The first type of segmental effect is well documented and involves stress (or lack thereof) directly as a trigger of fortition and/or lenition. A second type of segmental effect is predictable from constituent structure rather than stress, but the constituent structure motivating the segmental change accords with the metrical parse evinced by the stress system. A third type of segmental alternation, exemplified by Nganasan, is linked to metrical constituency, where the foot structure diagnosed by the segmental change is at odds with that suggested by the stress system. In the following sections we take a closer look at examples of each of these types of relationships between segmental properties and metrical structure.

3.1 Stress-driven segmental phenomena: Fortition and lenition

Many languages display segmental changes that are conditioned by stress or lack of stress. The typical pattern is for sounds to strengthen in stressed contexts and to weaken in unstressed positions. Fortition and lenition can target either consonants or vowels. In the case of consonants, unstressed position is usually associated with decreased resistance to coarticulatory effects and hypo-articulation (de Jong 1995), resulting in reduced constriction, either temporally or in magnitude.

Kirchner (2001), Lavoie (2001), Bye and de Lacy (2008), and Vaysman (2009) summarize a number of segmental alternations conditioned by stress, of which I mention a few here (see chapter 66: lenition for an overview of the typology of lenition). Post-vocalic coronal stops in American English reduce to flaps before an unstressed syllabic sound, and stops become aspirated in the onset of stressed syllables. In Kupia (Christmas and Christmas 1975), the stops /p ʈ/ have lenited variants in the onset of unstressed syllables: /p/ is realized as a fricative and /ʈ/ as a tap. West Tarangan (Nivens 1992) displays fortition in the onset of stressed syllables: /j/ affricates to [dʒ], and /w/ occlusivizes to [g], a change that also applies to word-initial consonants. In the development from Proto-Samurian to pre-Lezgian (Topuria 1974; Giginejshvili 1977; Yu 2004), voiced stops devoiced, a type of fortition, and geminated in the onset of stressed syllables.

Stress often also triggers lengthening of consonants. Thus, in Urubu Kaapor (Kakumasu 1986) and optionally in Tukang Besi (Donohue 1999), oral stops lengthen in the onset of primary stressed syllables. Lengthening is also employed as a strategy to beef up the rime of stressed syllables. Hayes (1995: 83) discusses several cases of lengthening in order to enhance the weight of stressed syllables. For example, in Munsee (Goddard 1979), a consonant geminates after metrically prominent short vowels, thereby converting the stressed syllable from light (CV) to heavy (CVC).

Vowels are also subject to fortition and lenition processes conditioned by stress. As in the case of consonantal alternations, vowels may be affected either qualitatively or quantitatively by the presence or absence of stress. Cross-linguistically, it is very common for vowels to lengthen in stressed syllables. Hayes (1995: 83) catalogs a number of cases of vowel lengthening under stress, where the lengthening effect is quite substantial, even potentially neutralizing an underlying phonemic length contrast.


Vowel lengthening can also be associated with qualitative differences. The short low central vowel /ɐ/ in Kabardian lengthens and lowers to [aː] under stress in the first syllable of disyllabic nouns and adjectives (Colarusso 1992). Revithiadou (2004) discusses the Livisi dialect of Greek, in which stressed high vowels lower to mid vowels, a shift which increases the sonority of the stressed vowel.

Conversely, vowels often shorten and qualitatively reduce in unstressed syllables. Crosswhite (2001, 2004) presents a typology of vowel reduction in which she broadly classifies reduction into two groups. One type involves centralization of unstressed vowels. For example, most vowels in English reduce to a schwa-like vowel when unstressed. This type of reduction is typically linked to articulatory factors: the shorter duration of unstressed vowels leaves relatively little time for the tongue to reach more peripheral targets in the articulatory space. Crosswhite also points out that this type of reduction has the advantage of reducing the intrinsic prominence of unstressed vowels. The other type of vowel reduction in Crosswhite's taxonomy involves vowels becoming more, rather than less, peripheral in unstressed syllables. This increase in peripherality can be manifested as either vowel raising or lowering, depending on the language. For example, the phonemic mid vowels /e o/ raise to the high vowels [i u] when unstressed in Luiseño (Munro and Benson 1973), and unstressed /ɛ ɔ/ raise to [e o] in standard Italian (Maiden 1995). In Belarusian, on the other hand, the unstressed mid vowels /e o/ lower to [a] (Kryvitskij and Podluzhni 1994). Crosswhite attributes this superficially less intuitive raising type of reduction to the goal of maximizing the perceptibility of contrasts in unstressed contexts, where they are perceptually more vulnerable. Either raising mid to high vowels or lowering mid to low vowels in unstressed syllables creates greater perceptual dispersion of different phonemic vowel qualities in the face of the shorter duration and reduced intensity associated with unstressed vowels.

The most extreme manifestation of vowel reduction is deletion (see chapter 68: deletion), which often targets unstressed vowels. It is pervasive in casual speech in English, even creating syllable structures that are otherwise illicit underlyingly, e.g. [ˈtmeɪɾoʊ] ~ [təˈmeɪɾoʊ] tomato, [ˈpteɪɾoʊ] ~ [pəˈteɪɾoʊ] potato. Vowel deletion in certain contexts has the advantage of increasing the weight of stressed syllables. For example, deletion of the medial vowel in [ˈpæɹˌlɛl] parallel creates a heavy initial stressed syllable, thereby increasing its prominence (see Gordon 2001 and Gouskova 2003 on the relationship between vowel deletion and syllable weight).

3.2 Harmony

The increased strength of stressed syllables can also manifest itself in harmony systems. Flemming (1994) identifies three types of harmony patterns that are sensitive to metrical structure. One pattern involves spreading of a feature from a stressed syllable to unstressed syllables. Eastern Mari (Vaysman 2009) provides an example of a language in which harmony propagates from a stressed vowel to a neighboring unstressed vowel. Stress in Eastern Mari falls on the rightmost full, i.e. non-schwa, vowel in monomorphemic roots (3a) and otherwise on the initial syllable in non-derived words containing only reduced vowels underlyingly (3b).


(We abstract away from cases where an underlying schwa alternates with a full vowel on the surface.)

(3) Eastern Mari stress (Vaysman 2009: 62–64)
a. koŋˈga 'oven'
ʃerˈge 'comb'
køgørˈtʃen 'dove'
ˈteŋgəz 'sea'
ˈolək 'meadow'
ˈjoŋələʃ 'mistake'
puˈʃaŋgə 'tree'
b. ˈvəɲər 'canvas'
ˈəʃkəl 'step'
ˈləvə 'butterfly'
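Stated procedurally, the Eastern Mari generalization reduces to a single right-to-left search. A minimal sketch (the full-vowel inventory and the Python encoding are illustrative assumptions):

# Sketch of Eastern Mari stress: stress the rightmost full (non-schwa) vowel;
# if all vowels are reduced, stress the initial syllable.
FULL_VOWELS = set('aeiouøy')     # illustrative full-vowel inventory; ə is reduced

def stress_index(vowels):
    """vowels: one vowel per syllable, left to right."""
    for i in reversed(range(len(vowels))):
        if vowels[i] in FULL_VOWELS:
            return i
    return 0                     # only reduced vowels: default initial stress

print(stress_index(['u', 'a', 'ə']))   # 1, cf. puˈʃaŋgə 'tree'
print(stress_index(['ə', 'ə']))        # 0, cf. ˈləvə 'butterfly'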

Rounding harmony is propagated rightward from the stressed vowel in a word. Thus, the 3rd person possessive suffix surfaces as [ʃe] when the stressed vowel is unrounded (4a) but as [ʃø] or [ʃo] when the stressed vowel is rounded (4b). (The backness of the rounded vowel is conditioned by a process of front–back harmony.)

(4) Eastern Mari rounding harmony (Vaysman 2009: 92)
a. ˈergə-ʃe 'his/her/its boy'
yˈremə-ʃe 'his/her/its street'
pykʃerˈme-ʃe 'his/her/its walnut tree'
b. ˈʃyrə-ʃø 'his/her/its soup'
kəlʲˈmo-ʃø 'his/her/its shovel'
ˈʃoʃə-ʃo 'his/her/its spring'

Another type of harmony that is sensitive to stress involves the propagation of a feature from an unstressed syllable up through a stressed syllable, which blocks further spreading of the harmonizing feature. Tudanca Spanish (Penny 1978) instantiates this type of harmony. In Tudanca, underlying final high vowels, which surface as more centralized than their non-final counterparts, induce centralization (in the front/back and/or height dimension, depending on the vowel) of preceding vowels up to and including the stressed vowel (5). (Centralized vowels are marked with the ¨ diacritic.) Stress is lexically governed and may fall on either the penultimate or antepenultimate syllable.

(5) Tudanca laxness harmony (Penny 1978: 54–55)
ˈpïntü 'male calf' (cf. ˈpinta 'female calf')
ˈsëkü 'dry (masc)' (cf. ˈseka 'dry (fem)')
ˈpörtïkü 'portico'
ˈpülpïtü 'pulpit'
antiˈgwïsïmü 'very old'
oˈrëgänü 'oregano'


Walker (2004, 2005) discusses several cases of metaphony in Romance languages involving height harmony of a stressed vowel with a post-tonic one (see also chapter 110: metaphony in romance for an overview of similar processes in Italian dialects). In some language varieties, as in Tudanca Spanish, harmony propagates leftwards from the triggering vowel to the stressed vowel through any intervening unstressed vowels. In other varieties, e.g. Asturian Lena Bable (Hualde 1989, 1998), the stressed vowel is transparent to the harmonizing feature, which propagates past the stressed vowel leftward to the pre-tonic vowel.

3.3 Exceptional lenition in prominent syllables

Despite the cross-linguistic tendency for stressed syllables to be associated with increased segmental strength, this pattern is not universal. Moksha Mordvin (Vaysman 2009) optionally lenites consonants in the onset of stressed syllables that are not word-initial. Stress in Moksha is sensitive to a distinction between the low-sonority vowels [i u ə] and the high-sonority vowels [a æ o e]. In words containing vowels belonging to the same sonority class, stress falls on the first syllable of a word (6a). In words containing vowels of different sonority classes, stress falls on the leftmost vowel belonging to the higher-sonority group (6b).

(6) Moksha Mordvin stress (Vaysman 2009: 135–137)
a. ˈbʲənətʃ 'boat'
ˈməkur 'buttocks'
ˈkuʃin 'jug'
ˈjuʒə 'skin'
ˈaka 'older sister, aunt'
ˈlopæ 'leaf'
ˈpango 'mushroom'
ˈsʲeja 'goat'
b. tsʲəˈræ 'son'
viˈna 'alcohol'
əzˈna 'older sister's husband'
sʲiãək-ˈka 'elm (prolative)'
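The Moksha pattern can be sketched in the same procedural style: stress defaults to the initial syllable unless the word mixes sonority classes, in which case the leftmost high-sonority vowel wins (a minimal sketch; the class inventories follow the description above, the encoding is an illustrative assumption):

# Sketch of Moksha Mordvin stress: initial stress if all vowels share a
# sonority class; otherwise the leftmost high-sonority vowel is stressed.
HIGH_SONORITY = set('aæoe')      # the low-sonority class is [i u ə]

def stress_index(vowels):
    classes = ['H' if v in HIGH_SONORITY else 'L' for v in vowels]
    if len(set(classes)) == 1:   # uniform sonority class: initial stress (6a)
        return 0
    return classes.index('H')    # mixed classes: leftmost high-sonority vowel (6b)

print(stress_index(['ə', 'æ']))  # 1, cf. tsʲəˈræ 'son'
print(stress_index(['a', 'a']))  # 0, cf. ˈaka 'older sister, aunt'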

Lenition in stressed onsets entails voicing of underlying voiceless obstruents, liquids, and glides, the spirantization of underlying voiced stops, the conversion of /m/ to /w/, and the deletion of /n/, with concomitant nasalization of the stressed vowel. Crucially, lenition does not target word-initial consonants, as the examples in (7) indicate.

(7) Moksha Mordvin lenition in stressed onsets (Vaysman 2009: 142–143)
kurˈka ~ kurˈga 'turkey'
sʲərˈpe ~ sʲərˈbe 'heart'
bʲənəˈtʃ-oze ~ bʲənəˈdʒ-oze 'my boat'
pəjəˈl̥ʲ-oze ~ pəjəˈlʲ-oze 'my knife'
buˈj̊æ-ze ~ buˈjæ-ze 'my end'
tʲəˈbʲe-ze ~ tʲəˈvʲe-ze 'my work'
pinˈgæ-ze ~ pinˈɣæ-ze 'my period of time'
pʲiˈma ~ pʲiˈwa 'large cup, mug'
kəˈnak-oze ~ kəˈãk-oze 'my guest'


3.4 The phonetic basis for fortition and lenition in stressed syllables

The tendency for consonants to undergo fortition in the onset of stressed syllables finds an explanation in terms of speech perception (see chapter 98: speech perception and phonology), as suggested by J. Smith (2000, 2004) and Gordon (2005). In the Smith and Gordon accounts, two key independently known aspects of auditory processing underlie the relationship between sonority and stress. First, the auditory system is most attuned to the beginning of a stimulus before a gradual decline in sensitivity sets in. This reduction in sensitivity, termed "adaptation," has both physical and psycho-acoustic manifestations, including a reduction in auditory nerve firing rates (Delgutte 1982) and a lessening of perceived loudness (Plomp 1964; Wilson 1970; Viemeister 1980). On the other hand, after a phase of silence or reduced acoustic intensity, auditory nerve firing rates and perceived loudness increase during an immediately following sound characterized by greater intensity, reflecting a process of auditory "recovery" (R. Smith 1979; Viemeister 1980; Delgutte 1982, 1997; Delgutte and Kiang 1984).

Given the auditory benefit afforded by a period of reduced intensity or silence, it is easy to see the perceptual advantage of fortition. By either devoicing a consonant in the onset of a stressed syllable or increasing its degree of constriction, the immediately following stressed rime – including the vowel and coda consonants, if any are present – receives a boost in auditory prominence. Under this auditorily driven account of fortition, prominence is not enhanced directly, by the increased strength of the onset consonant, but rather indirectly, through its effect on the following rime.

A limitation of this approach is its apparent failure to predict lenition in the onset of stressed syllables, as optionally occurs in Moksha Mordvin. Since lenition increases the sonority, and thus the acoustic intensity, of the onset, it would be expected under the Smith and Gordon account to actually decrease the auditory prominence of the immediately following rime by reducing the positive effects of recovery. The reconciliation of lenition with a phonetically driven approach to stress-sensitive fortition must await further research.

3.5 The formal analysis of stress-driven alternations in segments and syllables

3.5.1 Positional faithfulness

The analysis of stress-driven effects on segmental properties and syllable structure has attracted substantial attention in the phonology literature, in particular within the optimality-theoretic framework. There are two basic approaches to stress-induced segmental phenomena. One approach (e.g. Casali 1997; Steriade 1997; J. Beckman 1998; Lombardi 2001) focuses on cases in which underlying contrasts are asymmetrically preserved in stressed syllables (as well as other prominent positions) but lost in unstressed contexts. In this type of analysis, termed "positional faithfulness," input–output faithfulness constraints requiring preservation of an underlying property are divided into two classes: those that are context-free and those that are only enforced in prosodically privileged positions such as stressed syllables. Positional faithfulness effects arise when a markedness constraint is prioritized above a generic faithfulness constraint but below a positional faithfulness constraint.


This constraint interaction can be illustrated by considering the analysis of Guaraní vowel harmony developed in J. Beckman (1998). In Guaraní (Gregores and Suárez 1967), nasalized and oral vowels contrast in stressed syllables, but in unstressed syllables nasalized vowels may only surface before a nasal consonant. The [nasal] feature also spreads leftward from a prenasalized stop and from a phonemic nasalized vowel up to but not including a stressed vowel (8). Nasalization additionally spreads rightward (as the examples below indicate), although its phonetic properties are different, which has led certain researchers, e.g. Flemming (1994), to analyze it as phonetic rather than phonological. Beckman thus does not develop an analysis of rightward spreading of nasality.

(8) Guaraní nasal harmony (Gregores and Suárez 1967: 69)
/amaaˈporõreˈju/ → ʔãm baʔaˈporõrgju 'if I work you come'
/jeˈqntena/ → jeˈqntgnã 'just once more!'
/ijaˌkãraˈku/ → ʔhnãˌkãrãˈku 'is hot-headed'
/rojotopaˈpamarõroˈxovaˌrã/ → rojotopaˈpamãXõXõˈxoºãˌXã 'if now we meet all of us, we will have to go'

Two faithfulness constraints play a pivotal role in Beckman's analysis. First, a generic faithfulness constraint, Ident(nasal), requires that segments underlyingly associated with a [nasal] feature preserve that feature on the surface. The second constraint is the positionally defined analog of Ident(nasal), Ident-σ́(nasal), which requires that surface segments in stressed syllables preserve their underlying [nasal] specification. The existence of contrastive nasality on stressed vowels but not on unstressed vowels follows from the ranking of a markedness constraint banning nasalized vowels, *Vnasal, above generic Ident(nasal) but below position-specific Ident-σ́(nasal). This ranking ensures that any underlyingly nasalized vowel will lose its nasality if it surfaces in an unstressed position.

Critical to the analysis of nasal harmony in Beckman's account is an alignment constraint, Align-L(nasal), requiring that all instances of the feature [nasal] be aligned with the left edge of a word. This constraint is honored in forms in which nasality either is underlyingly associated with a segment in the first syllable or has propagated to the first syllable through nasal spreading. One violation is incurred for each segment intervening between a nasal feature and the left edge of the word. By sandwiching Align-L(nasal) above Ident(nasal) but below Ident-σ́(nasal), nasality is correctly predicted to spread as far leftward as the stressed syllable, where it is blocked from spreading any further (9).

(9) /jeˈqntenã/ 'just once more'

                      Ident-σ́(nas)   Align-L(nas)   Ident(nas)
  ☞ a. jeˈqntgnã                          ***             *
     b. ngˈ0ntgna          *!                            ***
     c. jeˈqntenã                         *!*****
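The evaluation logic of tableau (9) – strict domination, with candidates eliminated at the highest-ranked constraint on which they fare worse – can be sketched as a short procedure. The candidate labels and violation counts below simply restate (9); everything else (names, encoding) is an illustrative assumption:

# Sketch of OT evaluation under strict domination, keyed to tableau (9).
RANKING = ['Ident-σ́(nas)', 'Align-L(nas)', 'Ident(nas)']

def winner(candidates, ranking=RANKING):
    """candidates: {form: {constraint: violation count}}. Returns the optimum."""
    pool = list(candidates)
    for constraint in ranking:                   # evaluate in ranked order
        best = min(candidates[f].get(constraint, 0) for f in pool)
        pool = [f for f in pool if candidates[f].get(constraint, 0) == best]
        if len(pool) == 1:                       # fatal violations prune the rest
            break
    return pool[0]

tableau_9 = {
    'a. spread up to the stressed syllable': {'Align-L(nas)': 3, 'Ident(nas)': 1},
    'b. spread past the stressed syllable':  {'Ident-σ́(nas)': 1, 'Ident(nas)': 3},
    'c. fully faithful, no spreading':       {'Align-L(nas)': 6},
}
print(winner(tableau_9))    # a. spread up to the stressed syllable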

Lenition in unstressed syllables can be handled similarly in the positional faithfulness approach. A constraint banning non-lenited segments is ranked above a generic faithfulness constraint, but below a positional faithfulness constraint requiring featural identity between underlying forms and their surface correspondents in stressed syllables.


The result is a typologically common type of system – cf. many of the vowel reduction patterns described by Crosswhite (2001, 2004) – in which a contrast between lenited and non-lenited segments in stressed syllables is neutralized to the lenited variant in unstressed syllables.

3.5.2 Positional markedness

Another approach to stress-induced segmental alternations employs positional markedness constraints (Zoll 1998; J. Smith 2000, 2004; de Lacy 2001; see also the positional licensing constraints of Walker 2004, 2005) targeting stressed and other prominent positions. A positional markedness approach has an advantage over the positional faithfulness approach in capturing fortition phenomena, since it explicitly predicts that faithfulness will be violated in strong positions. To illustrate the positional markedness approach, let us consider J. Smith's (2000, 2004) analysis of glottal stop epenthesis before vowel-initial stressed syllables in Dutch. An anti-epenthesis constraint, Dep, is ranked above a generic markedness constraint requiring that syllables have an onset, Onset, but below a positional markedness constraint, Onset-σ́, mandating that stressed syllables have an onset. The result is insertion of a glottal stop before an otherwise onsetless stressed syllable but not before an unstressed syllable beginning with a vowel (10).

(10)

  /aɔrta/ 'aorta'      Onset-σ́   Dep   Onset
  ☞ a. a.ˈʔɔr.ta                   *      *
     b. a.ˈɔr.ta          *!             **

  /xaɔs/ 'chaos'
  ☞ a. ˈxa.ɔs                             *
     b. ˈxa.ʔɔs                    *!

The common process of stressed vowel lengthening is also standardly attributed to a positional markedness constraint requiring that stressed syllables be heavy, the Stress-to-Weight Principle (Crosswhite 1998), or its counterpart specific to primary stress, Main-to-Weight (Bye and de Lacy 2008). By ranking the Stress-to-Weight Principle above the constraint banning surface moras that do not have a correspondent in the underlying representation, Dep-μ, lengthening of the stressed syllable results. Whether the lengthening targets a vowel or a consonant is attributed to the relative ranking of faithfulness constraints pertaining to the two types of segments.

The blocking of lenition in stressed syllables can also be captured by assuming that the constraint driving lenition – e.g. Kirchner's (2001, 2004) Lazy constraint banning articulatory effort – outranks a generic faithfulness constraint, but is itself ranked below a constraint blocking the type of segment resulting from lenition from surfacing in stressed contexts. For example, the lenition of coronal stops to flaps in post-tonic contexts in English can be attributed to a constraint banning the amount of articulatory effort required to produce a full-fledged stop in intervocalic position. This constraint is ranked above a generic faithfulness constraint requiring preservation of the features characterizing a stop, but below a constraint banning flaps in stressed syllables.


Likewise, the blocking of feature spreading by stressed syllables, e.g. nasality in Guaraní, can be handled by assuming a positional markedness constraint banning that feature in stressed syllables. While this type of analysis is equipped to handle the harmony process, underlying featural restrictions holding of unstressed syllables are difficult to treat formally. For example, the ban on contrastive nasalized vowels in lexically unstressed syllables in Guaraní does not follow from the same markedness constraint blocking nasal spreading onto stressed syllables.

Cases of lenition in stressed syllables, e.g. in Moksha Mordvin, are problematic for principles of markedness in general, independent of the paradigm in which they are couched, since such processes contradict the scales of segmental strength observed in most languages (see chapter 4: markedness). It is possible to posit a positional markedness constraint requiring that stressed syllables be associated with lenited segments, but such a constraint would undermine the restrictiveness of the theory by admitting both lenition and fortition constraints referring to the same context. The proper formal treatment of apparently "unnatural" phenomena such as lenition in stressed syllables hinges on an understanding of the factors motivating them. It is conceivable that phonetic considerations offer an explanation for lenition in stressed syllables, but it is also possible that a confluence of historical events has conspired to produce a phonology whose synchronic naturalness has been rendered opaque (see §6).

4 Metrically harmonic constituency-driven segmental alternations

4.1 Foot-initial fortition

A second type of effect of metrical structure on segmental properties requires reference to feet for an adequate characterization, but these feet correspond to the feet diagnosed by stress placement. One subclass of these effects involves foot-level manifestations of phonetic patterns characteristic of prosodic words and larger constituents. For example, the Alutiiq dialects of Yupik (Leer 1985) have a process of fortition affecting consonants in foot-initial position. Leer (1985: 84–85) describes:

two major distinguishing characteristics of the fortis consonant: complete lack of voicing with voiceless consonants (stops and voiceless fricatives), and preclosure [which is a voiceless interval following the preceding segment], during which the mouth also assumes the configuration of the [fortis] consonant.

Leer further observes that the phonetic distinction between fortis and non-fortis consonants is most salient in the Prince William Sound dialect of Chugach Alutiiq, "where lenis stops may become so loosely articulated as to sound virtually like voiced fricatives," which themselves also either weaken to an approximant or are deleted, thereby avoiding neutralization.

Feet in Alutiiq are iambic, with long vowels and word-initial (but not word-medial) CVC counting as heavy. A key feature of stress, however, that has implications for footing, and by extension for the analysis of fortition, is that stress adheres to a ternary pattern where there are consecutive light syllables word-medially between the first and last foot of a word (see chapter 52: ternary rhythm).


For example, stress (indicated here uniformly as primary stress, since it is unclear which stress is most prominent) falls on the first and fourth syllables in the four-syllable Chugach Alutiiq word /ˈantʃiquˈkut/ 'we'll go out' (Leer 1985: 84). The strengthened consonant occurs in the onset of the pretonic syllable, which is analyzed by Hayes (1995) as the first syllable in a disyllabic foot comprising the pretonic and the tonic syllable. Under his approach, which assumes that a post-tonic light syllable is skipped over in the metrical parse (a phenomenon termed "weak local parsing" by Hayes) in languages with ternary stress intervals, /ˈantʃiquˈkut/ is footed as (ˈan)tʃi(quˈkut). A virtue of this analysis is that fortition in Alutiiq can be characterized as a foot-initial phenomenon, thereby bringing it into line with the general cross-linguistic pattern of strengthening associated with initial position of prosodic domains (e.g. Pierrehumbert and Talkin 1992; Byrd 1994; Dilley et al. 1996; Cho and Keating 2001; Keating et al. 2003). In contrast, the locus of fortition is not easily defined with reference only to stress; it would need to be described as fortition in the onset of pretonic syllables, which would not appear to be a natural phenomenon with parallels in other languages. chapter 41: the representation of word stress describes several other segmental alternations that require reference to foot structure, as opposed to stress, for an adequate characterization.
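Weak local parsing, too, is straightforwardly procedural: build a foot, then skip one light syllable before building the next. A minimal sketch over abstract weight strings (the 'L'/'H' encoding and function name are illustrative assumptions, and only word-initial CVC is treated as heavy, as described above):

# Sketch of iambic footing with weak local parsing: after each foot,
# skip one word-medial light syllable before parsing continues.
def weak_local_parse(weights):
    """weights: string of 'L' (light) / 'H' (heavy) syllables; returns feet."""
    feet, i = [], 0
    while i < len(weights):
        if weights[i] == 'H':
            feet.append((i,))                    # heavy syllable: monosyllabic foot
            i += 1
        elif i + 1 < len(weights):
            feet.append((i, i + 1))              # (L X) iamb, right-headed
            i += 2
        else:
            break                                # final stray light syllable unparsed
        if i < len(weights) - 1 and weights[i] == 'L':
            i += 1                               # weak local parsing: skip one light syllable
    return feet

# /ˈantʃiquˈkut/: treating only the word-initial CVC as heavy gives
# weights H L L L -> (ˈan) tʃi (qu ˈkut)
print(weak_local_parse('HLLL'))                  # [(0,), (2, 3)]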

4.2 Foot-final lengthening

The parallel between the foot and larger prosodic domains has also been proposed to extend to other phonetic properties. In his analysis of consonant gradation in Finnic languages, Gordon (1998) proposes that a phenomenon of foot-final lengthening accounts for certain lengthening phenomena found in the language family. According to Gordon's account, foot-final lengthening played a crucial role in the proto-language in triggering consonant gradation. Gordon argues that vowels in absolute foot-final position, i.e. in foot-final open syllables, lengthened in the proto-language, thereby disrupting the prosodic profile of the trochaic feet characteristic of the proto-language. This disruption ultimately triggered lengthening in the consonant immediately preceding the lengthened foot-final vowel.

Gordon's analysis can be seen most clearly when cast in moraic terms. Assuming consonants were non-moraic and there were no phonemic long vowels in the proto-language (see Gordon 1998), the lengthening of a foot-final vowel created a light–heavy trochaic foot, a prosodically non-optimal foot in which the unstressed syllable is heavier than the stressed syllable. Lengthening the consonant preceding the lengthened foot-final vowel may thus be viewed as a foot optimization process, beefing up the light stressed syllable in a trochee. The result was a chain shift whereby single consonants became geminates and geminates became overlong before foot-final vowels (CVCV → CVCːVː and CVCːV → CVCːːVː, but CVCVC → CVCVC and CVCːVC → CVCːVC), thus setting in motion the paradigmatic consonant gradation alternations that still exist, albeit in modified form, in many languages in the Finnic and Sámi language families.

The reconstructed process of foot-final lengthening finds support from analogous phenomena found in certain modern Finnic languages, such as Ingrian. Broader cross-linguistic evidence for foot-final lengthening comes from a pattern reported by Teranishi (1980) of lengthening targeting the second vowel in a disyllabic sequence, analyzed by Poser (1990) as a foot, in recited Japanese verse.


Finally, foot-final lengthening may be viewed as the foot-level manifestation of the well-attested phenomenon of final lengthening observed at levels above the foot, such as the word and phrase (Wightman et al. 1992). Foot-final lengthening has also been appealed to by Revithiadou (2004) as a factor in promoting the cross-linguistically pervasive phenomenon of iambic lengthening discussed in the next section.

4.3 Iambic/trochaic length asymmetries

Certain asymmetries in quantitative alternations are also best explained with reference to foot structure rather than directly to stress. In particular, stressed syllables in iambic and trochaic feet appear to display fundamentally different length characteristics (see Hayes 1985, 1995 and chapter 44: the iambic–trochaic law). Stressed syllables in languages employing an iambic parse are often lengthened cross-linguistically, whereas those with trochaic feet characteristically either fail to lengthen the stressed syllable or, in some cases, even shorten it.

Chickasaw (Munro and Ulrich 1984; Munro and Willmond 1994, 2005; Munro 1996, 2005; Gordon et al. 2000; Gordon 2003, 2004; Gordon and Munro 2007) is an iambic language in which closed syllables and syllables containing long vowels are heavy, i.e. are parsed as monosyllabic feet word-initially or following a stressed syllable. The final syllable is also parsed as a foot even if it is light (CV). The rightmost stress, i.e. the one on the final syllable, is the primary one, except that a long (or lengthened) vowel in pre-final position attracts the primary stress from a final CV(C). As shown in (11), stressed vowels in open non-final syllables substantially lengthen, where the output of lengthening is a vowel that is either nearly or completely neutralized in length with a phonemic long vowel, depending on the vowel and the speaker (see Gordon et al. 2000 for phonetic duration results).

Iambic lengthening in Chickasaw (lengthened vowels indicated by )) (Œi‘pi))(sa’li))(‘tok) (Œi‘Œo))(‘koœ)(ko’mo))(‘Œi) (Œi‘ki))(si’li))(‘tok) (a‘sa))(bi’ka))(‘tok)

‘I looked at you.’ cf. (pi’sa))(li‘tok) ‘He looked at you.’ ‘He makes you play.’ cf. (Œo‘koœ)(ko’mo))(‘tok) ‘He played.’ ‘He bit you.’ cf. (ki’si))(li‘tok) ‘He bit it.’ ‘I was sick.’ cf. (a’bi))(ka‘tok) ‘He was sick.’

The process of iambic lengthening has an intuitive purpose, in that it enhances the prominence of the stressed syllable. In some languages, the beefing up of the stressed syllable in an iambic foot is achieved by lengthening a consonant rather than a vowel (see Hayes 1995: 82–83 for a survey of iambic lengthening; but see Bye and de Lacy 2008 for re-analyses of iambic consonant lengthening). This consonant can either be the coda consonant in a stressed CVC syllable, as in the Chevak dialect of Central Alaskan Yupik (Woodbury 1981) or the onset of the following syllable, the first half of which ends up closing the stressed syllable, as in Delaware (Goddard 1979, 1982).

15

Matthew Gordon

Interestingly, substantial lengthening of stressed syllables in trochaic feet appears to be less sparsely attested, and those cases that do exist differ in certain important respects from their iambic counterparts (Hayes 1985, 1995; Mellander 2001). Cases of trochaic lengthening are found in languages with quantity-insensitive stress systems unlike cases of iambic lengthening, which are limited to languages with quantity-sensitive stress systems. Particularly striking is the existence of trochaic languages that shorten stressed syllables. For example, in Fijian (Schütz 1985) a trochaic foot is constructed at the right edge of a word. Phrases that underlyingly end in a heavy penult (one containing a long vowel) followed by a light ultima shorten the long vowel in the penult; thus, underlying /mbu(Igu/ ‘my grandmother’ surfaces as (’mbuIgu) (Schütz 1985: 528). Mellander (2001) finds that languages with trochaic shortening differ from those reported to display trochaic lengthening in being quantity-sensitive.

5

Segmental alternations predicted by foot structure rather than stress

5.1 Nganasan consonant gradation In the introduction, we briefly examined the consonant alternations in Nganasan (Tereshchenko 1979; Helimski 1998; Vaysman 2009) that were shown to be predictable from foot structure but not from stress. Stress is confined to a threesyllable window at the right edge of a word, falling on the final syllable if it contains a long vowel or diphthong (12a) and otherwise on either the penult or antepenult depending on vowel quality in those syllables. If the penult contains a vowel other than the central vowels [q H], stress falls on the penult (12b). If the penult contains a central vowel and the antepenult contains a non-central vowel, stress falls on the antepenult (12c). Cases involving a central vowel in both the penult and the antepenult display a more complex and variable pattern that will be discussed along with secondary stress below. (12)

Nganasan stress (Vaysman 2009: 23, 52) a. b. c.

ky’ma( le’hua ba’kunu ’kaÏar ku’butHnu ’hwa(tHnu

‘knife’ ‘board’ ‘salmon’ ‘light’ ‘skin, fur (loc sg non-poss)’ ‘tree (loc sg non-poss)’

Consonant lenition is predictable if one assumes a left-to-right parse into binary feet, with stray syllables being parsed into a monosyllabic foot: lenition targets foot-initial consonants that are not also word-initial. Lenition entails a number of different alternations including the loss of prenasalization (/I h/ → [h], /n t/ → /t/, /Ik/ → [k], /ns/ → [s], /Jç/ → [ç], /Jc/ → [c]), voicing with or without a change in manner (/t/ → [Ï], /k/ → [g], /s/ → [Á], /ç/ → [Á], /c/ → [Á]), and a shift from /h/ to [b] and from /j/ to zero. Data illustrating the consonant alternations for two suffixes and their relationship to foot structure appear in (13).

Stress: Phonotactic and Phonetic Evidence (13)

16

Nganasan consonant alternation and foot structure (Vaysman 2009: 43, 52) (‘jama)(’Ïa-tu) (‘Ioru)(’mu-tu) (su()(’ÏH()-(Ïu) (Iu’hu)-(Ïu) (ba(r)(’pH-n tH)(nu) (hqa)(’ŒH-n tH)(nu) (ku’bu)(tHnu) (’hwa()(tHnu)

‘his/her/its animal’ ‘his/her/its copper’ ‘his/her/its lung’ ‘his/her/its mitten’ ‘master, chief (loc sg non-poss)’ ‘thumb (loc sg non-poss)’ ‘skin, fur (loc sg non-poss)’ ‘tree (loc sg non-poss)’

As these examples show, the strong variant of the 3rd person possessive suffix (beginning with [t]) and the locative singular non-possessive suffix (beginning with a prenasalized [n t]) surface foot medially, whereas the weak allophone (beginning with [Ï] and plain [t] in the two suffixes, respectively) occurs foot initially. The strong grade and the weak grade both occur in the onset of unstressed syllables, meaning that stress does not predict the alternation. Furthermore, the foot structure diagnosed by the consonant alternations does not accord with the foot structure that would be required to predict primary stress. It is not the case, however, that stress assignment in Nganasan is completely blind to foot structure. Secondary stress falls on odd-numbered syllables counting from the left edge of a word in keeping with the footing predicted by consonant gradation (14a). Two provisions to this generalization, however, make the relationship even between secondary stress and foot structure opaque. First, secondary stress may not clash with an immediately following stress (14b) and, second, secondary stress skips over a light (CV) syllable in favor of a heavy (CVV) syllable (14c). In both situations, a final syllable is potentially footed but unstressed, and a word may display a mix of iambic and trochaic feet. (14)

Nganasan secondary stress (Vaysman 2009: 24) a. b. c.

(‘baku)(‘numH)(’numH) (‘tiri)(‘mimH)(’numH) (‘kaÏar)(mH’nu)(mH) (‘ŒemJi)(mH’nu)(mH) (ky)(‘ma()(mH’nu)(mH) (le)(‘hua)(mH’nu)(mH)

‘my ‘my ‘my ‘my ‘my ‘my

salmon (prolative)’ caviar (prolative)’ light (prolative)’ salary (prolative)’ knife (prolative)’ board (prolative)’

A further context in which foot structure is relevant to stress arises in words in which both the penult and the antepenult contain a central vowel. We abstract away from cases in which the penult and antepenult contain different central vowels, a situation that gives rise to variability in the location of stress, and consider here only cases in which both the penultimate and antepenultimate syllables contain the same central vowel. In words of this profile, the penult attracts stress if it is footinitial (15a) but the antepenult carries the stress if the penult is foot-final (15b). (15)

Nganasan stress in words with the same central vowel in the penult and antepenult (Vaysman 2009: 36) a.

(‘bqÏq)(’tqrH) (‘Jqlq)(’tqrH)

‘you (sg) are drinking (intr)’ ‘you (sg) are living’

17

Matthew Gordon b.

(’bqnq)(mH) (‘bqÏqp)(’tqtq)(rH)

‘my rope’ ‘you (sg) are drinking (tr)’

The location of stress in these words is consistent with an analysis assuming a left-to-right parse into trochaic foot in keeping with the foot structure diagnosed by consonant gradation and secondary stress. As in the case of secondary stress assignment in many words, however, a stray final light syllable is predicted to be footed by consonant gradation but is unstressed if the penult and antepenult together form a foot.

5.2 Eastern Mari schwa fortition Vaysman (2009) presents another case of a foot-driven segmental alternation that is not predictable from stress in Eastern Mari, another Uralic language. As discussed in §3.2, stress in Eastern Mari falls on the rightmost full, i.e. non-schwa, vowel in monomorphemic roots (16a) and otherwise on the initial syllable in non-derived words containing only reduced vowels underlyingly (16b). (16)

Eastern Mari stress (Vaysman 2009: 62–64) a.

b.

koI’ga œer’ge køgør’Œen ’teIgHz ’olHk ’joIHlHœ pu’œaIgH ’ßHJHr ’HœkHl ’lHßH

‘oven’ ‘comb’ ‘dove’ ‘sea’ ‘meadow’ ‘mistake’ ‘tree’ ‘canvas’ ‘step’ ‘butterfly’

In addition to rounding harmony (see §3.2), Eastern Mari also has an alternation between reduced and full vowels. Underlying reduced vowels surface as full vowels in absolute word-final position, i.e. not in final closed syllables, under conditions that Vaysman argues are metrically governed. Schwa only strengthens to a full vowel in final position of a foot, where feet are parsed from left to right and a stray syllable left at the end of the word remains unparsed. Thus, unlike Nganasan, which allows degenerate feet at the end of the word, Eastern Mari does not. The quality of the full vowel resulting from fortition is determined by both front/back and rounding harmony. The examples in (17) illustrate the fortition of schwa foot finally in the suffix meaning ‘the one who/ that is’ (17a), and the preservation of schwa in metrically unparsed final syllables (17b). (17) Eastern Mari vowel fortition (Vaysman 2009: 83) a.

(’taI-se) (’my-sø) (’joIH)(lHœ-so) (pu’œaI)(gH-so)

‘the ‘the ‘the ‘the

one one one one

who is a friend’ that is honey’ that is a mistake’ that is a tree’

Stress: Phonotactic and Phonetic Evidence b.

(’jeIH)-sH (ok’sa)-sH (o’la)-sH (puœaI)(gH-’na)-sH (pare)(IH-’na)-sH

‘the ‘the ‘the ‘the ‘the

one one one one one

18

who is human’ that is money’ that is a city’ that is our tree’ that is our potato’

That we are dealing with fortition of schwa rather than lenition of full vowels to schwa is demonstrated by the patterning of alternating schwas as light syllables in the stress system. Thus, vowels that are underlyingly schwa pass stress on to a preceding full vowel even if they wind up as full vowels due to foot-final fortition (18). (18)

Eastern Mari rejection of stress by alternating schwa (Vaysman 2009: 79) ’susko ’muno ’jeIe

‘scoop’ ‘egg’ ‘human’

*sus’ko *mu’no *je’Ie

cf. ’suskHm ‘scoop (acc sg)’ cf. ’munH ‘egg (acc sg)’ cf. ’jeIHÚ ‘human (lat sg)’

The Nganasan consonant alternations and the Eastern Mari vowel alternations present problems both for theories that attempt to link fortition and lenition directly to stress and for theories that analyze stress patterns without recourse to foot structure (e.g. Prince 1983; Selkirk 1984; Gordon 2002). On the other hand, stress cannot be predicted from the type of metrical structure needed to account for segmental alternations in these languages.

6

The role of history in foot-driven segmental alternations

The mismatch between stress and metrical structure diagnosed through segmental alternations raises the question of whether such cases are the result of a confluence of historical events that have removed an originally transparent link between stress and the alternations. The literature contains a number of cases of seemingly unprincipled synchronic patterns being veiled by later diachronic changes that have obscured a process that at one time was clearly phonetically grounded (e.g. Buckley 2000; Hyman 2001; Yu 2004). In fact, there is evidence that the cases of disharmony between stress and segmental alternations in the Uralic languages Eastern Mari and Nganasan are the result of diachronic changes. Let us first consider the case of Eastern Mari schwa fortition (§5.2). Recall that this alternation involved the strengthening of wordfinal schwa in absolute foot-final position, e.g. /puœaIgH-sH/ → (pu’œaI)(gH-so) ‘the one that is a tree’ but (’jeIH)-sH ‘the one who is human’. As we have seen, this alternation falls out from an analysis positing binary feet constructed from left to right. These feet, however, do not accord with the stress system, which places a single stress on the rightmost full, i.e. non-schwa, vowel in monomorphemic roots and otherwise on the initial syllable in non-derived words containing only reduced vowels. It is likely the case that the binary feet evinced through schwa fortition at one time corresponded to those diagnosed through stress. The dominant stress pattern

19

Matthew Gordon

in the Uralic language family, of which Mari is a member, involves stress on the initial syllable of a word, with many languages displaying a binary trochaic pattern from left to right. In keeping with the synchronic evidence, Sammallahti (1988) thus reconstructs a binary trochaic stress system for the proto-language. Bereczki (1988) reconstructs initial stress for pre-Mari on the basis of vowel reduction shifts occurring in non-initial syllables. Initial stress is also the default pattern for Eastern Mari words with all light syllables. It is plausible that the binary pattern reconstructed for Proto-Uralic existed through the period during which schwa fortition developed as a productive alternation in Eastern Mari. Assuming this to be the case, the foot structure required to account for fortition was transparently linked to stress at one time. Furthermore, schwa strengthening becomes a phonetically motivated phenomenon driven by the same process of foot-final lengthening claimed by Gordon (1998) to drive consonant gradation in Proto-Saamic-Fennic (§4.2) and by Revithiadou (2004) to play a role in iambic lengthening cross-linguistically (§4.3). Schwa is phonetically much shorter than its non-reduced counterparts in Mari (Gruzov 1960; Vaysman 2009) and, more generally, in other languages with a weight distinction between central and peripheral vowels (Gordon 2002). Assuming that foot-final vowels are lengthened in Eastern Mari, the conversion from schwa to a peripheral vowel foot-finally plausibly follows from the incompatibility of schwa with increased duration. The reason this lengthening is likely confined to foot-final vowels that are also word final is due to the tendency for an enhanced lengthening effect in word-final position, which is independently predisposed to lengthening crosslinguistically. Support for this analysis comes from languages such as Yupik (Reed et al. 1977) and Javanese (Horne 1974), where underlying schwa in word-final position shifts to a peripheral vowel quality on the surface (see also chapter 26: schwa). It is also possible to pursue a diachronic approach to the mismatch between stress and the foot structure required to account for rhythmic consonant gradation in Nganasan (§5.1), although the complexity of the data complicates the discussion. Recall that consonants in foot-initial position are lenited in Nganasan, e.g. foot-initial (ku’bu)(tHnu) vs. foot-medial (hqa)(’ŒH-n tH)(nu), where footing adheres to a trochaic pattern from left to right. Stress, on the other hand, is limited to a three-syllable window at the right edge of a word. As mentioned above, a trochaic stress system is reconstructed for Proto-Uralic. Indeed this basic pattern is preserved in the placement of secondary stress in Nganasan itself subject to interruptions in the alternating count due to heavy syllables. If one assumes that a trochaic stress pattern existed at the time rhythmic gradation became entrenched as a process, the link between gradation and foot structure diagnosed through stress becomes transparent. Under this analysis one puzzling aspect of consonant gradation remains, however: the fact that segments in foot-initial, i.e. historically stressed, syllables undergo lenition rather than fortition. Regardless of the historical origins of mismatches between stress and the metrical structure driving segmental alternations, a synchronic analysis must admit the possibility of orthogonal representations of stress and foot structure. 
Furthermore, the data from languages like Nganasan and Eastern Mari suggest that stressless feet are a possibility even if typologically rare (see Hyde 2002 for a recent constraint-based theory explicitly allowing for stressless feet).

Stress: Phonotactic and Phonetic Evidence

7

20

Summary

Stress or lack of stress is associated with both suprasegmental and segmental properties. On a suprasegmental level, stress typically triggers lengthening, higher fundamental frequency, and greater intensity, although there are many languages in which these properties do not converge on a single syllable but rather are distributed over multiple syllables. On a segmental level, stress characteristically, although not always, triggers consonant fortition or blocks lenition targeting unstressed syllables. Some metrically conditioned segmental alternations, on the other hand, are better explained with reference to foot structure. For example, boundary-driven processes such as foot-initial fortition and foot-final lengthening are plausibly the foot-level analogs of well-documented phenomena applying at the word level. Furthermore, stressed vowel lengthening in many languages is explicable in terms of stress, but its cross-linguistic bias toward applying in iambic stress systems suggests that it is sensitive to foot structure. A final type of segmental alternation cannot be accounted for with reference to stress but rather suggests the relevance of foot structure that is orthogonal to the stress system in certain languages. Examination of historical data potentially provides insight into these mismatches between foot structure diagnosed by fortition and lenition and foot structure diagnosed by stress by showing that the segmental changes became entrenched at a chronologically earlier stage, when stress and foot structure coincided.

ACKNOWLEDGMENTS The author wishes to thank two anonymous reviewers and the editors for their many helpful comments and suggestions on an earlier draft of this chapter.

REFERENCES Adisasmito-Smith, Niken & Abigail C. Cohn. 1996. Phonetic correlates of primary and secondary stress in Indonesian: A preliminary study. Working Papers of the Cornell Phonetics Laboratory 11. 1–16. Baitschura, Uzbek. 1976. Instrumental phonetische Beiträge zur Untersuchung der Sprachmelodie und des Wortakzentes im Tscheremissischen. Études Finno-Ougriennes 3. 109 –122. Beckman, Jill N. 1998. Positional faithfulness. Ph.D. dissertation, University of Massachusetts, Amherst. Beckman, Mary E. 1986. Stress and non-stress accent. Dordrecht: Foris. Bereczki, Gábor. 1988. Geschichte der wolgafinnischen Sprachen. In Sinor (1988), 314 – 350. Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press. Buckley, Eugene. 2000. On the naturalness of unnatural rules. Proceedings from the 2nd Workshop on American Indigenous Languages, 1–14. Santa Barbara: Department of Linguistics, University of California, Santa Barbara. Bye, Patrik & Paul de Lacy. 2008. Metrical influences on lenition and fortition. In Joaquim Brandão de Carvalho, Tobias Scheer & Philippe Ségéral (eds.) Lenition and fortition, 173–206. Berlin & New York: Mouton de Gruyter.

21

Matthew Gordon

Byrd, Dani. 1994. Articulatory timing in English consonant sequences. Ph.D. dissertation, University of California, Los Angeles. Casali, Roderic F. 1997. Vowel elision in hiatus contexts: Which vowel goes? Language 73. 493–533. Chafe, Wallace. 1970. A semantically based sketch of Onondaga. Baltimore: Waverly Press. Chafe, Wallace. 1977. Accent and related phenomena in the Five Nations Iroqouis languages. In Larry M. Hyman (ed.) Studies in stress and accent, 169–181. Los Angeles: Department of Linguistics, University of Southern California. Cho, Taehong & Patricia Keating. 2001. Articulatory and acoustic studies on domaininitial strengthening in Korean. Journal of Phonetics 29. 155 –190. Christmas, Raymond B. & J. Elisabeth Christmas. 1975. Kupia phonemic summary. Kathmandu: Tribhuvan University. Colarusso, John. 1992. A grammar of the Kabardian language. Calgary: University of Calgary Press. Crosswhite, Katherine. 1998. Segmental vs. prosodic correspondence in Chamorro. Phonology 15. 281–316. Crosswhite, Katherine. 2001. Vowel reduction in Optimality Theory. New York: Routledge. Crosswhite, Katherine. 2004. Vowel reduction. In Hayes et al. (2004), 191–231. de Jong, Kenneth J. 1995. The supraglottal articulation of prominence in English: Linguistic stress as localized hyperarticulation. Journal of the Acoustical Society of America 97. 491–504. de Lacy, Paul. 2001. Markedness in prominent positions. MIT Working Papers in Linguistics 40. 53–66. (ROA-282.) Delgutte, Bertrand 1982. Some correlates of phonetic distinctions at the level of the auditory nerve. In Rolf Carlson & Björn Granström (eds.) The representation of speech in the peripheral auditory system, 131–150. Amsterdam: Elsevier Biomedical. Delgutte, Bertrand. 1997. Auditory neural processing of speech. In W. J. Hardcastle & John Laver (eds.) The handbook of phonetic sciences, 507–538. Oxford & Cambridge, MA: Blackwell. Delgutte, Bertrand & Nelson Y. S. Kiang. 1984. Speech coding in the auditory nerve. I: Vowel-like sounds. Journal of the Acoustical Society of America 75. 879 –886. Dilley, Laura C., Stefanie Shattuck-Hufnagel & Mari Ostendorf. 1996. Glottalization of wordinitial vowels as a function of prosodic structure. Journal of Phonetics 24. 423–444. Donohue, Mark. 1999. A grammar of Tukang Besi. Berlin & New York: Mouton de Gruyter. Eek, Arvo. 1975. Observations on the duration of some word structures II. Estonian Papers in Phonetics 1975. 7–51. Everett, Keren. 1998. Acoustic correlates of stress in Pirahã. Journal of Amazonian Languages 1. 104 –162. Flemming, Edward. 1994. The role of metrical structure in segmental rules. Papers from the Annual Meeting of the North East Linguistic Society 24. 97–110. Fry, D. B. 1955. Duration and intensity as physical correlates of linguistic stress. Journal of the Acoustical Society of America 27. 765 –768. Fry, D. B. 1958. Experiments in the perception of stress. Language and Speech 1. 126 –152. Giginejshvili, B. K. 1977. Sravnitel′naja fonetika dagestanskix iazykov. Tbilisi: Tbilisi University. Goddard, Ives. 1979. Delaware verbal morphology: A descriptive and comparative study. New York: Garland. Goddard, Ives. 1982. The historical phonology of Munsee. International Journal of American Linguistics 48. 16 –48. Gonzales, A. 1970. Acoustic correlates of accent, rhythm, and intonation in Tagalog. Phonetica 22. 11– 44. Gordon, Matthew. 1995. Acoustic properties of primary and secondary word-level stress in Estonian. 
Poster presented at the 130th Meeting of the Acoustical Society of America, St Louis.

Stress: Phonotactic and Phonetic Evidence

22

Gordon, Matthew. 1997. Phonetic correlates of stress and the prosodic hierarchy in Estonian. In Jaan Ross & Ilse Lehiste. Estonian prosody: Papers from a symposium, 100–124. Tallinn: Institute of Estonian Language. Gordon, Matthew. 1998. A fortition-based approach to Balto-Fennic-Sámi consonant gradation. Folia Linguistica Historica 18. 49–79. Gordon, Matthew. 2001. Syncope induced metrical opacity as a weight effect. Proceedings of the West Coast Conference on Formal Linguistics 20. 206–219. Gordon, Matthew. 2002. A phonetically-driven account of syllable weight. Language 78. 51–80. Gordon, Matthew. 2003. The phonology of pitch accents in Chickasaw. Phonology 20. 173–218. Gordon, Matthew. 2004. A phonetic and phonological study of word-level stress in Chickasaw. International Journal of American Linguistics 70. 1–32. Gordon, Matthew. 2005. A perceptually-driven account of onset-sensitive stress. Natural Language and Linguistic Theory 23. 595 –653. Gordon, Matthew & Ayla Applebaum. 2010. Acoustic correlates of stress in Turkish Kabardian. Journal of the International Phonetic Association 40. 35 –58. Gordon, Matthew, Pamela Munro & Peter Ladefoged. 2000. Some phonetic structures of Chickasaw. Anthropological Linguistics 42. 366 –400. Gordon, Matthew & Pamela Munro. 2007. A phonetic study of final vowel lengthening in Chickasaw. International Journal of American Linguistics 73. 293–330. Gouskova, Maria. 2003. Deriving economy: Syncope in Optimality Theory. Ph.D. dissertation, University of Massachusetts, Amherst (ROA-610). Gregores, Emma & Jorge A. Suárez. 1967. A description of colloquial Guaraní. The Hague: Mouton. Gruzov, Leonid Petrovich. 1960. Sovremenyi mariiskii yazyk: Fonetika. Joshkar-ola: Mariiskoe Knizhnoe Izdatel′stvo. Hayes, Bruce. 1985. Iambic and trochaic rhythm in stress rules. Proceedings of the Annual Meeting, Berkeley Linguistics Society 11. 429 –446. Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press. Hayes, Bruce, Robert Kirchner & Donca Steriade (eds.) 2004. Phonetically based phonology. Cambridge: Cambridge University Press. Helimski, Eugene. 1998. Nganasan. In David Abondolo (ed.) The Uralic languages, 480 –515. London & New York: Routledge. Horne, Elinor Clark. 1974. Javanese–English dictionary. New Haven & London: Yale University Press. Hualde, José Ignacio. 1989. Autosegmental and metrical spreading in the vowel-harmony systems of northwestern Spain. Linguistics 27. 773–805. Hualde, José Ignacio. 1998. Asturian and Cantabrian metaphony. Rivista di Linguistica 10. 99–108. Hyde, Brett. 2002. A restrictive theory of metrical stress. Phonology 19. 313–359. Hyman, Larry M. 1989. Accent in Bantu: An appraisal. Studies in the Linguistic Sciences 19. 115–134. Hyman, Larry M. 2001. The limits of phonetic determinism in phonology: *NC revisited. In Elizabeth Hume & Keith Johnson (eds.) The role of speech perception in phonology, 141–185. San Diego: Academic Press. Hyman, Larry M. 2006. Word-prosodic typology. Phonology 23. 225 –257. Jassem, Wiktor, John Morton & Maria Steffen-Batóg. 1968. The perception of stress in synthetic speech-like stimuli by Polish listeners. Speech Analysis and Synthesis 1. 289–308. Jun, Sun-Ah. 1993. The phonetics and phonology of Korean prosody. Ph.D. dissertation, Ohio State University. Jun, Sun-Ah & Cécile Fougeron. 1995. The accentual phrase and the prosodic structure of French. In Kjell Elenius & Peter Branderud (eds.) 
Proceedings of the 13th International Congress of the Phonetic Sciences, vol. 2, 722–725. Stockholm: KTH & Stockholm University.

23

Matthew Gordon

Kakumasu, James. 1986. Urubu-Kaapor. In Desmond C. Derbyshire & Geoffrey K. Pullum (eds.) Handbook of Amazonian languages, vol. 3, 326–406. Berlin & New York: Mouton de Gruyter. Keating, Patricia, Taehong Cho, Cécile Fougeron & Chai-Shune Hsu. 2003. Domain-initial articulatory strengthening in four languages. In John Local, Richard Ogden & Rosalind Temple (eds.) Phonetic interpretation: Papers in laboratory phonology VI, 145–163. Cambridge: Cambridge University Press. Kirchner, Robert. 2001. An effort-based approach to consonant lenition. New York & London: Routledge. Kirchner, Robert. 2004. Consonant lenition. In Hayes et al. (2004), 313–345. Kryvitskij, A. A. & A. I. Podluzhni. 1994. Uchebnik belorusskogo jazyka dlja samoobrazovanija. Minsk: Vysheishaia Shkola. Lavoie, Lisa. 2001. Consonant strength: Phonological patterns and phonetic manifestations. New York: Garland. Leer, Jeff. 1985. Prosody in Alutiiq (the Koniag and Chugach dialects of Alaskan Yupik). In Michael Krauss (ed.) Yupik Eskimo prosodic systems: Descriptive and comparative studies, 77–133. Fairbanks: Alaska Native Language Center. Lehiste, Ilse. 1965. Vowel quantity in word and utterance in Estonian. Congressus Secundus Internationalis Fenno-Ugristarum, 293–303. Helsinki: Societas Fenno-Ugrica. Lehiste, Ilse. 1966. Consonant quantity and phonological units in Estonian. Bloomington: Indiana University Press. Levi, Susannah V. 2005. Acoustic correlates of lexical accent in Turkish. Journal of the International Phonetic Association 35. 73–97. Lieberman, Philip. 1960. Some acoustic correlates of word stress in American English. Journal of the Acoustical Society of America 32. 451–454. Lombardi, Linda. 2001. Why Place and Voice are different: Constraint-specific alternations in Optimality Theory. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 13–45. Cambridge: Cambridge University Press. Maiden, Martin. 1995. Evidence from the Italian dialects for the internal structure of prosodic domains. In John Charles Smith & Martin Maiden (eds.) Linguistic theory and the Romance languages, 115–131. Amsterdam & Philadelphia: John Benjamins. Mellander, Evan W. 2001. Quantitative processes in trochaic systems. Proceedings of the West Coast Conference on Formal Linguistics 20. 414 –427. Michelson, Karin. 1988. A comparative study of Lake-Iroquoian accent. Dordrecht: Kluwer. Munro, Pamela. 1996. The Chickasaw sound system. Unpublished ms., University of California, Los Angeles. Munro, Pamela 2005. Chickasaw. In Heather Hardy & Janine Scancarelli (eds.) Native languages of the southeastern United States, 114–156. Lincoln: University of Nebraska Press. Munro, Pamela & Peter John Benson. 1973. Reduplication and rule ordering in Luiseño. International Journal of American Linguistics 39. 15 –21. Munro, Pamela & Charles Ulrich. 1984. Structure-preservation and Western Muskogean rhythmic lengthening. Proceedings of the West Coast Conference on Formal Linguistics 3. 191–202. Munro, Pamela & Catherine Willmond. 1994. Chickasaw: An analytical dictionary. Norman, OK: University of Oklahoma Press. Munro, Pamela & Catherine Willmond. 2005. Chikashshanompa’ kilanompoli’. Los Angeles: UCLA Academic Publishing. Nivens, Richard. 1992. A lexical phonology of West Tarangan. In Donald Burquest & Wyn Laidig (eds.) Phonological studies in four languages of Maluku, 127–227. Arlington: Summer Institute of Linguistics. Penny, Ralph J. 1978. Estudio estructural del habla de Tudanca. Tübingen: Niemeyer.

Stress: Phonotactic and Phonetic Evidence

24

Pierrehumbert, Janet B. & David Talkin. 1992. Lenition of /h/ and glottal stop. In Gerard J. Docherty & D. Robert Ladd (eds.) Papers in laboratory phonology II: Gesture, segment, prosody, 90 –117. Cambridge: Cambridge University Press. Plomp, Reinier 1964. Rate of decay of auditory sensation. Journal of the Acoustical Society of America 36. 277–282. Poser, William J. 1990. Evidence for foot structure in Japanese. Language 66. 78 –105. Potisuk, Siripong, Jackson Gandour & Mary P. Harper. 1996. Acoustic correlates of stress in Thai. Phonetica 53. 200 –220. Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19 –100. Reed, Irene, Osahito Miyaoka, Steven Jacobson, Paschal Afcan & Michael Krauss. 1977. Yup’ik Eskimo grammar. Fairbanks: Alaska Native Language Center. Rehg, Kenneth L. 1993. Proto-Micronesian prosody. In Jerold A. Edmondson & Kenneth J. Gregerson (eds.) Tonality in Austronesian languages, 25 –46. Honolulu: University of Hawaii Press. Revithiadou, Anthi. 2004. The Iambic/Trochaic Law revisited: Lengthening and shortening in trochaic systems. Leiden Papers in Linguistics 1. 37– 62. Sammallahti, Pekka. 1988. Historical phonology of the Uralic languages. In Sinor (1988), 478–554. Schütz, Albert J. 1985. The Fijian language. Honolulu: University of Hawaii Press. Selkirk, Elisabeth. 1984. Phonology and syntax: The relation between sound and structure. Cambridge, MA: MIT Press. Sinor, Denis (ed.) 1988. The Uralic languages: Description, history and foreign influences. New York: Brill. Sluijter, Agaath M. C. & Vincent J. van Heuven. 1996a. Spectral balance as an acoustic correlate of linguistic stress. Journal of the Acoustical Society of America 100. 2471–2485. Sluijter, Agaath M. C. & Vincent J. van Heuven. 1996b. Acoustic correlates of linguistic stress and accent in Dutch and American English. Proceedings of the International Conference on Spoken Language Processing 4, 630–633. Smith, Jennifer. 2000. Prominence, augmentation, and neutralization in phonology. Proceedings of the Annual Meeting, Berkeley Linguistics Society 26. 247–257. Smith, Jennifer. 2004. Phonological augmentation in prominent positions. New York: Routledge. Smith, Robert L. 1979. Adaptation, saturation and physiological masking in single auditory nerve fibers. Journal of the Acoustical Society of America 65. 166–178. Steriade, Donca. 1997. Licensing laryngeal features. Unpublished ms., University of California, Los Angeles. Taff, Alice, Lorna Rozelle, Taehong Cho, Peter Ladefoged, Moses Dirks & Jacob Wegelin. 2001. Phonetic structures of Aleut. Journal of Phonetics 29. 231–271. Teranishi, R. 1980. Two-moras-cluster as a rhythm unit in spoken Japanese sentence or verse. Paper presented at the 99th Meeting of the Acoustical Society of America, Atlanta. Tereshchenko, N. M. 1979. Nganasankii iazyk. Leningrad: Nauka. Topuria, G. V. 1974. On one regularity in the system of preruptives in the Lezgian language. Annual of Ibero-Caucasian Linguistics 1. 180 –184. Vaysman, Olga. 2009. Segmental alternations and metrical theory. Ph.D. dissertation, MIT. Viemeister, Neal. 1980. Adaptation of masking. In G. van den Brink & F. A. Bilsen (eds.) Psychophysical, physiological and behavioural studies in hearing, 190–198. Delft: Delft University Press. Walker, Rachel. 2004. Vowel feature licensing at a distance: Evidence from Northern Spanish language varieties. Proceedings of the West Coast Conference on Formal Linguistics 23. 787–800. Walker, Rachel. 2005. Weak triggers in vowel harmony. 
Natural Language and Linguistic Theory 23. 917–989.

25

Matthew Gordon

Wightman, Colin W., Stefanie Shattuck-Hufnagel, Mari Ostendorf & Patti J. Price. 1992. Segmental durations in the vicinity of prosodic phrase boundaries. Journal of the Acoustical Society of America 91. 1707–1717. Williams, Briony. 1985. Pitch and duration in Welsh stress perception: The implications for intonation. Journal of Phonetics 13. 381–406. Wilson, J. P. 1970. An auditory afterimage. In R. Plomp & G. F. Smoorenberg (eds.) Frequency analysis and psychophysics of hearing, 303–315. Leiden: Sijthoff. Woodbury, Anthony. 1981. Study of the Chevak dialect of Central Alaskan Yupik. Ph.D. dissertation, University of California, Berkeley. Yu, Alan C. L. 2004. Explaining final obstruent voicing in Lezgian: Phonetics and history. Language 80. 73–97. Zoll, Cheryl. 1998. Positional asymmetries and licensing. Unpublished ms., MIT (ROA-282).

40 The Foot Michael Hammond

1

Overview

The metrical foot organizes the syllables of words into higher-order units built around stressed syllables. In this chapter, we review the evidence for, and structure of, the foot. Along the way, we treat some of the major issues that have arisen in the development of this notion. The organization of this chapter is as follows. First, we review the background against which the foot was proposed: linear generative phonology and then early footless metrical phonology. We then turn to the earliest foot proposals and the arguments advanced at the time, including arguments from stress theory and prosodic morphology. We then go on to consider how the theory of the foot has changed in Optimality Theory (OT). (For a more general discussion of stress, see chapter 39: stress: phonotactic and phonetic evidence.)

2

Background

In this section, we lay out the necessary background for understanding the earliest proposals about the foot. First and foremost is the background of generative phonology generally and Chomsky and Halle (1968) specifically. We then go on to consider the foundation for the foot laid in early metrical theory.

2.1

Generative phonology and SPE

Here we discuss Chomsky and Halle (1968; henceforth SPE) as the foundation for the foot.1 The main contribution of SPE for our purposes is an explicit treatment of the regularities of stress in English. The analysis is comprised of a number of 1

Some of the issues in this section are developed further in chapter 51: the phonological word, chapter 116: sentential prominence in english and chapter 41: the representation of word stress. The Blackwell Companion to Phonology. Edited by Marc van Oostendorp, Colin J. Ewen, Elizabeth Hume, and Keren Rice. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd. DOI: 10.1002/9781444335262.wbctp0040

2

Michael Hammond

rules written in the specific formalism proposed there. Their full rule for main stress in English is given in (1) (SPE: 240). (1)

A G−tense J A G avoc J D D G V → [1 stress] / X __ C0 B HcstressK C10 B HaconsK E E I CI V L C I −ant L F F

/ __

5 1 G−stressJ 4 4 〈 +C 〉 H−tense K [+cons] 1 0 1 0 4 J ( fik)At # 4 ! I−cons L 6 L (@ ) 2 [+D]C0 $ 4 4 G −seg J 4 〈1 〉1 C0[bstress]C0〈2V0C0〉2 4 I〈 −FB〉 L 2 2 7 3

Conditions:

〈NSP〈1 VA〉1〉

!2# b=@ $ 1 c≤ 2 X contains no internal #

Setting aside morphological and diacritic variables, and focusing on nouns in particular, the main stress rule assigns main stress and secondary stress to nouns in the following way. First, a [+tense] vowel in a final syllable gets primary or secondary stress, e.g. kangaroo [‘kæIgH’ru] or chickadee [’Œ>kH‘di]. If the final syllable does not have a stress, and the penultimate vowel is followed by appropriate consonants, then it gets stress, e.g. agenda [H’–endH]. Likewise, if the penultimate vowel is [+tense], it gets stress as well, e.g. aroma [H’romH]. Finally, in other cases, the antepenult receives stress, e.g. America [H’merHkH] or remedy [’remHdi]. Abstracting away from the formalism of the time, what we see is a restriction of the primary stress to the last three syllables of the word and a pressure to stress syllables of appropriate weight. The alternating stress rule in (2) (SPE: 240) is responsible for stresses further to the left, e.g. the first stress in kangaroo [‘kæIgH’ru]. Like the main stress rule, it places stress subject to a basically alternating pattern. (2)

V → [1 stress] / __ C0 (=) C0VC0[1 stress]Co]NAV

Both rules alternate in a similar way; both rules assign stress with respect to a following landmark. The fact that both rules exhibit similar patterns and the fact that this kind of alternation is ubiquitous in other languages was a missed generalization in SPE, and one that found an explanation in the development of the metrical foot. A second important aspect of the analysis proposed in SPE is that, unlike other phonological features, the feature [stress] exhibited more than two levels in the phonology. Thus, while a feature like [high] could have the values [+high] or [−high] in the phonology, the feature stress could have an infinite number of values: [0stress], [1stress], [2stress], [3stress], etc. These different numerical values corresponded to degrees of stress that were held to be contrastive. Moreover, the values had a rather odd interpretation, where [0stress] is the least stressed element and [1stress] has the most stress. Since the values can increase without bound, as the integer value increases, the degree of stress gets smaller, but never quite reaches [0stress].

The Foot (3)

Value

Interpretation

[0stress] [1stress] [2stress] [3stress] [nstress]

stressless primary stress secondary stress tertiary stress etc.

3

What made this interpretation work was the Stress Subordination Convention (SSC): “when primary stress is placed in a certain position, then all other stresses in the string under consideration at that point are automatically weakened by one” (Chomsky and Halle 1968: 16–17). The SSC governed application of the stress rules through the cycle (see chapter 85: cyclicity). The cycle held that phonological rules were reapplied as morphological and syntactic operations proceeded. Let us see how this works with the Nuclear Stress Rule (NSR) and Compound Stress Rule (CSR) (Chomsky and Halle 1968: 18): 1

(4)

. . . V . . . ]NAV # G1 stressJ !__ 1 I V L → [1 stress] / @V . . . __ . . . ] $

a.

CSR

b.

NSR

The NSR basically re-assigns primary stress to the rightmost stress of a domain. The CSR re-assigns primary stress to the penultimate primary stress of a domain. This re-assignment of stress is vacuous with respect to the particular vowel that is targeted, but has consequences for all other vowels in the domain via the SSC (Chomsky and Halle 1968: 22). (5)

[NP John’s 1 – – 2

[N [N black 1 1 1 1

board]N eraser ]N ]NP 1 2 3 4

1 – 2 3

NSR CSR CSR NSR

First, stress is assigned to each word in isolation. Then blackboard is assembled, and main stress is assigned to the penultimate/first primary stress, since this is a compound. Next, eraser is appended, and compound stress is re-assigned. Notice how, since there are only two primary stresses present at that point, the penultimate primary is actually the third stress from the right. Finally, we add John’s, and main stress is re-assigned to the rightmost primary, which again is the third stress from the right. Chomsky and Halle thus contributed four key elements to early stress theory: binary alternation, the cycle, the n-ary [stress] feature, and the Stress Subordination Convention (SSC). All of these figured in the development of early metrical theory.

2.2

Early metrical theory

The earliest versions of metrical theory directly addressed the n-ary stress feature, the cycle, and the SSC. It was only later that the foot was introduced.

4

Michael Hammond

Rischel (1972) was the earliest proposal to replace aspects of the SPE stress system with a hierarchical tree. Specifically, Rischel proposes that compound stress in Danish does not require a cycle and that degrees of stress can be easily read off the morphosyntactic tree. Compare the following two compounds in Danish: (6)

fædrelandssang perlehalsbånd

‘patriotic song’ ‘pearl necklace’

[father-land]song pearl[neck-band]

In the first case, the compound is left-branching and has the stress values 132. In the second case, the compound is right-branching and has the stress pattern 123. The SPE rules given for English here would actually accommodate these directly, as shown in the derivations below. (7) [N [N fædre

lands ]N sang ]N

1 1 1 (8)

[N perle

1 2 3 [N hals

1 – 1

1 1 2

1 – 2

NSR CSR CSR

bånd ]N ]N 1 2 3

NSR CSR CSR

Rischel proposes that cyclic effects can be gotten by reading stress levels directly off of trees. He gives trees like the following for the examples above. The pluses and minuses reflect the relative strength of left and right branches and the numbers on nodes reflect the relative effects of those strengths at different levels of the tree. (9)

+ + fædre

2



1

− sang

lands

+ perle

1

− +

hals

2

− bånd

Rischel does not propose a specific algorithm for reading stress values off trees like these, but it is easy to see that various interpretations will produce what appear to be reasonable values. The gist is that reapplication of stress rules per se is not required to get the same kind of cyclic effects cited above from SPE.2 Liberman and Prince (1977) made a similar proposal a few years later, proposing a fairly complete analysis of English stress along similar lines. Basically, they propose that [stress] be treated as a binary feature, with the values [+stress] and 2

Ultimately, Kiparsky (1979) argued that cyclicity is still necessary in a metrical theory of stress. The debate resurfaced again a few years later. See Hammond (1989), Halle and Kenstowicz (1991), and Cole and Coleman (1992) for more discussion.

The Foot

5

[−stress]. Degrees of stress would follow from tree structures erected over the stress values. Trees are binary branching, with each pair of nodes labeled either “strong” (s) or “weak” (w). Liberman and Prince posit the following labeling convention (1977: 257). (10)

In a configuration [c A B ]c: a. b.

NSR: If C is a phrasal category, B is strong. CSR: If C is a lexical category, B is strong iff it branches.

In phrases, clause (a) of the convention places strong nodes uniformly on the right. (11) w John

s

s left w Mary

w loves

s John

In compounds, labeling depends on branching. In a left-branching compound, left nodes are uniformly strong. When right nodes branch, however, by clause (b), they are strong. (12) s s black

s w board

w eraser

w union

s w finance committee

In the case on the left, all right nodes are non-branching, so the left node of each pair of nodes is labeled strong. In the case on the right, finance committee has a non-branching right node, so its left node is labeled strong, but in union finance committee the right node is branching, so it is labeled strong. Like Rischel (1972), Liberman and Prince replace degrees of stress with a treebased algorithm. However, Liberman and Prince go one step further and propose a similar word-internal system. There are three parts to their system. First, [+stress] values are assigned to vowels in words. They initially propose two separate rules for this, much as in SPE. There is the English Stress Rule (ESR (preliminary version); 1977: 272). (13)

V → [+stress] / __ C0 (h(C)) (hC0) #

This rule assigns the rightmost [+stress] in a word. There is also a Stress Retraction Rule (SRR; 1977: 278). (14)

V → [+stress] / __ C0 (h(C))a (VC0)b

V [+stress]

Having two separate rules misses a generalization and Liberman and Prince move a major step forward from SPE in recognizing this:

Michael Hammond

6

Both rules measure leftward from a fixed point of reference, the ESR from a word boundary, the SRR from a stressed syllable; and the standard of measure is in both cases virtually the same. This parallelism strongly suggests that we are witnessing a single unified process of stress assignment, repeating itself across the word, feeding on its own output. (1977: 278)

To deal with this, Liberman and Prince propose a unified English Stress Rule (ESR (iterative version); 1977: 278). (15)

V → [+stress] / __ C0 (V(C))a ( V C0)b ( V X)c# d [+stress] Conditions: ¬c ⊃ d; ¬a, ¬b under certain morphological and lexical circumstances.

A variety of conditions must be imposed on the rule, much as on the SPE equivalents. That said, the rule captures the intuition that there is a similar pattern of iteration. The second part of Liberman and Prince’s algorithm is a key step in the development of the metrical foot. Once values of stress have been assigned to a string by the rule(s) above, syllables are gathered into trees. Above the word level, these trees correspond to syntactic and morphological structures. Below the word level, there are two essential components. First: every sequence of syllables + −, + − −, + − − −, etc., forms a metrical tree. Because of the condition limiting [−stress] to weak positions, and because of the bivalent (binary branching) character of metrical trees, the structure and labeling of the sequences is uniquely determined. (1977: 266)

By this algorithm, we get trees like the following: (16) s +

s

w − s +

s s

w −

w −

s +

w −

w −

w −

...

These trees are then gathered into larger right-branching trees: N

(17)

#

D

D



D

D

D

#

The Foot

7

As noted above, the labeling of the lower-level trees is unambiguous, because of a general constraint against [+stress] in weak position. The higher-level trees are labeled in accord with the Lexical Category Prominence Rule (LCPR; 1977: 270). (18)

In the configuration [N1N2], N2 is strong iff it branches.

Let us take a look at an example: Winnepesaukee [‘w>nHpH’sDki]. First, [+stress] values are assigned by the ESR, producing: (19)

+ − −+ − Winnepesaukee

Syllables are gathered into feet as below: (20) s s +

w −

w −

s +

w −

Finally, the feet are gathered into a tree, the right node of which is labeled strong, since it is branching. (21) w

s

s s +

w −

w −

s +

w −

What is important about this entire tree-construction and tree-labeling procedure is that it explicitly recognizes two levels: a foot level and a higher word level. This is the first step toward an explicit theory of the foot. Liberman and Prince showed how the foot could be employed in a reanalysis of the basic stress facts of English that SPE introduced.

3

Why we need feet

The next step was the parametric elaboration of the foot. At around the same time as Liberman and Prince (1977), Hyman (1977) offered the first typological treatment of stress. While he was not able to go very far in terms of the technical analysis offered, this paper was an important catalyst in forcing phonologists interested in stress to look at the broader typological implications of their work. The first parametric approaches to the metrical foot showed up in Halle and Vergnaud (1978) and McCarthy (1979), but the most influential early proposal was that of Hayes (1980). Let us look at the Hayes proposal in some depth.

8

Michael Hammond

Hayes offered a theory of the foot based on the trees proposed in Liberman and Prince. In particular, feet were parameterized for the following: (22)

Headedness Is the designated terminal element – the strongest element of the foot – on the left edge or the right? b. Boundedness Are feet binary or unbounded? Do feet contain only two syllables or as many as possible? c. Directionality Are feet built left-to-right or right-to-left? d. Iterativity Are feet constructed iteratively or not? That is, is only a single foot built on some edge or are as many feet built as possible? e. Quantity-sensitivity There are three choices here. First, feet can be quantity-sensitive (QS): weak nodes cannot dominate heavy syllables. Second, feet can be quantity-insensitive (QI): syllable weight is irrelevant. Last, feet can be obligatory-branching (OB): in OB feet, strong nodes must dominate heavy syllables and weak nodes may not. f. Syllable weight If feet are sensitive to syllable weight, are they sensitive to the weight of the syllable nucleus or the syllable rhyme? a.

Let us go through some of the examples Hayes cites in support of this theory. Maranungku (Tryon 1970) is cited as an example of left-headed binary left-to-right QI feet. Here, main stress falls on the first syllable of the word and secondary stresses fall on alternating syllables to the right. (23)

’tiralk ’mere‘pet ’jangar‘mata ’langka‘rate‘ti ’wele‘pene‘manta

‘saliva’ ‘beard’ ‘the Pleiades’ ‘prawn’ ‘kind of duck’

Here are two examples: (24) s w ’me re ‘pet

s w s w ’jan gar ‘ma ta

Notice how the left-to-right construction of feet is apparent from the fact that in words with an odd number of syllables, a monosyllabic, or degenerate, foot is built on the right. The difference between primary and secondary stress is captured by positing a higher level of structure: the word tree. These are left- or right-headed unbounded trees built on the roots of feet (see chapter 41: the representation of word stress). In Maranungku, the word tree is left headed.

The Foot

9

(25) w

s

s w ’me re ‘pet Note that in this and in subsequent diagrams we circle the roots of feet when the word tree is represented.3 Warao (Osborn 1966) provides an example of right-to-left footing with left-headed binary feet. (26)

‘japu‘ruki‘tane’hase ‘naho‘roa‘haku’tai ji‘wara’nae e‘naho‘roa‘haku’tai

‘verily to climb’ ‘the one who ate’ ‘he finished it’ ‘the one who caused him to eat’

Warao differs from Maranungku in that, in words with an odd number of syllables, there is no initial degenerate foot; rather that foot is removed by an additional destressing rule. For example, [ji‘wara’nae] is first footed as follows: (27) s w s w ji ‘wa ra ’na e This intermediate representation is then converted to: (28) s w s w ji ‘wa ra ’na e The word tree in Warao is right headed. Hayes assumes that unfooted syllables are adjoined as weak nodes to the word tree. For example: (29) s w

s

w s w s w ji ‘wa ra ’na e Hayes cites Weri (Boxwell and Boxwell 1966) as an example of binary right-headed feet constructed from right to left with a right-headed word tree. 3

Hayes (1980) uses underlining, rather than circles.

10 (30)

Michael Hammond ‘bee’ ‘hair of arm’ ‘mist’ ‘times’

I>n’t>p ‘kÁl>’pÁ Á‘lÁa’m>t ‘akÁ‘nete’pel

And again, a couple of examples: (31) s

w

w

w s ‘kÁ l/ ’pÁ

s

w s w s Á ‘lÁ a ’m/t

Completing the set of binary QI systems, Hayes cites Southern Paiute (Sapir 1930) as an example of binary right-headed feet built left to right. Main stress is assigned with a left-headed word tree. (32)

mant’çac‘qac ma’roÜ‘qwaj’qq•c

‘to hold out one’s hands’ ‘(I) stretch it’

There are several complications to the Southern Paiute system, both noted by Hayes. First, it appears as if there are elements that might be analyzed as long vowels, but that must be treated as adjacent short vowels. (Hayes cites additional evidence for this claim.) In addition, there is an another mechanism that prevents footing of the final syllable: extrametricality (see chapter 43: extrametricality and non-finality). With these provisos, and marking extrametricality with angled brackets, we get structures like these: (33) s s w

s s

w s w s w mant ’ça ‘qa < >

s

w

w

w s w s ma ’ro Ü ‘qwa j’q

w

Southern Paiute is a rather complex case. A simpler example is Araucanian, as described by Echeverría and Contreras (1965). There are some complications here too, but stress in Araucanian basically falls on even-numbered syllables counting from the left.4

4

Complications involve three-syllable words ending in a consonant: these have a final secondary stress, e.g. [.u’Iu‘lan] ‘I do not speak’. There are also contextual effects on short words.

The Foot (34)

wu’le Íi’panto e’lumu‘ju e’lua‘enew ki’mufa‘luwu‘laj

11

‘tomorrow’ ‘year’ ‘give us’ ‘he will give me’ ‘he pretended not to know’

The analysis here is binary right-headed feet built left to right. (35) s

w

w s w s e ’lu mu ‘ju Monosyllabic feet are generally disallowed (or removed) in odd-syllabled cases: (36) s w s w Íi ’pan to Let us now consider quantity-sensitivity (QS; see chapter 57: quantity-sensitivity). This parameter allows heavy syllables to attract stress. Hayes cites Tübatulabal (Voegelin 1935) as an example of right-to-left right-headed QS bounded feet. The generalization is that stress falls on (a) the final syllable, (b) any long vowel, and (c) any vowel that is two syllables to the left of a stress. Since stresses are unranked, there is no word tree. (37) ’taa’hawi’laap ‘in the summer’ pq’tqpq’tqqdi’nat ‘he is turning it over repeatedly’ ’qq’?qqi’?aani’œa ‘he will meat-fast’ ’pDnih’wqn ‘of his own skunk’ Here are two examples of the footings produced by these parameter settings. (38) w s ’taa ’ha wi ’laap

’qq

’?qq

w i

s w ’?aa ni

s ’œa

Notice that long vowels count as heavy in Tübatulabal; thus QS refers to the nucleus, not the rhyme.5 Hayes’ theory of feet also allows for unbounded feet. When these are quantityinsensitive, they simply position stress on the first or last syllable of the word. No actual examples are cited, but we would expect trees like the following for a language with initial stress and QI left-headed unbounded feet: 5

The data cited by Hayes do not establish unequivocally that codas do not contribute to weight.

Michael Hammond

12

s

(39) s s s w ’q q

w q

w q

Notice how, since the foot expands to fill the domain, no more than a single foot will ever be built. A language like Czech, with regular initial stress, might qualify as such a system. Hayes cites Eastern Cheremis (Sebeok and Ingemann 1961) as an example of unbounded left-headed QS footing: a single such foot is built on the right edge of the word; a word tree is not needed. The generalization is that the rightmost long vowel bears stress. If there is no such vowel, the initial vowel bears stress. (40)

œiin’Œaam œlaa’paaÚHm ’pyygHlmH ’kiidHœtHÚH ’tHlHzHn

‘I sit’ ‘his hat (acc)’ ‘cone’ ‘in his hand’ ‘moon’s’

This footing produces examples like the following. Notice how the foot starts at the right edge, expands as far as possible subject to the QS restriction that its weak nodes cannot dominate a long vowel. Notice that QS here is sensitive to only vowel length. (41) w s œlaa ’paa ,Hm

s

s s ’tH

w lH

w zHn

s s w ’kii dHœ

w tH

w ,H

The OB parameter is required for languages that exhibit a curious parallelism to languages like Eastern Cheremis. Khalkha Mongolian (Street 1963) is an example of this sort. Stress in Khalkha falls on the leftmost long vowel in the word; in the absence of a long vowel, stress falls on the initial vowel.6 (42)

6

bos’guul bari’aad xojHrdu’gaar ga’raasaa ’ali ’xøtHbHrH

‘fugitive’ ‘after holding’ ‘second’ ‘from one’s own hand’ ‘which’ ‘leadership’

See Walker (1997) for a different description and analysis.

The Foot

13

When there are long vowels in the word, the pattern could be treated with a single right-headed unbounded QS foot built on the left edge of the word. The problem with this is that it produces an incorrect result in the case of words with no long vowels: (43) s

w s ga ’raa saa s ba

w s *a ’li

w w ri ’add

To treat such systems, Hayes proposes a new parameter and makes interesting use of word trees. Specifically, he proposes that QS feet can be further restricted so that the strong/dominant node must dominate a heavy syllable: obligatory-branching (OB). If no such syllable is available, no foot is built. In the case of Khalkha, a single right-headed unbounded OB foot is built on the left edge of the word and a left-headed word tree is constructed. If there is at least one long vowel in the word, the OB foot will be built over the leftmost one, assigning stress to it, and the word tree is vacuous.

(44) [s/w tree: ga ’raa saa]

If there are no long vowels, then the OB foot fails to be constructed. In that case, the word tree is still built, taking syllables as terminals. Since the word tree is left-headed, this results in initial stress.

(45) [s/w tree: ’xø tH bH rH, a left-headed word tree over bare syllables]

Obligatory-branching allows for a treatment of systems where stress is assigned to the first or last of the available heavy syllables and, in the absence of heavy syllables, the same end of the word gets primary stress. This is in contrast to systems like Eastern Cheremis, where the opposite edge of the word gets the default stress. These latter systems are treated with unbounded QS feet.7

7 The parallel between these systems was first discussed by Kiparsky (1973). An alternative formalization of OB footing was proposed by Hammond (1986).

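The contrast between unbounded QS footing and OB footing is easy to state procedurally. The following Python sketch is our own illustration, not Hayes’ formalism; the function names and the H/L weight encoding are assumptions. It returns the stressed syllable index for an Eastern Cheremis-type and a Khalkha-type system:

def cheremis_stress(sylls):
    """Unbounded left-headed QS foot at the right edge: stress the
    rightmost long vowel; default to the initial syllable."""
    for i in range(len(sylls) - 1, -1, -1):
        if sylls[i] == 'H':
            return i
    return 0  # default stress at the opposite edge from the foot

def khalkha_stress(sylls):
    """Unbounded right-headed OB foot at the left edge plus a left-headed
    word tree: stress the leftmost long vowel; default to the initial
    syllable (the word tree's head) when no OB foot can be built."""
    for i, s in enumerate(sylls):
        if s == 'H':
            return i
    return 0  # no heavy syllable: the word tree alone assigns initial stress

assert cheremis_stress(list('LHLHL')) == 3   # rightmost long vowel
assert cheremis_stress(list('LLLLL')) == 0   # default: initial
assert khalkha_stress(list('LHLHL')) == 1    # leftmost long vowel
assert khalkha_stress(list('LLLLL')) == 0    # default: initial

The point of the contrast is visible in the default clauses: both systems fall back to the initial syllable, but only in the Khalkha-type system is that the same edge where the foot is built.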

We have only exemplified some of the combinations of parameter settings that this theory allows. The claim of the theory is that all settings can be freely combined and that the set of possible stress languages is fully defined by these settings. The argument for feet per se comes from their role in this parametric system. If the set of possible stress languages is best defined in terms of a theory that adopts the foot as a central descriptive device, then the typology of stress is an argument for the foot.

4 Do we need constituency?

One could argue that while the foot is a central computational device in the system Hayes develops, the full predictive power of the foot in that system is not exploited; specifically, while the foot is a constituent in metrical trees, its constituency plays no specific role in the system. Prince (1983) takes this observation to its logical conclusion, proposing an alternative metrical theory without constituency and without feet. To understand this proposal, let us return to Liberman and Prince (1977) and their theory of the metrical grid. Liberman and Prince propose the metrical grid as a mechanism for identifying the environment for the rhythm rule, the phenomenon whereby stress is shifted in certain contexts. Thus, when a word like ‘thir’teen is combined with ’men, we get a shift of stress in the former: ’thir‘teen ’men. The effect of this shift can be diagrammed as follows:

(46) [s/w trees: ‘thir ’teen ’men → ’thir ‘teen ’men]

Interestingly, the shift also happens with phrases like ’achro‘matic ’lens, but not with phrases like ‘Mon’tana ’cow‘boy. Why this should be the case is not apparent from the metrical trees.

(47) [s/w trees: ‘a chro ’ma tic ’lens → ’a chro ‘ma tic ’lens]


(48) [s/w tree: ‘Mon ’ta na ’cow ‘boy]

To identify cases where rhythm is applicable, Liberman and Prince propose an alternative representation of stress: the metrical grid. The grid represents relative stress as columns of elements where the heights of those columns are projected from metrical trees. Specifically, Liberman and Prince propose the Relative Prominence Projection Rule (1977: 316).

(49)

Relative Prominence Projection Rule (RPPR) In any constituent on which the strong–weak relation is defined, the designated terminal element of its strong subconstituent is metrically stronger than the designated terminal element of its weak subconstituent.

The way this is interpreted is that the column for any element must be tall enough so that the RPPR is satisfied for all node pairings defined by its metrical tree. For the phrases we have discussed we then have grids as below.

(50)          6
     *4      *5
  1   2       3
 thir teen   men

                      11
         *9          *10
  6       7           8
  1   2   3    4      5
  a  chro ma  tic   lens

                 8
      6          7
  1   2   3     4    5
 Mon  ta  na   cow  boy

Each syllable is marked with a grid element. We then go through the tree, making sure that the RPPR is satisfied for each branching node. If it is not, we add a grid element to the relevant column. For example, the second syllable of Montana gets a grid element because of the lowest-level pairing with the syllable na. This same element is sufficient to satisfy the RPPR when we come to the pairing of tana with Mon. On the other hand, in achromatic, the third syllable gets a second-level grid mark because of its pairing with the fourth syllable. It must get an additional grid mark because of the pairing with the first two syllables. We have marked certain grid elements with asterisks. The rhythm rule applies when two columns are too big and too close, i.e. when stresses clash. These properties are formalized in terms of two adjacent elements at two adjacent levels of the grid. For example, in thirteen men, elements 2 and 3 and elements 4 and 5 are adjacent, and we therefore mark elements 4 and 5. Likewise, in achromatic lens, elements 7 and 8 and elements 9 and 10 are adjacent, so again we mark this as a clash. The Rhythm Rule applies on the metrical tree to eliminate these clashes.8 The following grids show the results of the relabelings already shown in the metrical trees.

8 However, see Hayes (1984) for a treatment of rhythm in English not making use of clash.


(51)          6
  4           5
  1    2      3
 thir teen   men

                      11
  9                   10
  6       7            8
  1   2   3    4       5
  a  chro ma  tic    lens

Notice how there are no longer clashes in these grids. The metrical grid thus correctly distinguishes cases like thirteen men and achromatic lens from cases like Montana cowboy. It is, of course, unfortunate that a single representation for stress was not available. There were two broad responses. One response was a proposal by Hammond (1988) for a blended representation.9 The key insight in this proposal was that the designated element of a constituent should be marked the same way regardless of whether the constituent branches or not. This gives us equivalences as below for degenerate and binary left-headed feet. Notice how the heads of the feet have the same representation in Hammond’s approach, but not in Hayes’s approach. (52)

Feet          Hayes (1980)    Hammond (1988)

degenerate       σ               o
                                 σ

binary         s   w             o
               σ   σ             σ   σ

Here is what a phrase like achromatic lens would look like after application of the rhythm rule. (53)

(53) [lollipop diagram: a chro ma tic lens]

While this particular formalism did not survive, this general proposal – that heads of constituents should be marked uniformly – did. Halle and Vergnaud (1987) and Hayes (1995) adopt the following equivalent notations that exhibit the same uniformity.10 These are referred to as “bracketed grids.”

9 This notation came to be referred to as “lollipops.”
10 Idsardi (1992) pursues a representation with similar properties. Since Idsardi’s representation allows unpaired parentheses, it entails a somewhat different notion of constituency.

(54)
Feet          Halle and Vergnaud (1987)    Hayes (1995)

degenerate     x                           (x)
              (x)                           σ
               σ

binary         x                           (x .)
              (x x)                         σ  σ
               σ  σ


The other proposal in response to parallel tree and grid representations of stress was that of Prince (1983). Specifically, Prince proposed a grid-only theory of stress without the foot. The basic idea behind the proposal as far as feet are concerned is that binary patterns of iteration are replaced by appeal to the perfect grid. This device allows for a binary pattern of stress to be assigned in one of four ways, depending on whether the assignment is from left to right or from right to left and on whether one begins with a stressed syllable or a stressless syllable. (55)

Feet           Left-to-right (LR)       Right-to-left (RL)

peak first     x   x   x                     x   x   x
               x x x x x                     x x x x x
               σ σ σ σ σ ...             ... σ σ σ σ σ

trough first     x   x                     x   x
               x x x x x                 x x x x x
               σ σ σ σ σ ...         ... σ σ σ σ σ

Notice how this pattern is achieved with no appeal to binary constituents. To get the effect of word trees and unbounded foot construction, Prince proposes the End Rule. This device assigns a grid mark to the leftmost or rightmost element of the highest level of the grid present. If no stresses have already been assigned, the effect of the End Rule is to assign a stress to the first or last syllable of the word. (56)

End Rule Left
                   x
x x x x            x x x x
σ σ σ σ     →      σ σ σ σ

End Rule Right
                         x
x x x x            x x x x
σ σ σ σ     →      σ σ σ σ

If stresses are already present, however, then the effect of the End Rule is to promote the leftmost or rightmost stress to primary stress:

(57) End Rule Left
                     x
  x   x              x   x
x x x x      →     x x x x
σ σ σ σ            σ σ σ σ

End Rule Right
                         x
  x   x              x   x
x x x x      →     x x x x
σ σ σ σ            σ σ σ σ

Again, no constituents per se are required. Finally, the effect of heavy syllables is achieved by allowing heavy syllables to project their own grid marks. Such marks are placed before assignment of the perfect grid. The perfect grid is interrupted by heavy syllable marks or End Rule marks in different ways. One possibility – the default – is that marked heavy syllables are treated as if they were assigned by the perfect grid itself; such syllables may not have a stress assigned to an adjacent syllable. The other possibility – Forward Clash Override (FCO) – is that iteration by the perfect grid continues right up to the marked syllable. The following schematic examples show how FCO works with (left-to-right trough-first) iteration toward a syllable marked by the End Rule Right. (58)

[−FCO]
        x                  x     x
x x x x x       →      x x x x x
σ σ σ σ σ              σ σ σ σ σ

[+FCO]
        x                  x   x x
x x x x x       →      x x x x x
σ σ σ σ σ              σ σ σ σ σ

Similar effects obtain when a syllable has been marked as a heavy syllable. In the following example, heavy syllables are marked with H; light syllables with an L. (59)

[−FCO]
        x                    x     x
x x x x x x x     →      x x x x x x x
L L L L H L L            L L L L H L L

[+FCO]
        x                    x   x x
x x x x x x x     →      x x x x x x x
L L L L H L L            L L L L H L L

Forward Clash Override, in conjunction with the End Rule, does the work of degenerate footing and destressing. There is an additional complication involved in whether heavy syllables occupy a single grid position or two positions in sequence. Recall that Hayes (1980) had to make a similar move in the case of Southern Paiute. The central result of Prince (1983) for our purposes is that it established that, on purely stress-based arguments, there is no argument for the foot as a constituent.
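Before moving on, Prince’s devices can be summarized procedurally. The following Python sketch is a rough approximation of the perfect grid, the End Rule, and Forward Clash Override, under our own simplifying assumptions: the 0/1 grid encoding, the taken set, and the function names are all ours, not Prince’s notation.

def perfect_grid(n, start_with_peak=False, taken=(), fco=False):
    """Left-to-right alternating stress (the perfect grid). `taken` holds
    positions already marked (End Rule or heavy-syllable projection).
    With fco=False no stress lands next to a marked position; with
    fco=True iteration runs right up to it (Forward Clash Override)."""
    taken = set(taken)
    grid = [1 if i in taken else 0 for i in range(n)]
    expect_peak = start_with_peak
    for i in range(n):
        if i in taken:            # a prior mark counts as part of the grid
            expect_peak = False
            continue
        if expect_peak and (fco or (i - 1 not in taken and i + 1 not in taken)):
            grid[i] = 1
        expect_peak = not expect_peak
    return grid

def end_rule(grid, left=True):
    """Add a mark to the leftmost/rightmost column of the highest level."""
    top = max(grid)
    edge = [i for i, h in enumerate(grid) if h == top]
    grid[edge[0] if left else edge[-1]] += 1
    return grid

# The schematic cases in (58): trough-first iteration toward a final mark.
print(perfect_grid(5, taken={4}, fco=False))   # [0, 1, 0, 0, 1]: clash avoided
print(perfect_grid(5, taken={4}, fco=True))    # [0, 1, 0, 1, 1]: clash tolerated
print(end_rule([0, 1, 0, 1], left=False))      # End Rule Right: [0, 1, 0, 2]

Note that nothing in this sketch groups syllables into constituents; binary iteration and edge prominence are computed directly on the grid, which is precisely the point of Prince’s proposal.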

5 Other arguments for feet

There were three broad classes of additional arguments for feet: phonological processes, poetic meter, and prosodic morphology. Some of these arguments persevere and some do not.


5.1 Flapping

One of the earliest arguments for feet outside of stress per se comes from Kiparsky (1979), who argues that flapping in English is best described in terms of feet (see chapter 113: flapping in american english). The basic environment for flapping is as follows. Coronal stops in English ([t d]) are pronounced as flaps before a stressless vowel and after a [−consonantal] element. Thus, word-initially we have stops whether the following vowel is stressed or stressless, e.g. toe [ˈtʰo] or doe [ˈdo] and tonight [tʰəˈnaɪt] or deny [dəˈnaɪ]. Medially, before a stressed vowel, we also have stops, e.g. attack [əˈtʰæk] or adult [əˈdʌlt]. However, medially before a stressless vowel, we get flaps, e.g. caddy [ˈkʰæɾi] or pity [ˈpʰɪɾi]. Kiparsky describes this with two rules. The first is a cyclic rule of laxing that makes a consonant lax when it follows a [−consonantal] element within a foot (denoted with U here). (60)

Laxing (cyclic) C → [+lax] / U[. . . [−cons] __ . . .]U

The second rule converts some of these lax consonants into voiced ones when they are initial in a syllable. Kiparsky uses strong–weak labeling for syllable structure as well. The tree structure in the environment of this rule encodes the syllable-initial restriction. (61)

Voicing (postcyclic)
[t, +lax] → [+voiced] / when initial in a syllable (the s/w-labeled syllable tree encodes this restriction)

A form like pity [ˈpʰɪɾi] would be syllabified and footed as follows.

(62) [tree: the foot (pi ti), with the /t/ as onset of the second, weak syllable]

First, the /t/ undergoes Laxing because it is medial in the foot and follows a [−consonantal] element. Then it can undergo Voicing because the /t/ is syllable-initial. The approach nicely accommodates examples like at ease [ˌæˈɾiz], where flapping applies across word boundaries, and examples like a tease [əˈtʰiz], where it does not. For at ease, we get a derivation as follows:

(63) [trees: (ˈæt)(ˈiz) → ˈæ (ˈtiz), with the /t/ resyllabified as an onset]


Each word is syllabified separately as a separate syllable and foot. Since the /t/ follows a [−consonantal] element in a foot, it undergoes Laxing. Postcyclically, the words are combined and syllable structure is readjusted so that the /t/ is resyllabified as an onset, reflecting a general preference for onsets over codas. At this stage, Voicing is applicable and we get a flap. For a tease, the /t/ begins in the second syllable. Hence at the cyclic stage of the derivation, there is no opportunity for Laxing to apply. Postcyclically, there is no pressure to resyllabify the onset /t/ as a coda. Even if the /t/ had undergone Laxing, Voicing would still be inapplicable.

(64) [tree: ə (ˈtiz), where the /t/ is foot-initial, so Laxing cannot apply]

The main virtue of this approach is that it does not require ambisyllabicity. Kahn (1980) had proposed that flapping occurred when a consonant occurred in two syllables simultaneously:

(65) [diagram: pi t i, with the /t/ linked to both syllables (ambisyllabic)]

This argument has not survived the test of time. Hammond (1982) showed that the ambisyllabicity approach of Kahn (1980) extended to phrasal instances of flapping, while the foot-based approach did not, citing examples like go tomorrow [ˌgoɾəˈmaro] vs. buy tomatoes [ˌbajtʰəˈmeɾoz]. Under appropriate phrasal conditions, flapping can apply to examples where the /t/ begins the second word. There is also substantial psycholinguistic evidence for ambisyllabicity (Treiman and Zukowski 1990; Kessler and Treiman 1997; Treiman and Danis 1988), so eliminating it from linguistic representations is not an obvious desideratum.
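For concreteness, Kiparsky’s two-rule logic can be schematized as follows. This Python fragment is our own toy rendering; the boolean encoding of the environments is an assumption, not Kiparsky’s formalism.

def laxing(follows_noncons_in_foot):
    """Cyclic: /t d/ become lax after a [-consonantal] element within a foot."""
    return follows_noncons_in_foot

def voicing(is_lax, syllable_initial):
    """Postcyclic: a lax coronal stop flaps when syllable-initial."""
    return is_lax and syllable_initial

def flaps(follows_noncons_in_foot, syllable_initial_after_resyllab):
    return voicing(laxing(follows_noncons_in_foot),
                   syllable_initial_after_resyllab)

# 'pity': /t/ is foot-medial after a vowel and syllable-initial -> flap
assert flaps(True, True)
# 'a tease': /t/ begins the foot of 'tease', so Laxing never applies
assert not flaps(False, True)
# 'at ease': /t/ is foot-internal after a vowel; postcyclic
# resyllabification makes it an onset -> flap
assert flaps(True, True)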

5.2 Poetic meter

Another initial argument for the foot came from poetic meter. Kiparsky (1977) argued that constraints on meter occasionally require reference to foot constituency. Subsequently, Hayes (1983) showed that this foot constituency was not necessary, arguing for a purely grid-based theory of meter. Let us look at this argument a little more closely. Iambic pentameter in English is characterized by lines with 10 syllables, where even-numbered syllables are “strong” and odd-numbered syllables are “weak.” Traditionally, such a line is seen as a sequence of five iambic feet. For example (Shakespeare, Sonnet 1):

(66)  w   s   w    s       w    s   w   s     w   s
      To  eat the world's  due, by  the grave and thee


The effect of the division of syllables into strong and weak is that poets can restrict the distribution of stresses in these positions. In English, poets restrict stressed syllables in weak positions; strong positions are unrestricted.11 Kiparsky argues that constraints on the distribution of stresses in weak position refer to constituency, including foot constituency. For example, he argues that Milton’s verse is subject to the following constraint, which he labels “Milton I.”

(67)  * w    (line structure)
        s    (prosodic structure)

where s is the strongest syllable of its phrase. Here, the tree above is the line structure and the tree below is the actual prosodic structure of the line. A constraint like this rules out a line like the following for Milton (though it is well formed for Shakespeare, Sonnet 7).

(68)  w   s    w     s      w     s  w   s    w   s
      Re- sem- bling strong youth in his mid- dle age
      [prosodic tree: youth is the s of its phrase and a right branch]

Here the word youth is strongest in its phrase, labeled s, and a right branch. On the upper side, this word occurs as the weak left branch of a foot. If the bracketing agrees, however, such a line is acceptable for Milton (Paradise Lost 4.556).

11 See Fabb and Halle (2008) for a recent comprehensive theory.


(69)  w  s  w    s     w     s  w  s      w   s
      On a  Sun- beam, swift as a  shoot- ing Star
      [prosodic tree: Sun is strong within Sunbeam, but the bracketing agrees with the line structure]

Finally, examples like the following show that the element must be the strongest element of the phrase, and that the bracketing restrictions do not suffice of themselves (Paradise Regained 2.424).

(70)  w   s   w   s   w   s      w  s   w     s
      And his Son He- rod plac'd on Ju- dah's Throne
      [prosodic tree: Son is stressed but not the strongest element of its phrase]

Hayes (1983) argues that references to foot constituency can be done away with if we define stress peaks over metrical grids and refer to higher-level prosodic constituency. His version of Milton I looks like this: (71)

Milton I (grid version)
*Peak / [. . . __ ]phrase

A peak is defined in terms of the grid as a grid column that is higher than at least one of its neighbors. Let us now look at how this constraint separates the cases we have considered so far. Hayes represents grids in terms of a single symbol “x,” rather than numbers. In addition, he represents the line template as a simple single-level grid, rather than with nodes labeled “s” and “w.” For the line in (68), we would have this template: (72)

.   x    .     x      .     x  .   x    .   x
Re- sem- bling strong youth in his mid- dle age


In subsequent diagrams, we leave the template out. The relevant part of the illegal line in (68) is: (73)

x . x . x x [Resembling strong youth] in his middle age

The syllable youth is a peak, defined with respect to the preceding syllable. It is in a weak position and phrase final. Hence it is illegal by the revised constraint. The two legal cases we considered above involve mismatches that are not phrase final: (74)

a. x . . x x   [On a Sunbeam], swift as a shooting star
b. x . . x x   [And his Son Herod] plac'd on Judah's Throne

Although Kiparsky (1977) was an important step forward in our understanding of meter, it would be fair to say that his argument for the foot from meter did not survive.
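Hayes’ grid-based constraint is nonetheless easy to state procedurally. The sketch below is our own simplification; the height/weak/phrase-final encoding is an assumption of ours, not Hayes’ notation.

def is_peak(heights, i):
    """A peak: a grid column higher than at least one of its neighbors."""
    neighbors = []
    if i > 0:
        neighbors.append(heights[i - 1])
    if i < len(heights) - 1:
        neighbors.append(heights[i + 1])
    return any(heights[i] > h for h in neighbors)

def violates_milton_i(heights, weak, phrase_final):
    """*Peak / [. . . __ ]phrase: no peak in a weak, phrase-final position."""
    return any(is_peak(heights, i) and weak[i] and phrase_final[i]
               for i in range(len(heights)))

# [Resembling strong youth]: 'youth' is a peak, in weak position, phrase-final.
heights      = [1, 2, 1, 2, 3]                  # Re sem bling strong youth
weak         = [True, False, True, False, True]
phrase_final = [False, False, False, False, True]
assert violates_milton_i(heights, weak, phrase_final)

Nothing here refers to feet: the peak is read off the grid, and only the phrase-level bracketing is consulted, which is exactly Hayes’ point.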

5.3 Reduplication

An additional class of arguments for feet comes from prosodic morphology (McCarthy and Prince 1986, 1993). The key claim here is that the size and location of reduplication, infixation, and related morphological operations refer directly to prosodic units, including the metrical foot. In the remainder of this section, we review four arguments of this sort for the foot: reduplication, locus of infixation, minimal word constraints, and language games. Yidiny (Dixon 1977a, 1977b; Hayes 1982; Marantz 1982) offers an example where a foot is reduplicated to mark the plural.12 (75)

Singular    Plural
mulari      mulamulari        ‘initiated man’
gindalba    gindalgindalba    ‘kind of lizard’

The first two syllables of the word are reduplicated; there is extensive evidence that stress is assigned with binary feet in Yidiny and that the first two syllables of words like these would be footed together.13

12 For more general discussion of reduplication, see chapter 100: reduplication.
13 The stress system of Yidiny is complex; see Hayes (1982) for further discussion.
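The foot-based generalization can be illustrated with a small sketch. The syllabifier below is a crude assumption adequate only for the two forms cited; it is not a general algorithm, and none of this is McCarthy and Prince’s formalism.

import re

def syllabify(word, vowels="aiu"):
    """Crude syllabifier: onset consonants + vowel(s) + any coda consonants
    that are not the onset of the next syllable."""
    v, c = f"[{vowels}]", f"[^{vowels}]"
    return re.findall(f"{c}*{v}+(?:{c}+(?!{v}))?", word)

def plural(word):
    """Prefix a copy of the initial binary foot (the first two syllables)."""
    return "".join(syllabify(word)[:2]) + word

assert plural("mulari") == "mulamulari"
assert plural("gindalba") == "gindalgindalba"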


5.4 Locus of infixation

The position of infixation can also be sensitive to feet. One of the most celebrated examples of this is expletive infixation in English (McCarthy 1982; Hammond 1997, 1999). Feet in English are clearly binary and left-headed. There are a number of complications involving quantity-sensitivity and various sorts of ternary patterns, but the basic left-headed binary nature of stress feet is clear (Liberman and Prince 1977; Hayes 1981; Halle and Vergnaud 1987).14 The English expletive fuckin’ can be inserted in the middle of a word, e.g. in fantastic, producing fan-fuckin’-tastic, to indicate emphasis. This is relevant in the present context because the expletive must occur between two feet and cannot interrupt a foot. Thus *fantas-fuckin’-tic is illegal. It then follows that if a word has only one stress – and therefore only one foot – it cannot undergo expletive infixation. In the following examples, stresses are marked in the usual way and foot boundaries with square brackets. (76)

Word          Legal   Illegal
a[n’nounce]   –       a-fuckin’-nnounce
a[’genda]     –       a-fuckin’-genda, agen-fuckin’-da
A[’meri]ca    –       A-fuckin’-merica, Ame-fuckin’-rica, Ameri-fuckin’-ca

With a two-syllable word with two stresses, expletive infixation is possible between the syllables:

(77) Word            Legal              Illegal
     [‘mun][’dane]   mun-fuckin’-dane   –

With three-syllable words with two stresses, the position of the infix is precisely between the feet.

(78) Word              Legal                Illegal
     [‘fan][’tastic]   fan-fuckin’-tastic   fantas-fuckin’-tic
     [‘Tenne][s’see]   Tenne-fuckin’-ssee   Te-fuckin’-nnessee
     a[‘long][’side]   along-fuckin’-side   a-fuckin’-longside

There are no monomorphemic examples of the third sort – like alongside – so these are confounded with morphological effects. Longer examples behave as expected. Interestingly, if there are more than two feet, many speakers find multiple infixation sites acceptable.

14 Though see Burzio (1994) for another view.

(79) Word                     Legal                                            Illegal
     [‘Minne][’sota]          Minne-fuckin’-sota                               Mi-fuckin’-nnesota, Minneso-fuckin’-ta
     [‘Tim][‘buk][’tu]        Tim-fuckin’-buktu, Timbuk-fuckin’-tu             –
     [‘Hali][‘car][’nassus]   Hali-fuckin’-carnassus, Halicar-fuckin’-nassus   Ha-fuckin’-licarnassus, Halicarna-fuckin’-ssus
     [‘Apa][‘lachi][’cola]    Apa-fuckin’-lachicola, Apalachi-fuckin’-cola     A-fuckin’-palachicola, Apala-fuckin’-chicola, Apalachico-fuckin’-la

Strikingly, there are multiple infixation possibilities just in case we find two medial stressless syllables in a row. This follows directly from the claim that feet in English are binary. (80)

Word                  Legal                                            Illegal
[‘Winne]pe[’saukee]   Winne-fuckin’-pesaukee, Winnepe-fuckin’-saukee   Wi-fuckin’-nnepesaukee, Winnepesau-fuckin’-kee
[‘Kala]ma[’zoo]       Kala-fuckin’-mazoo, Kalama-fuckin’-zoo           Ka-fuckin’-lamazoo

The second stressless syllable is affiliated with neither of the adjacent feet, allowing the infix to be positioned to either side of it while still satisfying the requirement that there be feet on each side and that the primary stress follow. There are additional complications to the system (Hammond 1997, 1999). First, the main stress cannot precede the infix. Thus ‘Kalama-fuckin’-’zoo is decidedly better than ’catama-fuckin’-‘ran. In addition, if the syllable preceding the infix is stressed, it must be at least bimoraic. Hence, mun-fuckin’-dane [ˌmʌnˌfʌkənˈden] is better than ra-fuckin’-ccoon [ˌræˌfʌkənˈkun]. Those complications notwithstanding, the locus of infixation provides additional evidence for foot constituency.
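The descriptive generalization (a foot on each side of the infix, and the main stress following it) can be rendered as a small search procedure. The encoding below, with feet as tuples and unfooted syllables as bare strings, is our own assumption.

def infix_sites(parse, main_stress_index):
    """parse: list of items, each either a tuple of syllables (a foot) or a
    bare string (an unfooted syllable). Returns the legal juncture indices."""
    sites = []
    for j in range(1, len(parse)):
        left, right = parse[:j], parse[j:]
        if (any(isinstance(p, tuple) for p in left) and
                any(isinstance(p, tuple) for p in right) and
                main_stress_index >= j):
            sites.append(j)
    return sites

# [‘Winne] pe [’saukee]: the unfooted 'pe' permits two sites.
winnepesaukee = [("Win", "ne"), "pe", ("sau", "kee")]
print(infix_sites(winnepesaukee, main_stress_index=2))   # [1, 2]

# A[’meri]ca: only one foot, so no legal site.
america = ["A", ("me", "ri"), "ca"]
print(infix_sites(america, main_stress_index=1))         # []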

5.5 Minimum word size

Lardil (Wilkinson 1988) provides a nice example of a minimum word constraint based on the foot: words in Lardil must have at least two vowels. If they do not, then they are augmented to meet this target with an epenthetic [a]. This provides for alternations in the shape of the stem depending on whether it is suffixed or not; an unsuffixed sub-minimal stem is augmented. Verbs with at least two syllables are inflected as follows:

(81)               ‘tree’     ‘dugong’     ‘beach’   ‘inside’
     underlying    /}uIal/    /kentapal/   /kela/    /wiÍe/
     uninflected   }uIal      kentapal     kela      wiÍe
     non-future    }uIalin    kentapalin   kelan     wiÍen
     future        }uIaluÈ    kentapaluÈ   kelaÈ     wiÍeÈ

Monosyllabic consonant-final roots with long vowels behave in similar fashion.

(82)               ‘ti-tree sp.’   ‘spear gen.’
     underlying    /pee7/          /maaK/
     uninflected   pee7            maaK
     non-future    pee7in          maaKin
     future        pee7uÈ          maaKkuÈ

However, nouns with only a single vowel get augmented when uninflected.

(83)               ‘thigh’   ‘shade’
     underlying    /te7/     /wik/
     uninflected   te7a      wika
     non-future    te7in     wikin
     future        te7uÈ     wikuÈ

The two-vowel target can be seen as foot-based if we treat Lardil as Hayes (1980) treated Southern Paiute: each vowel element is a potential terminal element for footing. Alternatively, if we view vowels as the sole bearers of moras in Lardil, we can view this as a bimoraic target, which was later proposed to be a foot.15
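The augmentation pattern itself is trivial to state. The sketch below is our own illustration; treating orthographic vowel letters as moras, and the stem "peet" (a hypothetical long-vowel stem standing in for the forms in (82)), are simplifying assumptions.

LARDIL_VOWELS = set("aeiou")   # assumption: orthographic vowels = moras

def augment_if_subminimal(stem):
    """Uninflected stems with fewer than two vowels get epenthetic -a."""
    moras = sum(ch in LARDIL_VOWELS for ch in stem)
    return stem + "a" if moras < 2 else stem

assert augment_if_subminimal("kentapal") == "kentapal"   # already >= 2 vowels
assert augment_if_subminimal("wik") == "wika"            # 'shade' is augmented
assert augment_if_subminimal("peet") == "peet"           # long vowel = 2 moras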

5.6 Language games

Hammond (1990) discusses a language game in English that provides further evidence for the foot. The game is played by substituting names into the following rhyme. (84)

Jack, Jack bo back     [ʤæk ʤæk bo bæk]      —, —, bo b—
banana fana fo fack    [bənænə fænə fo fæk]   banana fana fo f—
me my mo mack          [mi maj mo mæk]        me my mo m—
Ja–ack                 [ʤæʤæk]                —

The onset of the name undergoes various substitutions not relevant here. The relevant point here is that the name must fit a particular prosodic template: from one to three syllables, where the first syllable is stressed and any subsequent syllables are stressless. This corresponds to a single left-headed binary foot plus an optional extrametrical syllable. Marking feet with square brackets and the extrametrical syllable with angled brackets, we get a clear difference between names that are acceptable and those that are not. (85)

Permissible   Impermissible
[’Jack]       A[n’nette]
[’To]         [‘Isa][’do]
[’Jenni]      [’Mira][‘beil]
[’Gwendo]     O[’livi]

15 Garrett (1999) argues, though, that word minimality is not connected to foot structure.


The game thus provides corroborating evidence for the role of foot constituency in phonology.
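The prosodic template can be checked mechanically. The following sketch uses our own (syllable, stressed) encoding; it is an illustration of the generalization, not Hammond’s formalism.

def fits_name_game(name):
    """A name fits iff it is one to three syllables, the first stressed
    and the rest stressless: a binary foot plus an optional
    extrametrical syllable."""
    sylls_ok = 1 <= len(name) <= 3
    first_stressed = bool(name) and name[0][1]
    rest_stressless = all(not stressed for _, stressed in name[1:])
    return bool(sylls_ok and first_stressed and rest_stressless)

assert fits_name_game([("Jack", True)])
assert fits_name_game([("Jen", True), ("ni", False)])
assert fits_name_game([("Gwen", True), ("do", False), ("lyn", False)])
assert not fits_name_game([("An", False), ("nette", True)])
assert not fits_name_game([("I", True), ("sa", False), ("do", True), ("ra", False)])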

6 The typology of feet

Hayes (1987) proposes a radically different foot inventory. Rather than a symmetric parametric system, Hayes develops a non-parametric asymmetric system with only three basic feet: the syllabic trochee, the moraic trochee, and the iamb. This development and the subsequent responses are sufficiently important that they are treated in a chapter of their own: chapter 44: the iambic–trochaic law.

7 Some subsequent proposals

Kager (1993) offers a reformulation of the asymmetric typology that maintains a symmetric foot system, and derives surface quantitative asymmetries from syllable structure. Consider, for example, the asymmetry between an iambic foot and a moraic trochee. The iambic foot can contain a bimoraic right element, but the moraic trochee cannot. Kager argues that this follows from two independently required observations. First, when a heavy/bimoraic syllable bears stress, it is the first mora that does so. Second, languages avoid lapses, two stressless elements in a row. We can see then that lengthening the right node of an iambic foot is well formed, but lengthening the left node of a trochaic foot is not, since the latter results in two stressless elements – moras – in a row. (Syllable boundaries are marked with square brackets here.) (86)

Iamb                      Trochee
( . x)      ( . x .)      (x .)       (x . .)
[µ][µ]  →   [µ][µ µ]      [µ][µ]  →   [µ µ][µ]

An interesting advantage of invoking lapse like this is that the same principle can be used to rule out ternary feet. If we assume that the head of a foot must be peripheral, then either sort of ternary foot would result in a lapse.

(87)  Left-headed    Right-headed
      (x . .)        (. . x)
       σ σ σ          σ σ σ
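Kager’s mora-level lapse logic can be verified with a small sketch; the mora-count encoding, and the assumption that only the first mora of the head syllable is stressed, are ours.

def mora_lapse(foot, head_index):
    """True if the foot contains two adjacent stressless moras."""
    stressed = []
    for i, moras in enumerate(foot):
        for m in range(moras):
            stressed.append(i == head_index and m == 0)
    return any(not a and not b for a, b in zip(stressed, stressed[1:]))

assert not mora_lapse([1, 2], head_index=1)   # iamb ( . x .): no lapse
assert mora_lapse([2, 1], head_index=0)       # trochee (x . .): lapse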

Gordon (2002) offers an OT analysis of quantity-insensitive stress. This includes systems with a single peripheral stress and systems with binary and ternary patterns of iteration. The paper is important for several reasons. First, it brought quantity-insensitive systems back into the theoretical discourse that had focused on quantitative asymmetries for several years. Second, it offered a rigorous application of standard OT constraints to a broad cross-section of languages. The logic of the approach is as follows. First, there are several Align constraints that put stresses on the edges of words. In addition, there is the Non-finality constraint, the OT analog of extrametricality.


Iterative footing is accomplished with various versions of Clash and Lapse constraints. Let us look at Cayuvava (Key 1961, 1967) to see how the system works. (88)

’eJe                    ‘tail’
’œakahe                 ‘stomach’
ki’hibere               ‘I ran’
ari’uuŒa                ‘he came already’
‘–ihira’riama           ‘I must do’
ma‘rahaha’eiki          ‘their blankets’
ikit‘apare’repeha       ‘the water is clean’
‘Œaadi‘roboßu’ruruŒe    ‘99’ (1st digit)
me‘daruŒe‘Œeiro’hiiJe   ‘15 each’ (2nd digit)
Œaa‘dairo‘boiro’hiiJe   ‘99’ (2nd digit)

Here are the key constraints Gordon assumes for this case, along with critical rankings: (89)

*ExtendedLapse, *Clash, Non-finality, Align(X2,R) >> Align(X1,L) >> Align(X1,R) >> ...

The *ExtendedLapse constraint rules out strings of more than two consecutive stressless syllables; this does the work of ternary iteration. The *Clash constraint prevents stresses from being adjacent. Non-finality is final extrametricality. Align(X2,R) forces main stress on the right. Finally, the relative ranking of Align(X1,L) and Align(X1,R) forces iteration from the right. Let us now see how this works in the case of [ma‘rahaha’eiki]. (90)

/marahahaeiki/          *EL   *Cl   NF   Al(X2,R)   Al(X1,L)   Al(X1,R)
☞ a. ma‘rahaha’eiki                                     5          7
   b. ‘marahaha’eiki    *!                              4          8
   c. ma’rahaha‘eiki                         *!         5          7
   d. marahaha’eiki     *!*                             4          2
   e. ‘maraha’haeiki    *!                              3          9

Stresses are positioned at the right distance apart because of the interaction of *ExtendedLapse and *Clash. The former also forces stress to iterate in the first place. In the context of foot theory, the key observation is that the system does not require feet to describe QI stress patterns. As with Prince (1983), however, such an account leaves open how apparent patterns of foot-related prosodic morphology are accommodated.

Finally, another interesting proposal is developed by Hyde (2002). The core of the proposal is that there is a metrical grid that is independent of footing. Feet are all binary and have heads, but foot heads need not bear stress. A further point of interest is that feet may overlap. These innovations allow Hyde to maintain exhaustive footing and develop a spare model of directional stress effects.
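The violation counts in (90) can be reproduced mechanically. The sketch below uses our own simplified constraint definitions (a three-syllable window for *ExtendedLapse, gradient Align counts over 0-indexed syllables); it approximates, rather than implements, Gordon’s system.

def violations(stresses, main, n):
    s = set(stresses)
    ext_lapse = sum(1 for i in range(n - 2)
                    if not any(j in s for j in (i, i + 1, i + 2)))
    clash = sum(1 for i in range(n - 1) if i in s and i + 1 in s)
    nonfin = 1 if (n - 1) in s else 0
    al_x2_r = n - 1 - main              # main stress to the right edge
    al_x1_l = sum(s)                    # each stress to the left edge
    al_x1_r = sum(n - 1 - i for i in s)
    return (ext_lapse, clash, nonfin, al_x2_r, al_x1_l, al_x1_r)

n = 7  # ma.ra.ha.ha.e.i.ki, syllables 0-6
for form, (stresses, main) in {
        "winner (90a)": ([1, 4], 4),
        "(90b)":        ([0, 4], 4),
        "(90d)":        ([4], 4)}.items():
    print(form, violations(stresses, main, n))
# (90b) incurs one extended lapse, (90d) two; the winner incurs none and
# has the Align(X1,L) count 5 and Align(X1,R) count 7 shown in the tableau.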

8 Summary

There are a number of other important and interesting proposals regarding foot theory, but we have only been able to touch on some very salient ones here. What does the future hold? There are several threads that seem like promising avenues of development for feet.

First, Optimality Theory has had a huge effect on all of phonological theory, but the framework seems to be reaching its limits. Several alternative versions of the framework have developed – for example, Stochastic OT (Boersma 1997) and Harmonic OT (Smolensky and Legendre 2006) – but it is unclear where the general framework is going at this time. What OT has left us with is a clear refocusing of attention from structural elements of phonological representations to constraints on those representations. On such a view, feet per se no longer exist. Moreover, there is no necessary one-to-one mapping between constraints and the foot inventory. If this thread continues, we would reasonably expect less attention to large-scale foot effects and more attention to the constraints from which those effects derive. For example, we might expect more attention to constraints like *Clash or the Weight-to-Stress Principle.

Given the shift in attention from strictly ranked constraints to other ranking algorithms, including probabilistic ranking, we would expect developments along the lines of probabilistic feet. Perhaps some metrical phenomena are best treated in terms of footing which is only probabilistic in nature. A word might not have a fixed structure, but one that is only partially fixed: the locus of footing or the headedness of feet might be indeterminate. One might use this to account for variation in stress or for exceptions of various sorts, e.g. ternarity.

Another direction of current research is increased attention to the extragrammatical factors that impinge on the grammar, e.g. phonetics, perception, production, lexical access, and acquisition. It seems quite likely that both our understanding of the basic facts of footing and the theoretical frameworks we use to describe footing phenomena will change as these efforts expand our empirical focus. The role of quantity has already been discussed from this perspective, for example, by Hayes (1987) and Kager (1993). These are extremely sketchy predictions, however.

ACKNOWLEDGMENTS Thanks to Joyce McDonough, Diane Ohala, Adam Ussishkin, several anonymous reviewers, and the editors for useful discussion. All errors are my own.

REFERENCES

Boersma, Paul. 1997. How we learn variation, optionality, and probability. Proceedings of the Institute of Phonetic Sciences of the University of Amsterdam 21. 43–58 (ROA-221).
Boxwell, Helen & Maurice Boxwell. 1966. Weri phonemes. In S. A. Wurm (ed.) Papers in New Guinea linguistics No. 5, 77–93. (Pacific Linguistics A37.) Canberra: Australian National University.
Burzio, Luigi. 1994. Principles of English stress. Cambridge: Cambridge University Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Cole, Jennifer & John Coleman. 1992. No need for cyclicity in generative phonology. Papers from the Annual Regional Meeting, Chicago Linguistic Society 28. 36–50.
Dixon, R. M. W. 1977a. A grammar of YidiJ. Cambridge: Cambridge University Press.
Dixon, R. M. W. 1977b. Some phonological rules in YidiJ. Linguistic Inquiry 8. 1–34.
Echeverría, Max S. & Heles Contreras. 1965. Araucanian phonemics. International Journal of American Linguistics 31. 132–135.
Fabb, Nigel & Morris Halle. 2008. Meter in poetry: A new theory. Cambridge: Cambridge University Press.
Garrett, Edward. 1999. Minimal words aren’t minimal feet. UCLA Working Papers in Linguistics 1, Papers in Phonology 2. 68–105 (ROA-1031).
Gordon, Matthew. 2002. A factorial typology of quantity-insensitive stress. Natural Language and Linguistic Theory 20. 491–552.
Halle, Morris & Michael Kenstowicz. 1991. The Free Element Condition and cyclic versus noncyclic stress. Linguistic Inquiry 22. 457–501.
Halle, Morris & Jean-Roger Vergnaud. 1978. Metrical structures in phonology. Unpublished ms., MIT.
Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA: MIT Press.
Hammond, Michael. 1982. Foot-domain rules and metrical locality. Proceedings of the West Coast Conference on Formal Linguistics 1. 207–218.
Hammond, Michael. 1986. The obligatory-branching parameter in metrical theory. Natural Language and Linguistic Theory 4. 185–228.
Hammond, Michael. 1988. Constraining metrical theory: A modular theory of rhythm and destressing. New York: Garland.
Hammond, Michael. 1989. Cyclic secondary stresses in English. Proceedings of the West Coast Conference on Formal Linguistics 8. 139–153.
Hammond, Michael. 1990. The Name Game and onset simplification. Phonology 7. 159–162.
Hammond, Michael. 1997. Vowel quantity and syllabification in English. Language 73. 1–17.
Hammond, Michael. 1999. The phonology of English: A prosodic optimality-theoretic approach. Oxford: Oxford University Press.
Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT. Published 1985, New York: Garland.
Hayes, Bruce. 1982. Metrical structure as the organizing principle of YidiJ phonology. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part 1, 97–110. Dordrecht: Foris.
Hayes, Bruce. 1983. A grid-based theory of English meter. Linguistic Inquiry 14. 357–394.
Hayes, Bruce. 1984. The phonology of rhythm in English. Linguistic Inquiry 15. 33–74.
Hayes, Bruce. 1987. A revised parametric metrical theory. Papers from the Annual Meeting of the North East Linguistic Society 17. 274–289.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hyde, Brett. 2002. A restrictive theory of metrical stress. Phonology 19. 313–359.
Hyman, Larry M. 1977. On the nature of linguistic stress. In Larry M. Hyman (ed.) Studies in stress and accent, 37–82. Los Angeles: Department of Linguistics, University of Southern California.
Idsardi, William J. 1992. The computation of prosody. Ph.D. dissertation, MIT.
Kager, René. 1993. Alternatives to the iambic-trochaic law. Natural Language and Linguistic Theory 11. 381–432.
Kahn, Daniel. 1980. Syllable-based generalizations in English phonology. New York: Garland.
Kessler, Brett & Rebecca Treiman. 1997. Syllable structure and the distribution of phonemes in English syllables. Journal of Memory and Language 37. 295–311.
Key, Harold H. 1961. Phonotactics of Cayuvava. International Journal of American Linguistics 27. 143–150.
Key, Harold H. 1967. Morphology of Cayuvava. The Hague: Mouton.
Kiparsky, Paul. 1973. “Elsewhere” in phonology. In Stephen R. Anderson & Paul Kiparsky (eds.) A Festschrift for Morris Halle, 93–106. New York: Holt, Rinehart & Winston.
Kiparsky, Paul. 1977. The rhythmic structure of English verse. Linguistic Inquiry 8. 189–247.
Kiparsky, Paul. 1979. Metrical structure assignment is cyclic. Linguistic Inquiry 10. 421–441.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8. 249–336.
Marantz, Alec. 1982. Re reduplication. Linguistic Inquiry 13. 435–482.
McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT.
McCarthy, John J. 1982. Prosodic structure and expletive infixation. Language 58. 574–590.
McCarthy, John J. & Alan Prince. 1986. Prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Brandeis University.
McCarthy, John J. & Alan Prince. 1993. Prosodic morphology I: Constraint interaction and satisfaction. Unpublished ms., University of Massachusetts, Amherst & Rutgers University.
Osborn, Henry A. 1966. Warao I: Phonology and morphophonemics. International Journal of American Linguistics 32. 108–123.
Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19–100.
Rischel, Jørgen. 1972. Compound stress in Danish without a cycle. Annual Report of the Institute of Phonetics 6. 211–228.
Sapir, Edward. 1930. Southern Paiute, a Shoshonean language. Proceedings of the American Academy of Arts and Sciences 65. 1–296.
Sebeok, Thomas A. & Frances J. Ingemann. 1961. An Eastern Cheremis manual: Phonology, grammar, texts and glossary. Bloomington: Indiana University.
Smolensky, Paul & Géraldine Legendre (eds.) 2006. The harmonic mind: From neural computation to optimality-theoretic grammar. Cambridge, MA: MIT Press.
Street, John C. 1963. Khalkha structure. Bloomington: Indiana University & The Hague: Mouton.
Treiman, Rebecca & Catalina Danis. 1988. Syllabification of intervocalic consonants. Journal of Memory and Language 27. 87–104.
Treiman, Rebecca & Andrea Zukowski. 1990. Toward an understanding of English syllabification. Journal of Memory and Language 29. 66–85.
Tryon, Darrell T. 1970. An introduction to Maranungku (Northern Australia). (Pacific Linguistics B15.) Canberra: Australian National University.
Voegelin, Charles F. 1935. Tübatulabal grammar. University of California Publications in American Archaeology and Ethnology 34. 55–189.
Walker, Rachel. 1997. Mongolian stress, licensing and factorial typology. Unpublished ms., University of California, Santa Cruz (ROA-171).
Wilkinson, Karina. 1988. Prosodic structure and Lardil phonology. Linguistic Inquiry 19. 325–334.

41 The Representation of Word Stress
Ben Hermans

1 Introduction

Since the publication of Hayes (1985), the asymmetries between iambs and trochees have been a central theme in the literature on stress. Two types of asymmetries can be distinguished. One has to do with quantity. It has frequently been claimed that iambs do not allow a heavy syllable in the weak position, and require a heavy syllable in the strong position. Kager (1993) deals with asymmetries of this type on the basis of a theory whose central hypothesis is that feet are built over moras, rather than syllables. The recent literature, however, has shown that these “quantitative asymmetries” are not supported empirically. It is simply not true, for instance, that iambs invariably constrain the occurrence of heavy and light syllables in the way just described. A particularly convincing argument to this effect is given in Altshuler (2009), with respect to Osage. (See also chapter 44: the iambic–trochaic law and chapter 57: quantity-sensitivity.) There is a second class of asymmetries, however, which remains valid. These are “parsing asymmetries,” which have to do with the direction of foot construction in a word. Two authors have argued that such parsing asymmetries can only be explained if the representation of word stress is fundamentally changed. Interestingly, however, they disagree as to how it should be changed. While Gordon (2002) proposes to simplify the representation of word stress by eliminating foot structure completely, Hyde (2001, 2002, 2008) recognizes not only foot structure but, in addition, a new type of structure, the “overlapping foot,” thus complicating the representation of stress in order to account for the asymmetries. In this chapter I consider the ongoing debate about the representation of word stress from the perspective of parsing asymmetries. In §2, after presenting some of the most important asymmetries, I briefly sketch Gordon’s account, which is as simple as it is radical. In his view, asymmetries can be accounted for if foot structure is abolished. Word stress representations contain only gridmarks, as in Prince (1983) and Selkirk (1984). In the spirit of Gordon, then, we might say that feet are superfluous if we want to account for the distribution of stress in the words of the world’s languages. This raises the question of whether feet are necessary at all. Interestingly, if we broaden our scope to include other phenomena as well, the evidence for foot


structure becomes overwhelming. In §3 I present some cases from the recent literature which support the existence of feet, thus suggesting that representations with only gridmarks are too impoverished. If foot structure does exist, then how do we account for the distribution of stress in the words of the world’s languages? Is it still reasonable to formulate the relevant constraints in terms of the grid only? Or must they be stated in terms of foot construction, with gridmarks playing only a marginal role? In §4 I present an overview of Hyde’s work, in which the claim is made that the distribution of stress can best be explained in terms of constraints regulating foot construction. Gridmarks only read off some of the basic properties of a word’s foot structure. This is a continuation of the tradition initiated by Liberman and Prince (1977). Other authoritative studies, such as Hayes (1984, 1995), have argued for the same idea, which also led to development of the “bracketed grid” notation (Halle and Vergnaud 1987). There are some important differences between the various “tree-cum-grid” theories, however. In §5 I give a brief overview of one issue where theories seem to differ. This concerns the status of headedness in foot structure. Is a foot inherently headed, even if it is not accompanied by a gridmark? Or is it the case that a foot is inherently headless unless there is a gridmark accompanying it, marking one syllable as the head? In the first approach, some or all of the properties of foot structure can simply be read off the grid. In this view foot structure is primary and the grid secondary. In the second approach, in which the grid is imposed on foot structure, the grid is primary and foot structure secondary.

2 Explaining parsing asymmetries without foot structure

Hyde (2001, 2002) shows that some non-existing systems can easily be derived with generally accepted foot inventories (see chapter 40: the foot). Let us consider three of these cases. The first one is the Australian language Garawa, which can be compared with what Hyde (2002: 329) calls “Anti-Garawa,” an unattested system. (1)

Garawa
 x    x    x
(1 2)(3 4)(5 6)
 x       x    x
(1 2) 3 (4 5)(6 7)

Anti-Garawa
   x    x    x
(1 2)(3 4)(5 6)
   x    x       x
(1 2)(3 4) 5 (6 7)

In these representations foot structure is indicated by round brackets and headedness is represented by gridmarks. Garawa can be derived with the following rules. One trochee is built at the left edge. Then trochees are built from right to left. Furthermore, degenerate feet are not allowed. In odd-parity words this creates a lapse following the stressed initial syllable. If we change just two ingredients of this system we derive a non-existing pattern. One iamb is built at the right edge, and then a series of iambs from left to right. We then derive a system in which odd-parity words contain a lapse before the final (stressed) syllable.
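Directional foot construction of this kind is straightforward to state procedurally. The following sketch is our own illustration (0-indexed syllables, feet as index pairs), not Hyde’s or Gordon’s formalism.

def parse_feet(n, from_left=True, fixed_at_other_edge=False):
    """Build binary feet directionally over n syllables; no degenerate
    feet. Optionally build one fixed foot at the opposite edge first,
    as in Garawa or Piro."""
    feet = []
    lo, hi = 0, n
    if fixed_at_other_edge:
        if from_left:            # fixed foot at the right edge (Piro-style)
            feet.append((n - 2, n)); hi = n - 2
        else:                    # fixed foot at the left edge (Garawa-style)
            feet.append((0, 2)); lo = 2
    if from_left:
        i = lo
        while i + 2 <= hi:
            feet.append((i, i + 2)); i += 2
    else:
        i = hi
        while i - 2 >= lo:
            feet.append((i - 2, i)); i -= 2
    return sorted(feet)

# Garawa: one trochee at the left edge, then trochees right to left.
print(parse_feet(7, from_left=False, fixed_at_other_edge=True))
# [(0, 2), (3, 5), (5, 7)]: the third syllable (index 2) is left unparsed

Reading heads off these feet (initial syllable for trochees, final for iambs) yields exactly the Garawa and Anti-Garawa grids in (1).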


The second example is a pair consisting of the Australian language Pintupi, which can easily be derived by the rules of the theory, and Anti-Pintupi, which can be derived just as easily but does not exist. (2)

Pintupi
 x    x    x
(1 2)(3 4)(5 6)
 x    x    x
(1 2)(3 4)(5 6) 7

Anti-Pintupi
   x    x    x
(1 2)(3 4)(5 6)
     x    x    x
1 (2 3)(4 5)(6 7)

In Pintupi, trochees are built from left to right. Degenerate feet are not allowed. In odd-parity words this creates a lapse at the right edge of a word. Anti-Pintupi can be derived just as easily, by constructing iambs from right to left, so that the lapse is located at the left edge. Finally, consider Piro, spoken in Brazil and Peru, and Anti-Piro, an impossible system. (3)

Piro
 x    x    x
(1 2)(3 4)(5 6)
 x    x       x
(1 2)(3 4) 5 (6 7)

Anti-Piro
   x    x    x
(1 2)(3 4)(5 6)
   x      x    x
(1 2) 3 (4 5)(6 7)

In Piro, one trochee is built at the right edge. Remaining trochees are built from left to right; degenerate feet are not allowed. In odd-parity words this creates a lapse before the stressed penultimate syllable. Anti-Piro can be derived by changing just two ingredients. One iamb is built at the left edge; remaining iambs are built from right to left. In odd-parity words this creates a lapse after the peninitial syllable. The three non-existent cases have in common that iambic structure refers to the right edge. In Anti-Garawa one iamb is constructed at the right edge, while in Anti-Pintupi and Anti-Piro foot construction starts at the right edge. One might suppose, then, that iambs cannot refer to a word’s right edge. This, however, is not true; languages where iambs are constructed from right to left do exist. One example is Suruwaha, spoken in Brazil (Hyde 2002: 320), which is the mirror image of Araucanian, spoken in Chile and Argentina (Hyde 2002: 320). In this language iambs are constructed from left to right. These systems are illustrated in (4): (4)

Araucanian
   x    x    x
(1 2)(3 4)(5 6)
   x    x    x
(1 2)(3 4)(5 6) 7

Suruwaha
   x    x    x
(1 2)(3 4)(5 6)
 x    x    x    x
(1)(2 3)(4 5)(6 7)

In Araucanian the final syllable is unparsed in odd-parity words. In Suruwaha the first syllable of odd-parity words is assigned a degenerate foot.


It seems as if these two languages employ two different strategies to achieve the same goal: a rhythmic pattern in which every other syllable is stressed. To achieve this, Araucanian leaves the final syllable unparsed; if this were not the case, a clash would be created at the right edge, disturbing the rhythmic alternation. In Suruwaha, on the other hand, a degenerate foot is constructed at the left edge. Otherwise, there would be a lapse at the left edge, disturbing the binary rhythmic pattern. If it is true that a binary rhythmic pattern is the primary goal, we might just as well express that target directly, without the mediation of feet. A comparison of Araucanian and Suruwaha, then, suggests that feet might be superfluous; all that seems to matter is rhythmic alternation. All the non-existing systems mentioned above share the same property; in an odd-parity word there is a lapse, entailing that the rhythmic alternation is disturbed. This again suggests that binary rhythm is the only relevant factor, which leads Gordon to adopt the strategy of expressing this directly, without foot structure. Essentially, then, the idea is that the “anti-systems” in (1)–(3) cannot exist because they do not realize the ideal of binary rhythm. The ideal of rhythmic alternation is expressed on the grid, using the following two constraints. (5)

a. *Lapse (Gordon 2002: 502)
   A string of more than one consecutive stressless syllable may not occur.
b. *Clash (Gordon 2002: 506)
   A stress domain may not contain adjacent stressed syllables.
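Both constraints can be checked directly on a grid. The sketch below uses a 0/1 stress list as its encoding, an assumption of ours; Gordon states the constraints in prose.

def lapse_violations(grid):
    """(5a): count pairs of consecutive stressless syllables."""
    return sum(1 for a, b in zip(grid, grid[1:]) if a == 0 and b == 0)

def clash_violations(grid):
    """(5b): count pairs of adjacent stressed syllables."""
    return sum(1 for a, b in zip(grid, grid[1:]) if a == 1 and b == 1)

assert lapse_violations([0, 1, 0, 1, 0, 1]) == 0     # perfect binary rhythm
assert lapse_violations([1, 0, 0, 1, 0, 1, 0]) == 1  # one lapse
assert clash_violations([0, 1, 1, 0, 1, 0]) == 1     # one clash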

Having established that the non-existing systems can be explained on the assumption that they lack the ideal rhythm in odd-parity words, the question arises of how we can account for systems which do exist, even though alternating rhythm is disturbed in odd-parity words. Let us turn first to Pintupi. To account for this language, Gordon makes use of Non-finality, a device that can be motivated independently (Prince and Smolensky 1993; see chapter 43: extrametricality and non-finality for a definition and discussion). In this view, Pintupi has alternating rhythm, but the final syllable is excluded from stress assignment. This becomes clear when we compare Pintupi with another Australian language, Maranungku. (6)

Pintupi
x   x   x
1 2 3 4 5 {6}
x   x   x
1 2 3 4 5 6 {7}

Maranungku
x   x   x
1 2 3 4 5 6
x   x   x   x
1 2 3 4 5 6 7

I have indicated the Non-finality requirement imposed on the final syllable with curly brackets (braces). The two representations in (6) demonstrate that there is only one difference between Pintupi and Maranungku. In the former, but not the latter, the final syllable cannot be stressed. This explains why Pintupi has a lapse at the end of the word. We can also see why Anti-Pintupi does not exist. There is no equivalent of Non-finality operating at the left edge; there is no Non-initiality. Therefore,


an initial lapse cannot be created. To see this, consider again Anti-Pintupi and Suruwaha. (7)

Anti-Pintupi
  x   x   x
1 2 3 4 5 6
    x   x   x
1 2 3 4 5 6 7

Suruwaha
  x   x   x
1 2 3 4 5 6
x   x   x   x
1 2 3 4 5 6 7

Anti-Pintupi has an initial lapse in odd-parity words. This lapse is automatically eliminated in favor of a binary rhythmic alternation. A gridmark is therefore inserted on the first syllable. This creates a system that does exist: Suruwaha. Before I go on to discuss the other asymmetries, we should note that, although Araucanian and Suruwaha both exhibit binary rhythm, there is still a difference between them. Recall from (4) that they can easily be distinguished by foot structure; Araucanian builds iambs from left to right, and Suruwaha from right to left. From the point of view of the grid-only approach, the difference resides in the number of gridmarks in odd-parity words; there are fewer marks in Araucanian than in Suruwaha. This can be explained with the use of Alignment constraints, according to Gordon (2002: 498). Every gridmark has to be aligned with the right or left edge, depending on the language, as determined by the constraint (family) Align(x1,L/R). Every syllable separating the gridmark from its designated edge adds a violation. Since this is true for every gridmark, the constraint automatically reduces the number of gridmarks. If Alignment is satisfied, one and only one gridmark is created, located at the left or right edge, depending on the language.1 In order to achieve rhythmic alternation, then, *Lapse must be ranked higher than Align(x1,L/R). I illustrate the minimization effect of Align(x1,L/R) in (8):

(8)                        Align(x1,L/R)
☞ a.   x   x   x
     1 2 3 4 5 6 7         *,***,***** (9)
   b. x   x   x   x
     1 2 3 4 5 6 7         **,****,****!** (12)

In Araucanian, Align(x1,L/R) has maximal effect, reducing the number of gridmarks. *Lapse, however, must be satisfied. The result is that the minimal number of gridmarks is inserted, such that no violation of *Lapse is created. To account for the fact that the last syllable is stressed in even-parity words, Align(x1,R) must dominate Align(x1,L) in Araucanian. This is shown in (9).

1 Such systems do exist. They are analyzed with the ranking Align(x1,L/R) >> *Lapse. These are languages where, in principle, one stress is created at the left/right edge of a word. In older theories, these languages were accounted for with unbounded feet (e.g. Hayes 1980; Hammond 1984; see Kager 1995 for an overview). Since Prince (1985) there has been general agreement that unbounded feet do not exist. The most common force creating additional stresses in languages that tend to prefer just one stress in each word is Weight-to-Stress (Prince 1983; Prince and Smolensky 1993; Kager 1999). If this constraint is highly ranked, a language has one stress in each word, unless a word contains one or more heavy syllables. For a typology of unbounded systems where heaviness does not play a role, see Gordon (2002). For a typology of unbounded systems where weight does play a role, see Hyde (2001, 2006).

(9)                        Align(x1,R)          Align(x1,L)
☞ a.   x   x   x
     1 2 3 4 5 6           **,**** (6)          *,***,***** (9)
   b. x   x   x
     1 2 3 4 5 6           *,***,***!** (9)     **,**** (6)
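The gradient evaluation used in (8) and (9) can be reproduced as follows; the function name and the grid encoding are ours.

def align_x1(grid, edge="L"):
    """Sum, over all gridmarks, of the syllables separating each mark
    from the designated edge."""
    n = len(grid)
    return sum((i if edge == "L" else n - 1 - i)
               for i, stressed in enumerate(grid) if stressed)

araucanian_odd = [0, 1, 0, 1, 0, 1, 0]   # stresses on syllables 2, 4, 6
suruwaha_odd   = [1, 0, 1, 0, 1, 0, 1]   # stresses on syllables 1, 3, 5, 7

assert align_x1(araucanian_odd, "L") == 9      # 1 + 3 + 5, as in (8a)
assert align_x1(suruwaha_odd, "L") == 12       # 0 + 2 + 4 + 6, as in (8b)
assert align_x1([0, 1, 0, 1, 0, 1], "R") == 6  # 4 + 2 + 0, as in (9a)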

To account for the fact that the number of gridmarks is maximized in odd-parity words in Suruwaha, Gordon proposes AlignEdges, which demands that a syllable located at the edge must have a gridmark (Gordon 2002: 497). In Suruwaha, this constraint must dominate Align(x1,R), which explains why four gridmarks are created in a seven-syllable word, rather than just three, as in Araucanian. Notice that AlignEdges also requires a gridmark on the initial syllable in the even-parity words of Suruwaha. However, in this environment, a gridmark on the first syllable either creates a clash, or a lapse, as shown in (10). (10)

x x   x   x
1 2 3 4 5 6
 ↑ clash

x     x   x
1 2 3 4 5 6
   ↑ lapse

Both *Clash and *Lapse dominate AlignEdges in Suruwaha. This explains why the final, rather than the initial, syllable is stressed in even-parity words. This is the basic system proposed by Gordon. I will now demonstrate that it cannot derive Anti-Garawa, which is given again in (11), but without foot structure. (11)

Anti-Garawa
  x   x   x
1 2 3 4 5 6
  x   x     x
1 2 3 4 5 6 7

To guarantee that the final syllable is stressed, AlignEdges must be high-ranked. However, in even-parity words the initial syllable should not be stressed by AlignEdges. This constraint must therefore be lower-ranked than *Clash and *Lapse, just as in Suruwaha. To ensure that in even-parity words the final syllable is stressed, we also have to rank Align(x1,R) above Align(x1,L), again just as in Suruwaha. Interestingly, with the two rankings *Clash, *Lapse >> AlignEdges and Align(x1,R) >> Align(x1,L) it is impossible to derive the pattern in (11), which contains a lapse before the last stressed syllable in odd-parity words. This pattern violates AlignEdges and *Lapse. Inserting a gridmark on the initial syllable solves both violations, giving (12):

(12) Anti-Garawa resolved (Suruwaha)
  x   x   x
1 2 3 4 5 6
x   x   x   x
1 2 3 4 5 6 7

Insertion of the initial gridmark creates a system with a binary rhythm. This is a possible system, which is exemplified by Suruwaha. The same reasoning applies to Anti-Piro, whose pattern is given again in (13), without foot structure. (13)

(13) Anti-Piro
  x   x   x
1 2 3 4 5 6
  x     x   x
1 2 3 4 5 6 7

The presence of stress on the final syllable indicates that AlignEdges is high-ranked, as in Suruwaha. To ensure that in even-parity words the final syllable is stressed, Align(x1,R) must dominate Align(x1,L). With these rankings, it is impossible to derive the stress pattern of odd-parity words in (13). Again, this pattern has a lapse, and violates AlignEdges. Both violations can be eliminated by inserting a gridmark over the first syllable, and by moving the gridmark of the second syllable to the third, as shown in (14). (14)

(14) Anti-Piro resolved (Suruwaha)
  x   x   x
1 2 3 4 5 6
x   x   x   x
1 2 3 4 5 6 7

Again this pattern is actually attested, and is exemplified by Suruwaha. I have shown in this section that grid-only frameworks can easily explain the asymmetries occurring in binary rhythmic patterns. They do so in a way that is as simple as it is radical. Foot structure is eliminated, and only stresses are preserved, represented as gridmarks. As far as the distribution of stress is concerned, grid-only approaches do rather well. Surely this explains why the grid-only approach is still adopted (see for recent applications Karvonen 2005, 2008; also Gordon 2003: 179, note 4, where he confidently states that “a grid-based theory of stress offers a closer fit to the typology of stress than foot-based metrical theories”). It seems, then, that grid-only approaches are sufficient, and that foot structure is therefore superfluous. Interestingly, however, if we go beyond the distribution of stress, and broaden our scope to other phenomena, we find abundant evidence for feet. In the next section I will present a few arguments in favor of foot structure. Then, in §4, I will investigate what this means for the theory of stress.

3 Evidence for foot structure

In this section I present evidence for the existence of foot structure. First I demonstrate that, in a sequence of two unstressed syllables, not every syllable behaves identically. Then I show that foot structure is necessary to define the domains within which certain phenomena apply. The third type of evidence for foot structure has to do with weight. (For further discussion of foot structure see chapter 39: stress: phonotactic and phonetic evidence and chapter 40: the foot.)

3.1  Unstressed syllables do not behave identically in lapses

De Lacy (2002, 2007a) shows that in Ayutla Mixtec, spoken in Mexico, there is a close relation between the position of stress and tonal quality. One generalization is that the leftmost high tone receives word stress. This is illustrated by the following examples:

(15)  MLˈH   [kenùˈrá]    'his tobacco'
      LˈHH   [sùˈtjáí]    'I will swim'
      LMˈH   [sùtjaˈí]    'I will not swim'

If all tones in a word are the same, the first syllable becomes stressed:

(16)  ˈHHH   [ˈœínírá]    'he understands'
      ˈLLL   [ˈœàtùì]     'my trousers'
      ˈMMM   [ˈœcJera]    'his pineapple'

Prince (1983) establishes that patterns of this type pose no problems for the grid-only approach, as they are an instance of what he calls the "default-to-same-side" pattern.2 Interestingly, however, Ayutla Mixtec has yet another pattern, which turns out to be quite problematic. If a word has a sequence of a high tone immediately followed by a low tone, then the high tone must be stressed, even if it is preceded by another high tone. If there is more than one such sequence, then the high tone of the first sequence is stressed. This is illustrated in (17).

(17)  HˈHL    [lúˈlúrà]      'he is small'
      LMHˈHL  [vìœcíˈráà]    'he is not cold'
      ˈHLHL   [ˈœáàœíìʔ]     'is not eating'

2 In systems where "default-to-same-side" obtains, a gridmark is assigned to the syllables with a certain property (in this case the syllables with a high tone); one End-Rule applies at a low level and another End-Rule at a higher level. In Ayutla Mixtec, both End-Rules apply at the left.

The low tone attracts stress to the high tone immediately to its left. To explain this pattern, de Lacy proposes that HL is parsed as a trochee, (HL). Thus, in a word with the structure HHL, as in the first example in (17), the parse H(HL) is better than (HH)L. The result is that a high tone immediately preceding a low tone receives word stress, because it is the head of a trochee. With a grid-only approach it is difficult to explain these facts. It seems as if an unstressed syllable with a low tone behaves differently from an unstressed syllable with a high tone; the former but not the latter seems to avoid a lapse. By its very nature, the grid structure of an unstressed syllable with a low tone is identical to the grid structure of an unstressed syllable with a high tone. This indicates that a grid-only framework cannot account for these facts.3 I now turn to the second type of evidence for foot structure: the proper characterization of domains.
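The three generalizations in (15)–(17) can be stated procedurally. The sketch below is a paraphrase of the descriptive pattern, not of de Lacy's formal analysis: an H immediately followed by L (the head of a (HL) trochee) attracts stress; otherwise the leftmost H is stressed; otherwise the initial syllable is.

def mixtec_stress(tones):
    """Return the 0-based index of the stressed syllable for a string
    over {'H', 'M', 'L'}, following the generalizations in the text."""
    for i in range(len(tones) - 1):      # 1. first H immediately before L
        if tones[i] == 'H' and tones[i + 1] == 'L':
            return i
    if 'H' in tones:                     # 2. otherwise the leftmost H
        return tones.index('H')
    return 0                             # 3. all tones alike: initial syllable

print(mixtec_stress('LMH'))     # 2 -> LM'H, cf. (15)
print(mixtec_stress('HHH'))     # 0 -> 'HHH, cf. (16)
print(mixtec_stress('HHL'))     # 1 -> H'HL, cf. (17)
print(mixtec_stress('LMHHL'))   # 3 -> LMH'HL, cf. (17)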

3.2  The characterization of domains

A stressed syllable and a neighboring unstressed syllable often form a domain within which a phenomenon applies. A grid-only approach is notoriously bad at defining domains of this type, as it can only define a primary stressed position, secondary stressed positions, and unstressed positions. It is impossible to express the fact that some unit creates a bond with another unit.4 An example showing this is provided by Guugu Yimidhirr, a language spoken in Australia (Zoll 2004; Elías-Ulloa 2006). In this language long vowels can only occur in the first and/or the second syllable of a word, but nowhere else. Some illustrative examples, from Elías-Ulloa (2006: 231–232), are given in (18):

(18)  a.  ˈguːgu             'language-abs'
          ˈbuːraˌjaj         'water-loc'
          ˈdaːbaˌŋalŋaˌla    'ask-red-imp'
      b.  maˈgiːl            'branch-abs'
          maˈjiːŋu           'food-purp'
          maˈgiːlnajˌgu      'branch-pl-emph'
      c.  ˈbuːraːj           'water-abs'
          ˈdjiːraːlˌgal      'wife-adess'
          ˈbuːraːjˌbigu      'water-loc-emph'

In a theory that recognizes feet, it is easy to characterize the domain within which the long vowels can occur; it is the initial foot, which is also the head of the word. This foot can either be a trochee or an iamb, depending on the presence and location of a long vowel. On the other hand, in a grid-only account it is difficult to understand why long vowels are restricted to the first two syllables of the word. This is a consequence of the fact that in this approach the first two syllables cannot be characterized as a domain. The facts illustrated in (18) are therefore problematic for a grid-only framework. Let us now turn to the third type of evidence: syllable weight.

3 Other phenomena showing that not all syllables behave in the same way in a lapse are high vowel deletion in Old English (Dresher and Lahiri 1991) and vowel balance effects in Scandinavian (Bye 1996) and Old Frisian (Smith 2004; Smith and van Leyden 2007). Unfortunately, due to lack of space I cannot discuss these phenomena here.

4 The domain-defining character of the foot is the oldest evidence in favor of its existence (Selkirk 1980). It turns out to be very difficult to find cases where it is absolutely impossible to define a domain with grids only. An interesting example is provided by Pearce (2006), who argues that feet in Kera create the domains for tone association. I suspect, however, that an alternative is possible with gridmarks only, such that (certain) tones tend to anchor to strong positions on the grid. The role of the foot as a domain delineator is systematically eliminated in Majors (1998). She argues that feet do not play a role in stress-dependent harmony.
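The restriction itself is easy to state as a predicate over syllabified forms: outside the initial (head) foot, no long vowel may appear. A toy check, with simplified, pre-syllabified forms assumed for illustration:

def licit(syllables):
    """True iff every long vowel ('ː') sits in the first two syllables,
    i.e. inside the initial bisyllabic foot."""
    return all('ː' not in syl for syl in syllables[2:])

print(licit(['guː', 'gu']))                  # True, cf. (18a)
print(licit(['ma', 'giːl', 'naj', 'gu']))    # True, cf. (18b)
print(licit(['ma', 'gi', 'naːj', 'gu']))     # False: hypothetical form with a
                                             # long vowel in the third syllable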

3.3  Evidence from syllable weight

Prince (1983) shows that the grid-only framework can explain the relationship between syllable weight and the position of stress. This seems to suggest that the effect which weight has on the position of stress can be described without making use of foot structure. However, once we broaden our view beyond the distribution of stress in the strict sense, it becomes obvious that weight does provide us with convincing evidence for foot structure. Here I present one case: allomorph selection in Shipibo, spoken in Brazil and Peru.5 Shipibo has two allomorphs meaning 'again': /ribiː/ and /riːba/. The allomorph /ribiː/ has a long vowel in the second syllable, /riːba/ in the first syllable. Elías-Ulloa (2006) shows that allomorph selection is determined by foot structure. The language has two different feet: moraic trochees and iambs. The former is preferred; iambs are only built if the construction of moraic trochees is not possible. Furthermore, both moraic trochees and iambs must be bisyllabic. Two other high-ranking constraints are relevant here. The constraint Weight-to-Stress is inviolable, so a heavy syllable cannot be left unparsed. There is also one Alignment constraint that is high-ranked: the left edge of a word must be aligned with the left edge of a foot. Consider the following forms (from Elías-Ulloa 2006: 7):

(19)  Allomorph selection in Shipibo
      a.  (pi-ˈriː)(ba-kɨ)              'eat-again-past'
      b.  (ˈputa)(riˌbiː)-kɨ            'throw-again-past'
      c.  (ˈputa)(-ma-ˌriː)(ba-kɨ)      'throw-caus-again-past'
      d.  (ˈputa)(-ˌjama)(-riˌbiː)-kɨ   'throw-neg-again-past'

5 Other cases are reduplication in Kosraean (Kennedy 2005) and tonal spread in Capanahua (Hagberg 1993).

In (19a), /riːba/ is selected, allowing all syllables of the word to be parsed into feet. If the other allomorph had been selected, the form /pi-ribiː-kɨ/ would have been created. This form cannot be correctly parsed into foot structure. One possible parse is (ˈpi-ri)(ˌbiː-kɨ), but this representation contains an uneven trochee, which is not allowed in Shipibo. Another realization might be pi-(riˈbiː)-kɨ. But this structure violates the Alignment constraint, which is not possible either. In (19b), /ribiː/ is selected; this is again explained by foot structure. If the other allomorph had been selected, a form would have been created that could not be properly parsed, viz. /puta-riːba-kɨ/. The parse (ˈputa)(-ˌriːba)-kɨ is unacceptable, because it contains an uneven trochee. The alternative (ˈputa)(-ˌriː)(ˌba-kɨ) is also bad, because it contains a monosyllabic iamb, a type of foot that is non-existent in this language. Yet another alternative would be pu(ta-ˈriː)(ˌba-kɨ), but this representation violates the requirement that a word should begin with the left edge of a foot. This constraint is very highly ranked in Shipibo. The form that is actually realized, (19b), does not suffer from any of these problems. It is parsed into appropriate feet and only the final syllable is left unparsed. The same can be said for (19c) and (19d). We see, then, that foot structure explains this instance of allomorph selection. In the grid-only framework the distribution of the two allomorphs remains a puzzle. Consider again (19a), or rather its alternative, where the wrong allomorph is selected. This is the form /pi-ribiː-kɨ/. In a theory that only has gridmarks it is possible to parse this form as in (20):

(20)
            x
      x     x
      x  x  x   x
      pi-ri-biː-kɨ

There seems to be no particular reason why this form should be rejected. There is a smooth rhythmic alternation, and the heavy syllable is stressed. In a grid-only framework, then, no constraint is able to eliminate this form. The reason, of course, is that there are no feet in this approach. It therefore lacks the essential ingredients with which the distribution of the two allomorphs in Shipibo can be explained. In this section I have presented three types of evidence for foot structure: the phonology of lapses, the characterization of domains, and the interaction between weight and foot parsing. On the basis of this evidence we can conclude that foot structure does exist. Gordon's approach may be economical, and it may also explain the asymmetries, but its lack of foot structure makes it inadequate. Consequently, we are forced to explain the asymmetries in a theory of stress that does include foot structure. A priori, this looks like an almost impossible task, because, as we saw in §2, it is precisely the presence of foot structure that makes it so difficult to explain why certain systems do not exist. In the next section I will present a brief overview of the most important aspects of Hyde's (2001, 2002) work, where an account is offered of the asymmetries in a theory that does include foot structure.
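Before turning to that work, the foot-based logic behind (19) can be made concrete. The sketch below encodes syllables as weights ('L' light, 'H' heavy) and accepts a word only if, starting at the left edge, it parses exhaustively into bisyllabic (L L) trochees or (L H) iambs, with at most a final light syllable left stray; a stray heavy syllable is excluded by Weight-to-Stress. The encoding, and the final 'L' standing in for the past suffix, are my simplifications of Elías-Ulloa's analysis, not his implementation.

def parses(weights):
    """True iff the weight string foots exhaustively from the left edge."""
    if weights in ('', 'L'):     # fully footed, or one stray final light syllable
        return True              # (a stray 'H' fails the next test automatically)
    return weights[:2] in ('LL', 'LH') and parses(weights[2:])

def pick(stem, allomorphs):
    """Return the allomorphs yielding a parsable stem-allomorph-past word."""
    return [a for a in allomorphs if parses(stem + a + 'L')]

# 'again' is /ribiː/ = 'LH' or /riːba/ = 'HL'; stems: pi- = 'L', puta- = 'LL'
print(pick('L',  ['LH', 'HL']))   # ['HL']: (pi-riː)(ba-kɨ) parses, cf. (19a)
print(pick('LL', ['LH', 'HL']))   # ['LH']: (puta)(ribiː)-kɨ parses, cf. (19b)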

4  Explaining parsing asymmetries with foot structure

One of the central ideas in Hyde (2001, 2002) is that iambs and trochees are derived by word-edge Alignment constraints; there are no separate constraints of the type Foot=Iamb and/or Foot=Trochee. In particular, the following two constraints are important:

(21)  a.  Hds-L
          The left edge of every foot-head is aligned with the left edge of some prosodic word.
      b.  Hds-R
          The right edge of every foot-head is aligned with the right edge of some prosodic word.

If Hds-L is high-ranked, then a word is parsed with trochees, while if Hds-R is high-ranked, it is parsed with iambs. This is similar to Gordon’s approach.

The difference is that in Gordon's theory gridmarks are subject to Alignment, whereas in Hyde's approach foot-heads undergo Alignment. I demonstrate the effects of Alignment with the tableau in (22). Hyde's notation uses vertical lines to indicate headedness; in the tableau the candidates are identified as the trochaic parse (heads on syllables 1, 3, 5) and the iambic parse (heads on syllables 2, 4, 6).

(22)      /σ σ σ σ σ σ/                       Hds-L              Hds-R
   ☞ a.  trochaic parse, heads 1, 3, 5       **,**** (6)        *,***,***** (9)
      b.  iambic parse, heads 2, 4, 6         *,***,***!** (9)   **,**** (6)
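The violation counts in (22) follow from simple distance sums. A sketch of my own, with foot-heads given as 0-based syllable indices and one violation mark per syllable separating a head from the relevant word edge:

def hds_l(heads, n):
    return sum(heads)                       # distance of each head to the left edge

def hds_r(heads, n):
    return sum(n - 1 - h for h in heads)    # distance of each head to the right edge

n = 6
trochaic = [0, 2, 4]    # heads on syllables 1, 3, 5 = candidate (22a)
iambic   = [1, 3, 5]    # heads on syllables 2, 4, 6 = candidate (22b)

print(hds_l(trochaic, n), hds_r(trochaic, n))   # 6 9
print(hds_l(iambic, n),   hds_r(iambic, n))     # 9 6
# With Hds-L ranked above Hds-R, the trochaic parse wins (6 < 9),
# exactly as in tableau (22).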

In Hyde's framework all syllables must be dominated by a foot. There is no such thing as weak layering (Itô and Mester 1992). Absence of weak layering triggers foot construction, and Alignment maximally reduces the number of feet. In principle, then, a trochee can only be created by Hds-L. Likewise, iambs can only be created by Hds-R. From this it follows that there is only a two-way contrast. In classical foot theory there is a four-way contrast: in principle, iambs as well as trochees can be right-aligned as well as left-aligned. The reduction to a two-way contrast is the basis of Hyde's explanation of the asymmetry problem. Hyde's theory has two other important ingredients. The first one is a constraint, GridmarkMapping, which requires that every foot contain a gridmark occupying some head position. Mapping is illustrated in (23).

(23)  Trochees           Iambs
      σ σ σ σ σ σ        σ σ σ σ σ σ
      x   x   x            x   x   x

The second important ingredient is the idea that feet can overlap. This means that the following configuration is allowed:

(24)  Overlapping feet

        F   F
       / \ / \
      σ   σ   σ
          x

Because of the presence of the gridmark in the head position of the first foot, both feet satisfy GridmarkMapping. This is because the gridmark is located in the domain of both feet. Hyde is able to explain the minimization patterns by making use of overlapping feet. Recall that these are the patterns where the odd-parity words have relatively few stresses. (25) illustrates this with Araucanian and its mirror image Nengone (Hyde 2002: 320), a language of New Caledonia.

(25)  Nengone
      x   x   x
      1 2 3 4 5 6

        x   x   x
      1 2 3 4 5 6 7

      Araucanian
        x   x   x
      1 2 3 4 5 6

        x   x   x
      1 2 3 4 5 6 7

Nengone has trochees. Its foot-heads are therefore left-aligned. In odd-parity words a sequence of two adjacent foot-heads is created at the left edge. This is the consequence of maximal satisfaction of Hds-L. Inserting one gridmark in the domain of the two feet satisfies GridmarkMapping. Consequently, only three stresses are generated, even though there are four feet. Araucanian is the mirror image of Nengone. Maximal satisfaction of Hds-R creates iambs, and in odd-parity words there is a sequence of two adjacent foot-heads at the right edge. The last two feet satisfy GridmarkMapping with only one gridmark. The introduction of overlapping feet, then, makes it possible to account for the minimization patterns. The maximization patterns, where odd-parity words receive more stresses than is necessary to create a rhythmic alternation, are explained with two additional constraints, given in (26).

(26)  a.  PrWd-L
          The left edge of every prosodic word is aligned with the left edge of some foot-head.
      b.  PrWd-R
          The right edge of every prosodic word is aligned with the right edge of some foot-head.

I illustrate these constraints with Maranungku and Suruwaha.

(27)  Maranungku
      x   x   x
      1 2 3 4 5 6

      x   x   x   x
      1 2 3 4 5 6 7

      Suruwaha
        x   x   x
      1 2 3 4 5 6

      x   x   x   x
      1 2 3 4 5 6 7

In Maranungku, the left edge of a prosodic word must be aligned with a foot-head. A trochee is therefore built at the left edge. The heads of all other feet must be aligned with the right edge of a word. This would normally mean that iambs would have to be built, but Hyde stipulates that two adjacent feet can never dominate two adjacent unstressed syllables. This excludes the following structure:

(28)  Excluded by stipulation

        F       F
       / \     / \
      σ   σ   σ   σ
      1   2   3   4
      x           x

The implication of this stipulation is that if PrWd-L, which requires a trochee at the left edge, is high-ranked, then the other feet cannot be iambic, even though iambs better satisfy right-alignment. The mirror image is also true, of course; if PrWd-R, which requires an iamb at the right edge, is high-ranked, then the other feet cannot be trochaic, even though trochees better satisfy left-alignment. Maranungku has the ranking PrWd-L >> Hds-R, which derives the Maranungku representation in (27). This is an instance of maximization, because, in an odd-parity word, as many gridmarks are present as there are feet. Suruwaha has the ranking PrWd-R >> Hds-L, deriving the Suruwaha pattern in (27). Again, in the odd-parity words the number of stresses equals the number of feet. Maranungku has traditionally been analyzed as a language with trochees built from left to right, with a degenerate foot at the right. Suruwaha has been described as its mirror image, with iambs built from right to left and a degenerate foot at the left (cf. (4)). In principle, every foot receives a gridmark, although, due to overlapping feet, this does not necessarily mean that there are as many gridmarks as there are feet. At the right edge, Non-finality might exclude the presence of a gridmark over the last syllable. Furthermore, at the left edge, the first syllable can be subject to a constraint called InitialGridmark (Hyde 2002: 320), which requires the presence of a gridmark on the first syllable of a word. If these constraints are high-ranked, this can lead to a situation where a foot is not accompanied by a gridmark. Pintupi has high-ranking Non-finality, while Garawa has high-ranking InitialGridmark, as illustrated in (29).

(29)  Garawa
      x   x   x
      1 2 3 4 5 6

      x     x   x
      1 2 3 4 5 6 7

      Pintupi
      x   x   x
      1 2 3 4 5 6

      x   x   x
      1 2 3 4 5 6 7

Garawa is like Nengone (cf. (25)) in the sense that feet are attracted to the left by high-ranking Hds-L. There are also two differences. In Garawa, InitialGridmark is high-ranked, so the initial syllable must have stress. The constraint against clashing gridmarks is also high-ranked, excluding an immediately following gridmark in the domain of the second foot. With these two constraints dominating GridmarkMapping, the first foot of the two overlapping feet is stressed, whereas the second foot is not. We thus get a stressless trochee. This is a foot with a head, but without a gridmark. The absence of a gridmark in the second foot creates a lapse immediately after the initial stress.
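The interaction just described can be sketched as a simple procedure over foot-head positions: every head wants a gridmark (GridmarkMapping), but higher-ranked InitialGridmark, *Clash, and Non-finality can add or withhold marks. Greedy left-to-right assignment is my simplification for illustration, not Hyde's actual evaluation.

def gridmarks(heads, n, nonfinality=False, initial_gridmark=False):
    """Gridmark positions (0-based) for foot-heads in an n-syllable word."""
    marks = set()
    if initial_gridmark:
        marks.add(0)                      # InitialGridmark: stress syllable 1
    for h in heads:
        if nonfinality and h == n - 1:
            continue                      # Non-finality beats GridmarkMapping
        if h - 1 in marks or h + 1 in marks:
            continue                      # *Clash beats GridmarkMapping
        marks.add(h)
    return sorted(marks)

# Garawa, 7 syllables: overlapping trochees at the left edge, heads 1, 2, 4, 6
print(gridmarks([0, 1, 3, 5], 7, initial_gridmark=True))   # [0, 3, 5]: stresses 1, 4, 6
# Pintupi, 7 syllables: trochees left to right, heads 1, 3, 5, 7
print(gridmarks([0, 2, 4, 6], 7, nonfinality=True))        # [0, 2, 4]: stresses 1, 3, 5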


Pintupi is similar to Maranungku (cf. (27)), but has high-ranking Non-finality. If the final syllable is a foot-head, then it may not receive a gridmark. This creates a lapse at the right edge. With the system described here it is very easy to explain all the asymmetries mentioned in §2, which are listed again in (30).

(30)  Anti-Garawa
         x    x    x
      (1 2)(3 4)(5 6)

         x    x       x
      (1 2)(3 4) 5 (6 7)

      Anti-Pintupi
         x    x    x
      (1 2)(3 4)(5 6)

           x    x    x
      1 (2 3)(4 5)(6 7)

      Anti-Piro
         x    x    x
      (1 2)(3 4)(5 6)

         x       x    x
      (1 2) 3 (4 5)(6 7)

In Anti-Garawa, one iamb is built at the right edge, and the other iambs are built from left to right, so that, in odd-parity words, a lapse is created before the final stress. In Hyde's theory it is impossible to derive such a pattern. There are two ways to stress the final syllable of a word: either Hds-R or PrWd-R is high-ranked. With these two systems, it is impossible to create a lapse before the final stress. With high-ranking Hds-R, we derive Araucanian, as already shown in (25). This is a language with a minimization pattern, where in odd-parity words stresses are economically placed, so as to avoid a lapse. With high-ranking PrWd-R we get a system like Suruwaha, whose basic structure is given in (27). This is a maximization pattern, where an extra stress is created so as to avoid a lapse. Anti-Garawa, then, is ruled out, because it cannot be derived. There is also no place for Anti-Pintupi. The particular property of this imaginary system is that there is no stress on the initial syllable. There is only one way to block a stress on a peripheral syllable, Non-finality, which, however, can only block stress on a final syllable. Since there is no equivalent Non-initiality preventing stress on the initial syllable, Anti-Pintupi cannot be derived. Anti-Piro is also impossible. In this imaginary system, one iamb is built at the left edge, creating a fixed stress on the peninitial syllable. To place a fixed stress at the left periphery two constraints are available: either Hds-L or PrWd-L must be high-ranked. In the former case we derive Nengone, whose basic configurations are shown in (25). Nengone is a minimization pattern, where the minimal number of stresses is economically placed so as to avoid a lapse. With high-ranking PrWd-L Maranungku is derived, as shown in (27). Maranungku displays a maximization pattern, where an additional stress is inserted so as to avoid a lapse. Thus there is no way to generate a system like Anti-Piro. Hyde's system is similar to Gordon's in one sense. Neither uses the classical alignment constraints that refer to feet. In particular, AllFeet-L/R and PrWd-L/R (McCarthy and Prince 1993) are eliminated.

(31)  a.  AllFeet-L/R
          The left/right edge of every foot is aligned with the left/right edge of some prosodic word.
      b.  PrWd-L/R
          The left/right edge of every prosodic word is aligned with the left/right edge of some foot.


In these constraints, Hyde replaces the argument foot by foot-head, as shown in (21) and (26), whereas Gordon replaces it by gridmark. These two changes are almost identical, since a foot-head normally has exactly the same distribution as a gridmark. The crucial difference, of course, is the notion of overlapping feet. We might say that, where Gordon eliminates foot structure entirely, Hyde introduces a new type of structure. Of course, Hyde is aware of the fact that this concept must be motivated on independent grounds, a task which he undertakes in Hyde (2008). Optimality Theory struggles with what Hyde calls the "odd-parity problem." This problem can be divided into two sub-problems: the "even-only problem" and the "odd-heavy problem." The introduction of overlapping feet provides a solution which is not available in standard approaches. For reasons of space I will only discuss the first instance of the odd-parity problem. The even-only problem is caused by the fact that, in odd-parity words, Faithfulness constraints are in conflict with two other constraints: FootBinarity (the constraint that penalizes degenerate feet) and Parse-σ (the constraint requiring that syllables be dominated by feet). Suppose that the Faithfulness constraints are ranked below FootBinarity and Parse-σ. Under this ranking it is better to insert or delete a syllable than to create a violation of either FootBinarity or Parse-σ, as shown in the following tableau:

(32)      /1 2 3 4 5 6 7/           Parse-σ   FootBinarity   Max   Dep
      a.  (1 2)(3 4)(5 6) 7            *!
      b.  (1 2)(3 4)(5 6)(7)                       *!
   ☞ c.  (1 2)(3 4)(5 6)(7 8)                                        *
   ☞ d.  (1 2)(3 4)(5 6)                                       *

This tableau shows what happens to a word with an uneven number of syllables. The first candidate, (32a), contains an unparsed syllable, violating Parse-σ. (32b) has a final, monosyllabic foot, violating FootBinarity. In (32c), a vowel is inserted. In this way an extra syllable is created, so that at the right edge a binary foot is built. FootBinarity and Parse-σ are therefore satisfied, although Dep is violated. Finally, in (32d) a vowel is removed, so that both Parse-σ and FootBinarity are again satisfied, although Max is violated. We can see, then, that in an odd-parity word Parse-σ and FootBinarity can be satisfied if a syllable is inserted or deleted. From this it follows that there should be languages in which all words contain an even number of syllables. These languages would have the ranking Parse-σ, FootBinarity >> Max, Dep. However, no language like this has ever been attested. This illustrates the phenomenon referred to as the even-only problem. Hyde (2008) shows that the even-only problem does not arise in a theory with overlapping feet. In such a theory, it is possible to satisfy FootBinarity and Parse-σ without violating Faithfulness. In other words, with overlapping feet, no conflict arises between FootBinarity and Parse-σ on the one hand and the two Faithfulness constraints on the other. This is shown in (33):

(33)      /1 2 3 4 5 6 7/                      Parse-σ   FootBinarity   Max   Dep
   ☞ a.  (1 2)(3 4)(5 6)(6 7)  (overlapping feet)
      b.  (1 2)(3 4)(5 6)(7 8)                                                  *!
      c.  (1 2)(3 4)(5 6)                                                *!

The candidate with overlapping feet, (33a), does not violate any constraint. In contrast, the two candidates in which the uneven number of syllables has been changed into an even number do violate Faithfulness. The candidate with overlapping feet thus harmonically bounds the two candidates where Faithfulness is violated. This means that the two constraints FootBinarity and Parse-σ can never have the effect of changing an underlying form with an uneven number of syllables into a surface form with an even number. Overlapping feet, Hyde concludes, are the solution to the even-only problem. In this section I have shown that asymmetries can be explained with a representation of word stress that includes feet. Iambic and trochaic structure is not the result of the constraints Foot=Iamb and Foot=Trochee, but rather of the alignment of foot-heads. The patterns thus established are subject to GridmarkMapping. Syllables containing a gridmark are phonetically stressed. In representations with overlapping feet there is a tendency to minimize the number of gridmarks, because a sequence of two overlapping feet satisfies GridmarkMapping with one gridmark. Under the pressure of certain constraints (*Clash, as in Garawa, or Non-finality, as in Pintupi) it can happen that a foot ends up without a gridmark. Thus, in Hyde's framework feet are not necessarily stressed, although they are always headed. In the next section I will briefly point out that Hyde is not very explicit about one property of his theory that is quite innovative. His theory is the only one, as far as I know, that makes use of feet that are headed, even though they are not stressed.
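The harmonic-bounding argument can be verified mechanically. In the sketch below the violation profiles (Parse-σ, FootBinarity, Max, Dep) are simply read off tableaux (32) and (33) and checked under every possible ranking of the four constraints; the candidate labels are mine.

from itertools import permutations

CANDIDATES = {
    'overlap parse of 7 syllables': (0, 0, 0, 0),   # (33a)
    'epenthesis to 8 syllables':    (0, 0, 0, 1),   # (33b): violates Dep
    'deletion to 6 syllables':      (0, 0, 1, 0),   # (33c): violates Max
}

def winner(ranking):
    """Optimal candidate under a ranking (a permutation of column indices)."""
    return min(CANDIDATES, key=lambda c: [CANDIDATES[c][i] for i in ranking])

assert all(winner(r) == 'overlap parse of 7 syllables'
           for r in permutations(range(4)))
print('(33a) harmonically bounds (33b) and (33c): it wins under all 24 rankings')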

5  Stresslessness is not the same as headlessness

In Hyde's framework feet are not necessarily stressed. In overlapping feet the number of stresses is smaller than the number of feet. Let us consider some instances of overlapping feet, and see how they interact with stress.

(34)  Nengone (overlapping trochees)
        F   F
       / \ / \
      σ   σ   σ
          x

      Araucanian (overlapping iambs)
        F   F
       / \ / \
      σ   σ   σ
          x

      Garawa (overlapping trochees)
        F   F
       / \ / \
      σ   σ   σ
      x


Nengone has overlapping trochees, with a stressed syllable in the middle. Araucanian has overlapping iambs, also with stress in the middle (cf. (25)). Finally, Garawa has overlapping trochees, with a stress in initial position, as shown in (29).6 These representations indicate that headedness is not expressed by gridmarks. In Hyde's framework a foot can be an iamb, even though it does not have a gridmark in its final syllable, as in Araucanian; similarly, a foot can be a trochee, even without a gridmark in its initial syllable, as in Nengone and Garawa. This is a unique aspect of Hyde's theory. Most recent theories claim that headedness is expressed on the grid. Some theories claim that feet must have heads, so that for each foot there must be a gridmark accompanying it. Theories of this type invoke a principle like the Faithfulness Condition, originally formulated in Halle and Vergnaud (1987: 15–16).7 Meanwhile, another theory on the relation between foot structure and headedness has been developed. In this theory, headedness is disconnected from foot structure. There are two foot types: headed feet, which are accompanied by a gridmark, and headless feet, which are not accompanied by a gridmark. The loose connection between headedness and foot structure is expressed by the Separability Hypothesis (Crowhurst and Hewitt 1995b: 39), which states that feet can be headed or headless. Normally, a foot does have a head, but under the pressure of certain constraints, it can happen that a foot is not able to acquire a head. Proponents of the Separability Hypothesis are Hagberg (1993), Crowhurst and Hewitt (1995a, 1995b), Bye (1996), and Crowhurst (1996). Bye occupies a special position among them, because he assumes that feet can sometimes have two heads; when this happens there are two gridmarks in a single foot.8 It is clear that Hyde's theory differs from both views just mentioned. On the one hand, it (implicitly) uses the Faithfulness Condition, because all feet are inherently headed. (Heads are not expressed on the grid, but by line structure, as we have already seen; a head has a vertical line, whereas a dependent has a slanted line.) Yet the theory also recognizes the Separability Hypothesis, in the sense that there is a separate mode of representation, the grid, where a gridmark may or may not accompany a foot. Hyde's theory, therefore, does not recognize headless feet, but it does recognize stressed (headed) feet and unstressed (headed) feet. In this sense, Hyde's theory is certainly representationally richer than all other recent theories. There is some evidence from Vogul, a language of Siberia, that feet can be headed, even if they are not stressed. In Vogul, stress does not seem to be quantity-sensitive (Vaysman 2009). The main stress is realized on the initial syllable, and there are secondary stresses on every other syllable thereafter. If, however, the final syllable is a target for secondary stress, this stress is not realized. Syllable quantity is irrelevant for this distribution. Only syllables with long vowels are heavy: words with a heavy syllable have exactly the same stress patterns as words with only light syllables. These regularities are illustrated in (35).

6 The mirror image of Garawa (overlapping iambs with stress on the final syllable) does not exist. This system could only arise with a constraint requiring stress on the final syllable, together with high-ranking *Clash. However, in Hyde's system there is no constraint requiring stress on the final syllable. Therefore, two overlapping iambs at the right edge will always have stress in the middle (the minimization pattern).

7 Halle and Vergnaud's Faithfulness Condition should not be confused with the Faithfulness constraints of OT. The former is a condition on the relation between a foot and its head.

8 Level stress in Scandinavian dialects necessitates this representation, according to Bye (1996).

(35)  Stress in Vogul (Vaysman 2009: 207–242)
      a.  No stress on the last syllable (short vowel in the first syllable)
          ˈsam-e-nəl        'his/her/its eye (abl)'
          ˈat-e-nəl         'his/her/its smell (abl)'
      b.  Alternating stress (short vowel in the first syllable)
          ˈsam-aˌɣanəl      'their (dual) eyes'
          ˈaki-ˌɣanəl       'their (dual) uncles'
      c.  No stress on the last syllable (long vowel in the first syllable)
          ˈjoor-e-nəl       'his/her/its strength (abl)'
          ˈaat-e-nəl        'his/her/its hair (abl)'
      d.  Alternating stress (long vowel in the first syllable)
          ˈsaaɣrap-ˌe-nəl   'his/her/its axe (abl)'
          ˈtootap-ˌe-nəl    'his/her/its chest (abl)'

These facts seem to suggest that Vogul has trochees which are assigned without reference to quantity, i.e. quantity-insensitive trochees. A closer look at the language, however, reveals that this hypothesis cannot be maintained. Some affixes have a number of allomorphs. Vaysman convincingly shows that the selection of the allomorphs is determined by the prosody. The moraic trochee plays a decisive role in this process. One example is the morpheme 'your (sg)', which has no fewer than four allomorphs. After stems ending in a consonant, either [-an] or [-anən] is selected, and after stems ending in a vowel, either [-n] or [-nən]. The choice is narrowed down further by properties of the stem's prosodic structure. If the initial syllable of the stem has a short vowel (that is, if it is a light syllable), then [-n] or [-an] is selected; if, however, the stem's initial syllable is heavy, or if the stem is bisyllabic, then [-nən] or [-anən] is selected. These facts are illustrated in (36).

(36)  Allomorphy in Vogul (Vaysman 2009: 207–242)
      a.  CVC-stem
          sam-an       'your (sg) eyes'
          put-an       'your (sg) ice-crusts'
          pos-an       'your (sg) lights'
      b.  CV(C)CVC-stem
          pasan-anən   'your (sg) table'
          apiɣ-anən    'your (sg) grandson'
          isnas-anən   'your (sg) windows'
      c.  CVVC-stem
          saam-anən    'your (sg) corners'
          puut-anən    'your (sg) cauldrons'
          oos-anən     'your (sg) sheep'
      d.  CVCV-stem9
          ala-n        'your (sg) roofs'
          pici-n       'your (sg) nests'
          oma-n        'your (sg) female relatives'

9 There are no CV-stems in Vogul, i.e. monosyllabic stems ending in a vowel.

      e.  CVVCV-stem
          saali-nən    'your (sg) reindeer'
          iici-nən     'your (sg) evenings'
          oopa-nən     'your (sg) paternal grandfathers'

Vaysman argues that allomorph selection in Vogul can be explained if it is assumed that the language has moraic trochees. It has a type of foot, in other words, where the head is on the left, and where the number of moras is exactly two. Using moraic trochees we can see that an allomorph is selected in such a way that all syllables of a word are parsed. I demonstrate this in (37) with C-final stems; feet are enclosed in brackets.

(37)  Exhaustive parsing in Vogul
      CVC-stem:        (sam-an)
      CV(C)CVC-stem:   (pasan)(-anən)
      CVVC-stem:       (saam)(-anən)

If the allomorphs were distributed differently, parsing could not be exhaustive, as shown in (38).

(38)  Illicit non-exhaustive parsing in Vogul
      CVC-stem:        (sam-an) ən
      CV(C)CVC-stem:   (pasan) -an
      CVVC-stem:       (saam) -an

If [-anən] were added to a CVC-stem, only the first two syllables could be parsed in a moraic trochee; the third syllable would have to remain stray, since no moraic trochee can be built on the monomoraic syllable. Similarly, if [-an] were added to a bisyllabic stem or to a monosyllabic stem with a long vowel, the suffix could not be parsed in a foot, because after a stem with a fully fledged moraic trochee it is impossible to give a light syllable its own moraic trochee. If the language has moraic trochees, then how can we account for the fact that the distribution of stress seems to indicate otherwise? Recall that, as far as the distribution of stress is concerned, Vogul seems to have syllabic trochees, a type of foot that is assigned without taking quantity into consideration. Vaysman proposes that the moraic trochees of Vogul can only be accompanied by a gridmark (stress) if the immediately preceding syllable is not stressed. Formally, this is an instance of clash resolution: in order to satisfy *Clash, no stress (gridmark) is assigned to the head of a trochee if it is immediately preceded by a stressed syllable. I illustrate this in (39).

(39)  (sa ma)(ɣa nəl)    (joo)(re nəl)
       x      x           x

In [ˈsam-aˌɣanəl], both trochees can be assigned a gridmark, since the two stresses are separated by an unstressed syllable. In [ˈjoor-e-nəl], on the other hand, no gridmark can be assigned to the second trochee, because no syllable separates the two foot-heads. The Vogul facts are quite important. On the grounds of allomorph selection, we know that the language must have moraic trochees, i.e. feet in which the syllable on the left is the head. Without a head in this position it would be impossible to characterize the Vogul foot as a trochee, rather than an iamb. Yet not all trochees have stress in this language. This can only mean that feet are always headed, even if they are not accompanied by a gridmark. This is precisely what is assumed by Hyde.
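The allomorph-selection argument can likewise be rendered as a small parsing check. In the sketch below a foot is exactly two moras (one heavy syllable or two lights, head on the left), and an allomorph is accepted only if the whole word parses; the weight encodings of the suffixes are my simplification of Vaysman's data.

def exhaustive(weights):
    """True iff a string of syllable weights parses into bimoraic trochees:
    (H) for a heavy syllable, (L L) for two lights."""
    if weights == '':
        return True
    if weights[0] == 'H':
        return exhaustive(weights[1:])
    return weights[:2] == 'LL' and exhaustive(weights[2:])

def pick(stem, allomorphs):
    return [a for a in allomorphs if exhaustive(stem + a)]

# C-final stems take [-an] = 'L' or [-anən] = 'LL'; only long vowels are heavy
print(pick('L',  ['L', 'LL']))   # ['L'] : sam-an     = (LL), cf. (36a)
print(pick('LL', ['L', 'LL']))   # ['LL']: pasan-anən = (LL)(LL), cf. (36b)
print(pick('H',  ['L', 'LL']))   # ['LL']: saam-anən  = (H)(LL), cf. (36c)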

6  Conclusion

The parsing of a word by iambs differs in a number of ways from trochaic parsing. According to Kager (2007), this asymmetry is one of the great puzzles of metrical theory. Two proposals have been put forward to solve the problem. Gordon (2002) proposes to solve the asymmetry by eliminating its cause: foot structure. If there are no iambs or trochees, then we do not expect one type of foot to mirror the other's behavior. With foot structure eliminated, we are left with gridmarks only. Alignment constraints, interacting with constraints requiring rhythmic alternations, can account for the attested systems, and can also explain why certain systems are impossible. With respect to the distribution of stresses in the words of the world's languages, grid-only theories seem to be sufficient, and feet therefore seem to be superfluous.10 Nevertheless, feet are necessary. This becomes clear when we take a look at phenomena that go beyond the distribution of stress. There is a wide variety of phenomena leading us to conclude that it is unacceptable to eliminate foot structure from the representation of word stress. Hyde attempts to explain the parsing asymmetries with a theory that does include foot structure. He eliminates the classical constraints on foot form from the theory, proposing instead that iambs and trochees are derived by alignment of foot-heads to the word's edge. The pattern of feet thus created is subject to interpretation by the grid. Here the notion of overlapping feet plays a major role. With this device it is possible to explain the minimization pattern in odd-parity words. Overlapping feet can be motivated on independent grounds: overlapping feet solve the odd-parity problem. In Hyde's theory, the grid is mostly subservient to foot structure. Feet are always inherently headed, with or without a gridmark.

10 Van der Hulst (1984) draws a similar conclusion, arguing that grid-only frameworks can do anything a foot-based theory can do. Yet van der Hulst does not eliminate foot structure. He proposes that main stress is generated by foot structure, whereas secondary stresses are created by the grid. In his view, this explains why the phenomena that are related to main stress differ systematically from the type of phenomena that are related to secondary stresses.

REFERENCES

Altshuler, Daniel. 2009. Quantity-insensitive iambs in Osage. International Journal of American Linguistics 75. 365–398.
Bye, Patrik. 1996. Scandinavian "level stress" and the theory of prosodic overlay. Nordlyd 24. 23–62.
Crowhurst, Megan J. 1996. An optimal alternative to conflation. Phonology 13. 409–424.
Crowhurst, Megan J. & Mark Hewitt. 1995a. Directional footing, degeneracy, and alignment. Papers from the Annual Meeting of the North East Linguistic Society 25. 47–61.
Crowhurst, Megan J. & Mark Hewitt. 1995b. Prosodic overlay and headless feet in Yidiny. Phonology 12. 39–84.
de Lacy, Paul. 2002. The interaction of tone and stress in Optimality Theory. Phonology 19. 1–32.
de Lacy, Paul. 2007a. The interaction of tone, sonority, and prosodic structure. In de Lacy (2007b), 281–307.
de Lacy, Paul (ed.) 2007b. The Cambridge handbook of phonology. Cambridge: Cambridge University Press.
Dresher, B. Elan & Aditi Lahiri. 1991. The Germanic foot: Metrical coherence in Old English. Linguistic Inquiry 22. 251–286.
Elías-Ulloa, José A. 2006. Theoretical aspects of Panoan metrical phonology: Disyllabic footing and contextual syllable weight. Ph.D. dissertation, Rutgers University (ROA-804).
Gordon, Matthew. 2002. A factorial typology of quantity-insensitive stress. Natural Language and Linguistic Theory 20. 491–552.
Gordon, Matthew. 2003. The phonology of pitch accents in Chickasaw. Phonology 20. 173–218.
Hagberg, Lawrence R. 1993. An autosegmental theory of stress. Ph.D. dissertation, University of Arizona.
Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA: MIT Press.
Hammond, Michael. 1984. Constraining metrical theory: A modular theory of rhythm and destressing. Ph.D. dissertation, University of California, Los Angeles.
Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT.
Hayes, Bruce. 1984. The phonology of rhythm in English. Linguistic Inquiry 15. 33–74.
Hayes, Bruce. 1985. Iambic and trochaic rhythm in stress rules. Proceedings of the Annual Meeting, Berkeley Linguistics Society 11. 429–446.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hulst, Harry van der. 1984. Syllable structure and stress in Dutch. Dordrecht: Foris.
Hyde, Brett. 2001. Metrical and prosodic structure in Optimality Theory. Ph.D. dissertation, Rutgers University (ROA-476).
Hyde, Brett. 2002. A restrictive theory of metrical stress. Phonology 19. 313–359.
Hyde, Brett. 2006. Towards a uniform account of prominence-sensitive stress. In Eric Baković, Junko Itô & John McCarthy (eds.) Wondering at the natural fecundity of things: Essays in honor of Alan Prince, 139–183. Santa Cruz: Linguistics Research Center, University of California, Santa Cruz. Available at http://repositories.cdlib.org/lrc/prince/8.
Hyde, Brett. 2008. The odd-parity parsing problem. Unpublished ms., Washington University (ROA-971).
Itô, Junko & Armin Mester. 1992. Weak layering and word binarity. Unpublished ms., University of California, Santa Cruz.
Kager, René. 1993. Alternatives to the iambic-trochaic law. Natural Language and Linguistic Theory 11. 381–432.
Kager, René. 1995. The metrical theory of word stress. In John A. Goldsmith (ed.) The handbook of phonological theory, 367–402. Cambridge, MA & Oxford: Blackwell.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Kager, René. 2007. Feet and metrical stress. In de Lacy (2007b), 195–227.
Karvonen, Daniel. 2005. Word prosody in Finnish. Ph.D. dissertation, University of California, Santa Cruz.
Karvonen, Daniel. 2008. Explaining Nonfinality: Evidence from Finnish. Proceedings of the West Coast Conference on Formal Linguistics 26. 306–314.
Kennedy, Robert. 2005. The binarity effect in Kosraean reduplication. Phonology 22. 145–168.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8. 249–336.
Majors, Tivoli J. 1998. Stress-dependent harmony: Phonetic origins and phonological analysis. Ph.D. dissertation, University of Texas, Austin.
McCarthy, John J. & Alan Prince. 1993. Generalized alignment. Yearbook of Morphology 1993. 79–153.
Pearce, Mary. 2006. The interaction between metrical structure and tone in Kera. Phonology 23. 259–286.
Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19–100.
Prince, Alan. 1985. Improving tree theory. Proceedings of the Annual Meeting, Berkeley Linguistics Society 11. 471–490.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Selkirk, Elisabeth. 1980. Prosodic domains in phonology: Sanskrit revisited. In Mark Aronoff & Mary-Louise Kean (eds.) Juncture, 107–129. Saratoga: Anma Libri.
Selkirk, Elisabeth. 1984. Phonology and syntax: The relation between sound and structure. Cambridge, MA: MIT Press.
Smith, Laura C. 2004. Cross-level interactions in West Germanic phonology and morphology. Ph.D. dissertation, University of Wisconsin, Madison.
Smith, Neil V. & Klaske van Leyden. 2007. The unusual outcome of a level-stress situation: The case of Wursten Frisian. North-Western European Language Evolution 52. 31–66.
Vaysman, Olga. 2009. Segmental alternations and metrical theory. Ph.D. dissertation, MIT (ROA-1011).
Zoll, Cheryl. 2004. Positional asymmetries and licensing. In John J. McCarthy (ed.) Optimality Theory in phonology: A reader, 365–378. Cambridge, MA & Oxford: Blackwell.

42  Pitch Accent Systems

Harry van der Hulst

1  Introduction

This chapter deals with the typology of word prosodic systems and, specifically, discusses the notion of “pitch accent (language),” asking whether there is such a class of pitch accent languages distinct from “stress languages” and “tone languages.” Several issues will turn out to be crucial. Firstly, there is the issue of recognizing (or not) a notion of accent which could be said to underlie both pitch accent and “stress” (or indeed stress accent), and perhaps other phenomena which are not frequently referred to as accentual (such as phonotactic asymmetries). Secondly, there is the question as to whether we wish to distinguish between pitch as a non-distinctive and thus perhaps strictly phonetic property (found in phonetic implementation) and pitch as the exponent of a phonological category (namely tone). Thirdly, there is the possibility of having tone, stress, and accent (in various combinations) “side by side” within the same language, which raises the question of how these notions interact in any given language. The structure of this chapter is as follows. In §2 I will introduce the basic notions and definitions. §3 will briefly discuss examples of languages that have been referred to as pitch accent languages, where accent is apparently realized in terms of non-distinctive pitch. In §4, I examine cases in which tone realization or tone distribution has been said to depend on accent (or stress), a class of languages that is also often included in the pitch accent type. §5 and §6 focus on the different ways in which alleged pitch accent languages have been analyzed, with or without using the notion “accent.” In §7, I define the notions accent and stress as distinct phonological entities and suggest that stress languages may or may not be accentual. In §8 I offer some conclusions.

2  Accent, tone, and stress: Definitions and usage

2.1  Accent and stress

For many languages, researchers have reported word-level "prominence," associated with a specific syllable in the word, which is called "stress" (an English term) or "accent" (the term used in, for example, French or German) (see also chapter 41: the representation of word stress). In literature in English on the subject, both "stress" and "accent" have been used for word-level prominence, which has led to a good deal of confusion, in particular because there are writers who use the terms for different things. Cutler (1984), for example, regards "stress" as a property of words and "accent" as a property of sentences. There is thus a need for clarity on how these two terms are used.

2.2  Accent and its cues

On closer scrutiny, the informal notion of “prominence” can be divided into two distinct phenomena. On the one hand, we have the location of the prominent syllable (e.g. penultimate; final if the final syllable has a long vowel, otherwise penultimate; etc.) and on the other hand, there are phonetic (and phonotactic) cues that signal the location of the prominent syllable (chapter 39: stress: phonotactic and phonetic evidence). In one (fairly old) terminological tradition, the locational aspect of prominence is referred to as accent. The characterization of the accent (location) is essentially sequential (or syntagmatic); only one syllable within the relevant domain can have this property. This is what Martinet (1960) and Garde (1968) refer to as the contrastive or culminative function of “accent,” a term mainly used by Trubetzkoy (1939). The realizational aspect of prominence is, in a sense, paradigmatic (cf. van Coetsem 1996): there are various (not necessarily incompatible) phonetic and phonotactic means for cueing the accent. Some languages may favor one specific cue (e.g. pitch or duration), but several cues may conspire to manifest the accent. This division of “prominence” correlates with traditional terminological systems such as musical accent vs. dynamic accent or (with much the same meaning) pitch accent (systems) vs. stress accent (systems) (see Fox 2000: ch. 3 for an excellent general review of the notion accent; also van Coetsem 1996; van der Hulst 1999b, 2010b). In each case, the modifier of the head noun (“accent”) says something about the way in which the accent is “manifested” or “realized.” In this chapter I will focus on relationships that involve accent and pitch, whether used distinctively (in terms of contrastive tones) or non-distinctively. However, I will also consider the relationship between accent and stress.

2.3  Word prosodic types

While in some languages pitch is a property of words, all languages use pitch features within an intonational system, a system that aligns "sentences" with a melody that can be defined in terms of pitch events that mark boundaries of (syntactic or prosodic) units as well as the informational packaging of the utterance with reference to the notion "focus" (Bolinger 1982; Gussenhoven 2004; chapter 32: the representation of intonation). At the same time there are languages in which pitch is a property of "words." Within this group of languages we commonly find the labels in (1b) and (1c). The label "stress" in (1a) is reserved for languages that need no specification of pitch at the word level, although, like all other languages, they use pitch for intonation purposes.

(1)  a.  Stress (or stress accent)
     b.  Pitch accent
     c.  Tone


There is, however, a great deal of controversy concerning the use of the terms tone and pitch accent, and, for that matter, the term stress.1 Hyman (2001, 2006, 2009) makes a case for treating systems that we label stress and tone as “prototypes,” meaning that languages that belong to one or the other (or both) type(s) display one or more specific defining properties.2 “Pitch accent,” according to Hyman, is not a prototype, but rather a label for a large class of hybrid systems that mix “tone” and “stress” properties in various ways, or systems that are clearly tonal, although displaying various restrictions on the distribution of tones. In effect, Hyman regards the notion accent as unnecessary, whether as a formal mechanism in analysis or as a prosodic type. Other researchers (such as Gussenhoven, e.g. 2004) who also reject the idea of “pitch accent languages” nonetheless recognize the notion of accent as an analytic device. In this chapter these views will be discussed and compared to views that attribute a fundamental role to the notion accent.

2.4  Definitions and use of tone

A traditional way of defining the notion tone is in terms of "distinctive use of pitch." Thus, if a language uses pitch to distinguish different otherwise identical morphemes, pitch has a phonological or contrastive (distinctive) status. The following frequently quoted definition captures what is perhaps the canonical use of distinctive pitch:

    A tone language may be defined as a language having lexically significant, contrastive, but relative pitch on each syllable (Pike 1948: 3).

If tones are distinctive on all syllables (possibly like properties such as frontness, height, or roundness) we can say that the distribution of tones is unrestricted. Most researchers, however, agree that there is no reason to limit the term tonal language to cases in which the distribution of tones is entirely unrestricted (see chapter 45: the representation of tone). Presumably, all tonal systems show restrictions resulting from tonal spreading or assimilation (Schuh 1977; Hyman 2007), from using a limited set of tonal melodies which are properties of morphemes rather than of syllables (Leben 1971; Goldsmith 1976b; Halle and Vergnaud 1982), from the avoidance of sequences of identical tones (dissimilatory or OCP effects), or indeed from relations between tone distribution and accent (or "stress") (see §4). Also, it is not uncommon to find that the full range of contrasts is not found in affixes (as opposed to roots or stems) (chapter 104: root–affix asymmetries). Finally, initial or final syllables may fail to bear tonal contrast (sometimes to leave room for intonational tones or for other, perhaps "perceptual," reasons; chapter 98: speech perception and phonology).3 Since it would be unwise to maintain the strictness of Pike's definition (according to which perhaps there is not a single tonal language), van der Hulst and Smith (1988) quote the much more liberal definition of Welmers (1973: 2):

    A tone language is a language in which both pitch phonemes and segmental phonemes enter into the composition of at least some morphemes.4

1 Typological studies of word prosodic systems are numerous: e.g. Trubetzkoy (1939); Hockett (1955); Garde (1968); Meeussen (1972); Goldsmith (1976a, 1988); Greenberg and Kaschube (1976); Hyman (1977, 1978, 1981, 2006, 2009); Lockwood (1982); Clements and Goldsmith (1984); Beckman (1986); Clark (1987, 1988); Haraguchi (1988); Hollenbach (1988); van der Hulst and Smith (1988); Mock (1988); Wright (1988); Hayes (1995); van der Hulst (1999a, 2010c); de Lacy (2002); Duanmu (2004).

2 Here Hyman avoids the term "stress accent," presumably because he no longer (cf. Hyman 1977) recognizes the label "pitch accent" as a useful one and thus essentially wants to eliminate the notion accent altogether.

3 Suárez (1983: 52) observes that in Huichol and Mazahua there is no tone contrast on the last two syllables or the last syllable, respectively. In these languages, inherent lexical tones are removed to free up space for intonational tones.

Note the use of the term pitch phoneme (chapter 11: the phoneme), which suggests that Welmers requires that pitch is used contrastively, a crucial point to which I return below. This definition includes languages in which there are tonal contrasts in certain positions, or even in only one position, in some morphemes. With this broader definition, tonal languages can be ranked on a scale of tonal density (Gussenhoven 2004), which indicates how many word positions have how much tonal contrast. In a sense, such a scale indicates the relative functional load of tone properties. Stretching Gussenhoven's notion, we could say that relative density arises not only in the syntagmatic dimension (depending on how many positions display tonal restrictions), but also in the paradigmatic dimension (depending on the number of contrastive options per position):

(2)  Tonal density matrix

     T1   +   +   +   +   +   +
     T2   +   +   +   +   +   +
     T3   +   +   +   +   +   +
          x   x   x   x   x   x   (tone-bearing units)

Each potential minus would indicate a restriction on the distribution of a distinctive tone. However, no matter how dramatic the restrictions, as long as there is tonal contrast (i.e. distinctive use of pitch), phonological tones must be specified in the lexical entries. The smallest tonal system would have two tones, H and L. More extensive systems would add an M tone, or possibly two different M tones (high mid and low mid). In addition, systems can have contour tones (rise, fall, etc.) (chapter 45: the representation of tone).
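As a toy illustration of how such a matrix could be scored (the scoring is my own construction, not Gussenhoven's formal proposal), one can count positions that show any contrast (the syntagmatic dimension) and the total number of tonal options (the paradigmatic dimension):

def density(matrix):
    """matrix[t][p] = 1 if tone t may occur at tone-bearing position p."""
    positions = range(len(matrix[0]))
    options = [sum(row[p] for row in matrix) for p in positions]
    syntagmatic = sum(1 for o in options if o > 1)   # positions with a contrast
    paradigmatic = sum(options)                      # total options over positions
    return syntagmatic, paradigmatic

full = [[1] * 4, [1] * 4, [1] * 4]      # 3 tones contrast at all 4 positions
restricted = [[0, 0, 0, 1],             # H only word-finally
              [1, 1, 1, 1]]             # L (or absence of H) everywhere
print(density(full))          # (4, 12): maximally dense
print(density(restricted))    # (1, 5): contrast at one position only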

2.5  Culminativity and obligatoriness

Another frequently cited term in this context is "restricted tone language," introduced by Schadeberg (1973) and Voorhoeve (1973). This term, too, would seem to indicate a scale of restrictiveness, although Voorhoeve introduced it in the context of Bantu languages whose tonal system is so severely restricted (up to one H per word in a H/L system) that he suggested that an accentual analysis might be considered (chapter 114: bantu tone). Indeed, adding syntagmatic and paradigmatic restrictions on the distribution of tone together, one can see that a language with a H/L contrast which allows at most one H tone per word could easily permit an accentual analysis in which the H "tone" is regarded as the predictable pitch cue of an accent, even in a case where there is no indication of any additional independent cues for this accent.

4 Strictly speaking this excludes cases in which a language has tonal affixes without having affixes or other morphemes that combine tone and segmental properties.


But what is "accent," and how is it formally represented? Hyman (2009) formulates two necessary properties of what he calls stress, which I will take as a point of departure for establishing what might be seen as characteristics of accent, if these notions are going to be distinguished. One "property" is that each "word" can have stress or accent at most once (only one syllable can be stressed or accented) and, additionally, each word must have it at least once. These two properties, following Hyman (2006, 2009), can be referred to as:

(3)  a.  Culminativity (at most one)
     b.  Obligatoriness (at least one)

Let us now ask whether the two properties in (3) must be regarded as necessary properties of accent. An issue that goes to the heart of what is often seen as problematic for the notion “pitch accent” is that languages which allegedly have a pitch accent system, and thus accent, sometimes have (lexical) words that appear to be unaccented (see the discussion of Tokyo Japanese in §6). This, however, is only problematic if obligatoriness is stipulated to be a necessary property of accent. We could investigate a more liberal interpretation of accent, in which unaccented words are permitted in an accentual language. This, of course, has important consequences, because it opens the door to using the presence vs. absence of accent as a contrastive option and thus to analyzing alleged tonal languages that have a H–L contrast as fully accentual languages, with H as the exponent of accent and L as the lack of accent. We might then also question whether culminativity is a necessary requirement for describing accent. If culminativity is not required, even “H/L” languages that allow multiple H “tones” could be analyzed as fully accentual. Allowing words to have multiple accents separates the notions stress and accent even more dramatically than just giving up obligatoriness for accent. Still, if accent is not the same thing as stress, there is no a priori reason for believing that any properties of the latter need to be true of the former. I return to these issues in §5.3.
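The two properties are trivially checkable predicates once a word is modeled as a sequence of accent marks; the sketch below simply restates (3), with the relaxed options just discussed visible as the two ways a word can fail:

def culminative(word):    # (3a): at most one accented syllable
    return sum(word) <= 1

def obligatory(word):     # (3b): at least one accented syllable
    return sum(word) >= 1

for name, w in [('canonical',  [0, 1, 0]),
                ('unaccented', [0, 0, 0]),    # fails obligatoriness only
                ('double',     [1, 0, 1])]:   # fails culminativity only
    print(name, culminative(w), obligatory(w))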

2.6  Representational issues

Answers to the question of whether or not the properties in (3) are definitional of accent have repercussions for, or are implicit in, the manner in which accents are formally represented. In one type of approach the relevant syllables are marked with an “accent mark,” as is common in dictionaries and in autosegmental theory (e.g. the “star” of Goldsmith 1976a), or in terms of a segmental feature, as in the phonological theory of Chomsky and Halle (1968). In this “lexicographic” approach there is no commitment to culminativity or obligatoriness. A different formal approach is to provide the string of syllables with a headed tree structure, as proposed in various versions of metrical theory (Liberman and Prince 1977) and dependency phonology (Anderson and Ewen 1987) (see also chapter 40: the foot). Metrical structures have one designated terminal unit, the head of the word, which counts as the (primary) “stress.” This notation (assuming that all syllables must be grouped in one structure) implies culminativity, but not necessarily obligatoriness, because it does not follow from the notation that each word must be provided with a metrical tree.


However, rather than seeing asterisks and trees as competing mechanisms, we should entertain the idea that they are complementary, in that the former represent accents, while the latter represent stress. This point is acknowledged by Anderson and Ewen (1987), who, in addition to headed tree structures, also use asterisks to indicate what we might call “potential heads.”5 I will return to this point in §7.

2.7 Problems with the notion "pitch accent"

We have been considering a use of the term accent as an abstract mark of a position that can be cued by various phonetic properties, "stress" being one of them. Beckman (1986) refers to languages that are not stress accent languages as "non-stress accent" languages (thus avoiding the term "pitch accent language"). This, of course, is compatible with the idea that in many non-stress languages pitch is the most salient property of accent. Van der Hulst (1999a, 2010b) points out that if we maintain the term "pitch accent language" we might also expect to find languages that can be labeled as "duration accent" languages (if duration is the only cue). On this view, pitch accent languages are languages in which accent is (mainly) cued by phonetic pitch.

There are, in fact, various factors that make the use of this term problematic. One factor is, obviously, that people may simply define the term differently. For example, as we will see in §4, tonal contrast is often limited to specific syllables in the word; cases of this sort have been analyzed by identifying a notion "accent," with association of tones being dependent on this accent. While in this case the presence of tone can be said to function as a cue of accent, the cue is not a phonetic, but rather a phonological, fact (namely the phonotactic distribution of tones). The fact that the possibility of tonal contrast may signal the accent location is part of a much more general pattern, found in many languages, where accented syllables display contrastive or structural options that are exclusive to a particular syllable (see Downing 2010; van der Hulst 2010b).6 Pursuing the terminological path that we started out on above, we might refer to such cases in which tonal contrast is limited to the accented syllable as tone accent (or tonal accent) languages, rather than pitch accent languages. It is apparently the case that accented syllables can be referred to by the phonology as well as by the phonetic implementation system. In fact, accents can be referred to by other grammatical modules as well, e.g. the intonation system. Does this mean that we can refer to English as an "intonation accent" language? Languages cannot be put in a single box as far as cues for accent are concerned.

Tonal accent systems, then, differ from pitch accent systems if we agree that in the latter pitch is not used distinctively. However, some writers (e.g. Downing 2010) use the term "pitch accent" for any system in which pitch properties (whether distinctive or not) enter into a relationship with accent or stress. This would include not only what is referred to here as a pitch accent or tone accent language, but also another class of languages which have both tone and accent, in which accent (or "stress") is assigned with reference to tone. Downing's use of the term pitch accent is thus much broader than the one considered above.

5 Another formal notation (also proposed in Liberman and Prince 1977) is the metrical grid, which does not imply culminativity. See chapter 41: the representation of word stress for extensive discussion.
6 This relates to the notion of positional faithfulness; cf. Beckman (1998).


Finally, we return to Hyman’s (2006, 2009) use of the relevant terminology. It would seem that he agrees that there are systems in which pitch could be analyzed as a predictable phonetic cue of a notion accent, but he argues that systems of that sort can always be analyzed as tonal.7 He refers to Gussenhoven’s analysis of Nubi, a language in which each word has precisely one syllable with high pitch. Gussenhoven argues that Nubi presents a case that can be analyzed as a pitch accent language or even as a stress language, but adds that it is also possible to propose a tonal analysis. If a tonal analysis is chosen, it follows that the fact that high pitch in Nubi is culminative and obligatory is “accidental” – Nubi is simply at the far end of a continuum of tonal languages in which the distribution of tones is restricted in various ways. It is important to realize that Hyman (2006, 2009; chapter 45: the representation of tone), like Gussenhoven (2004) (in line with the approach initiated in Pulleyblank 1986; cf. §5), adopts a definition of tone that is even more liberal than that of Welmers (see Hyman 2001 for the introduction of this definition): “A language with tone is one in which an indication of pitch enters into the lexical realization of at least some morphemes.” For these authors, then, the notion tone clearly no longer entails “tonal contrast” (i.e. distinctivity). For this reason, they maintain that a language like Nubi, although it could be analyzed as a pitch accent system, can also be tonal.

2.8 Intonational pitch accents

Before we examine some cases of (alleged) pitch accent systems, let us consider one other use of the term pitch accent. The term is also used in the intonation literature where, following Bolinger (1982), intonational events that associate to phrasal accents (usually called phrasal "stresses") are called pitch accents. In the autosegmental-metrical tradition of Liberman (1975), Bruce (1977), Pierrehumbert (1980), Goldsmith (1981), Gussenhoven (2004), and Ladd (2008) (see also chapter 32: the representation of intonation; chapter 50: tonal alignment), intonational pitch accents are phonological tones (H, L or some combination). The reason for this is that in many intonational systems that have been studied within this model, there are tonal contrasts at the intonational level, because different tones or tone combinations have different meanings. However, if in some language each phrasal accent associates with the same pitch event, it would be perfectly possible to analyze that pitch event as a direct phonetic interpretation of the phrasal accent without postulating an intervening phonological tone.8

2.9 The issue of distinctivity

Analyses within the autosegmental-metrical tradition are not, however, much concerned with distinctivity (or indeed with a distinction between "phonological" and "phonetic" phenomena), and all phrase-level pitch phenomena are usually analyzed in terms of "tones" (mirroring Hyman's general use of tones at the word level, which also ignores distinctivity).

7 This is, in fact, how he uses the term in Hyman (1977).
8 It may be the case that languages that have been described or listed as word-level pitch accent systems may be phrasal pitch accent systems. Since the patterns listed for words are often based on elicitation of citation forms, we cannot be sure that the observed word prosodic properties are word level or phrase level. See van Zanten et al. (2010) and Gordon (forthcoming).


It could be argued that definitional decisions are, paradoxically, not the crucial issue. Does it really matter whether we “call” Nubi a tone language or a pitch accent language or even a stress language? What is of importance is how specific systems are analyzed and which theoretical tools are used. However, we must also be aware of the bigger issue regarding how we see “phonology” as distinct from and interacting with “phonetic interpretation or implementation.” A traditional stance would be to maintain that using a formal object “H” in the phonology entails that this unit has a contrastive function within the linguistic system (chapter 2: contrast). If pitch is distinctive, we are dealing with phonological entities such as /H/ and /L/, etc. If one sets up the system of phonetic implementation by translating a non-tonal property X (e.g. accent) into a phonetic property “H” which is implemented in terms of relative F0, we seem to be dealing with [H] (rather than with /H/).9

3 Some (alleged) pitch accent systems

In this section I provide references to languages that have been analyzed as pitch accent systems or that have played an important role in the treatment of systems that have pitch or tonal cues correlating with accent.

3.1 A tour around the world

The chapters in van der Hulst et al. (2010b) offer a survey of word accentual systems in the world’s languages. I refer here to the chapters on languages in the Americas (van der Hulst et al. 2010a; Rice 2010; Wetzels and Meira 2010) for many examples of languages that have been described as realizing accent exclusively or mainly in terms of pitch. Several additional examples can be found in the chapters on Papuan languages (van Zanten and Dol 2010), Asian languages (Schiering and van der Hulst 2010), and European languages (van der Hulst 2010a), specifically Caucasian languages (see Kodzasov 1999). Even though these surveys do not prove that the category of pitch accent languages is a genuine prosodic type, it is not without significance that so many systems have been identified with obligatory and culminative (and non-distinctive) high pitch.

3.2 Basque and Japanese

The languages discussed in the following two sections differ from the previous cases in that explicit reference to unaccented words, i.e. lack of obligatoriness, is required. Yet in both cases it would seem that the alleged accents have distributional properties that are very similar to those of stress (accent), which supports a pitch accent analysis.10

9 Cf. Clements (2001, 2009), who defends a broader justification for recognizing phonological features than only distinctivity. If a phonetic property is in some sense "salient," this would, in his view, justify postulating a phonological feature.
10 Another case which is similar to these two languages is Korean, which, in its many dialects, displays a rich variety that is reminiscent of the Japanese situation in particular; see Fukui (2003) and for a summary Schiering and van der Hulst (2010).


3.2.1 Basque

The dialects of Basque present a great diversity of word prosodic systems (see Hualde 1999). Gussenhoven (2004: ch. 9) presents an analysis of Northern Bizkaian Basque with reference to the Gernika and Lekeito dialects. Both have accented and unaccented roots, the former being in the minority. There are inflectional and derivational suffixes that are accented or pre-accenting. In Lekeito, if a word has an accent, this accent always ends up on the penultimate syllable. In Gernika, the leftmost (non-final) accent prevails; this is the more common case in Basque dialects. In Lekeito, unaccented words are grouped with an accented word to their left or right, whereas sequences of unaccented words together form a single domain. Each such domain either has an accent (if it contains an accented word) or is unaccented. Unaccented domains receive a default final accent in certain syntactic positions, namely at the end of the sentence or before the finite verb. Each accent, whether lexical or default, is associated with a HL pitch accent. The left edge of the accentual domain is marked by a LH boundary sequence; a high plateau is found between the boundary H and the H of the pitch accent.

Systems of this sort seem obvious candidates for accentual analyses, which of course raises the question of whether they must be analyzed accentually. One argument that could be made for an accentual approach is that in the various dialects we note a variety of accent locations (ranging from lexical to rule-governed) which is very reminiscent of the distribution of stress in "stress accent languages." The second argument again involves the fact that pitch is non-distinctive in Basque dialects. Note that in Basque, unaccented words are provided with default accent, at least in some cases.
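The accent resolution just described lends itself to a procedural statement. The sketch below is a deliberate simplification of my own (the function name and flag are hypothetical; domain formation and the syntactic conditions on the default accent are collapsed into a single default_context flag).

```python
# A deliberately simplified sketch of the Northern Bizkaian facts above.

def surface_accent(n_syllables, lexical_accents, dialect, default_context=False):
    """Return the 1-based syllable bearing the HL pitch accent, or None."""
    if lexical_accents:
        if dialect == "Lekeito":
            return n_syllables - 1            # any accent surfaces on the penult
        # Gernika (the common Basque pattern): leftmost non-final accent wins
        non_final = [a for a in lexical_accents if a < n_syllables]
        return min(non_final or lexical_accents)
    # unaccented domain: default final accent only in certain syntactic positions
    return n_syllables if default_context else None

print(surface_accent(4, [3], "Lekeito"))        # -> 3 (penult)
print(surface_accent(4, [2, 3], "Gernika"))     # -> 2 (leftmost non-final)
print(surface_accent(4, [], "Gernika", True))   # -> 4 (default final accent)
```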

3.2.2 Japanese

We also find a broad array of word prosodic systems among the dialects of Japanese (cf. Haraguchi 1977). An overarching property of all systems is the relevance of pitch at the level of the "word," or, as some researchers prefer to put it, the "accentual domain." An interesting overview in the context of autosegmental theory of dialectal differences is offered by Haraguchi (1977, 1988), who divides Japanese dialects into two broad categories: pitch accent systems and unaccented systems. Cross-classifying with this dichotomy, he suggests a "universal" inventory of melodies (H, L, HL, LH, and LHL), from which a system may select at most one or two. In addition to the choice of one or more melodies, the differences among dialects depend on:

(4) a. The location of accent/H: fixed or free.11
    b. The spreading of H: no spreading, rightward or leftward.

11 In §6 we will discuss the way accents are distributed in Tokyo Japanese, which is partly lexical and partly rule-based.

Thus, in Tokyo Japanese, the H tone spreads leftwards (leaving an initial mora low, possibly due to a boundary L tone that comes with the left edge). We will focus on the pitch aspect of Tokyo Japanese in §6. The system of Tokyo Japanese is such that the constituents of words (stems, affixes) can be accented or unaccented (or, in the case of affixes, pre-accented). When more than one accent is present in the accentual domain (which can be larger than the word and therefore needs careful definition; Gussenhoven (2004) calls it the a-domain), the first (or leftmost
accent) predominates, i.e. will attract the high pitch/tone. If no accent is present, the high pitch occurs on the last (rightmost) syllable (and spreads from there). This "First/Last" pattern constitutes a system that is reminiscent of so-called unbounded stress systems (Hayes 1995). In fact, Haraguchi (1988) notes that three of the possible unbounded patterns occur in Japanese dialects (see also chapter 120: japanese pitch accent).

(5) a. Systems with unaccented words
       First/First12   Kumi
       First/Last      Tokyo, Osaka
       Last/First      —
       Last/Last       Hirosaki

    b. Systems without unaccented words
       First           Fukuoka
       Last            —

12 This can be glossed as: "Associate a tone with the first accented syllable, or, if no accent is present, with the first syllable."

Note that systems without unaccented words have no default clause. Haraguchi (1977, 1988) also recognizes unaccented systems, i.e. systems in which no word is accented. He mentions Sendai, Miyakonojo, and Kagoshima. In such systems the tonal melody is associated either from left to right or from right to left in his analysis:

(6) Systems with only unaccented words
    First
    Last

For these systems, tones are associated to words in terms of association conventions that make no reference to accents, but rather to word edges. These same conventions are invoked for unaccented words in accentual languages (as in (5a)), which implies that in such systems tones are associated partly to accents and partly directly (i.e. without "intervening" accents). In all dialects that use just one melody, the question can again be raised whether this "melody" is a phonological entity or entirely due to phonetic interpretation. Haraguchi (1988) does not consider this issue, but it could be argued, as before, that only dialects that have more than one word melody are truly tonal.
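The four unbounded settings in (5a), together with the edge-based conventions of (6), can be summed up in a single association function. The following sketch is my own encoding rather than Haraguchi's notation; it assumes a word is given as a syllable count plus a set of 0-based accent positions.

```python
# A sketch of the First/Last parameters (my own encoding, not Haraguchi's).

def tone_site(n_syllables, accents, accented_choice, default_edge):
    """Return the 0-based syllable to which the melody's tone associates."""
    if accents:
        # accented word: pick the first or last accent
        return min(accents) if accented_choice == "First" else max(accents)
    # unaccented word (or fully unaccented system): refer only to word edges
    return 0 if default_edge == "First" else n_syllables - 1

# Tokyo/Osaka instantiate "First/Last" in (5a):
print(tone_site(3, {1}, "First", "Last"))    # -> 1: leftmost accent wins
print(tone_site(3, set(), "First", "Last"))  # -> 2: default final syllable
# Kumi is "First/First", as glossed in note 12:
print(tone_site(3, set(), "First", "First")) # -> 0: default initial syllable
```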

3.3 Bantu languages

Many Bantu languages are commonly described as having both tone and accentual properties, while a few (such as Swahili) have lost tone, and retain only "stress" (chapter 114: bantu tone). Bantu word prosodic systems have always been of special interest in the debate regarding the appropriate analysis for languages that have both significant word-level pitch movement and indications that accent plays a role as well; see Schadeberg (1973), Voorhoeve (1973), Goldsmith (1976a, 1988), Hyman (1978, 1981, 1982, 1989), Clements and Goldsmith (1984), Odden (1988), and especially Downing (2010). The accentual analysis of Bantu languages was promoted by Goldsmith (1976b, 1988), although the approach has a long history (see the introduction in Clements and Goldsmith 1984 for a historical perspective).

4 Systems with accent and tone

Although the focus of this chapter is on pitch correlates of accent, we must be aware of the fact that in systems that display both tone and accent several relations between these two phenomena are possible (Hyman 1977; van der Hulst and Smith 1988; Fox 2000; de Lacy 2002; Wetzels and Meira 2010; among others):

(7) a. Accent and tone are independent.
    b. Accent is dependent on tone.
    c. Tone is dependent on accent.

De Lacy (2002) proposes a system of constraints and accounts for the different relations in terms of different rankings. In this section I will focus on the systems in which tone is dependent on accent; for a discussion of the other two cases, see van der Hulst (2010c). In §2.7, I used the term tonal accent systems for systems in which the distribution of tone is determined by accent, but we need to be more precise on exactly what kind of relationships may exist.

The distribution of distinctive tones can be restricted for a variety of reasons (see §2). While the factors that lead to restrictions in a specific system may be unrelated to the notion "accent" (which may or may not be independently present in the language in question), there comes a point where the tonal system is so restricted that an analysis is possible in which a specific syllable can be identified as "accented" and, as such, function as the domain for the association of the tonal distinctions. If the notion of accent was already present on independent grounds, the common tendency towards reduction of tonal contrast in unaccented syllables may have been a factor in the emergence of a restricted tonal system, in addition to other factors which may have played a role. However, the processes that lead to restrictions may also "accidentally" give rise to an accentual interpretation. Since languages in which accent and tone interact are sometimes included in the class of pitch accent languages, these cases merit attention in this chapter.

The effect of accent on tonal contrast can be twofold. It may lead to reduction and eventually neutralization of an underlying contrast (chapter 80: mergers and neutralization). This is what is called here accent-driven reduction. It is commonly claimed that the elimination of tones in certain positions in Mandarin Chinese (chapter 107: chinese tone sandhi) is caused by the fact that tonal contrast can only be maintained in words with accent; see Yip (1980, 2002), Wright (1983), and Duanmu (2000), for analyses and references. A similar case can be found in the Ijo languages (Williamson 1988), where only the first word in a "tone group" retains its underlying tones. In both cases, unaccented words lose their lexical tones (which show up if the words are in an accented position). In these two examples we are dealing with accent at the compound or phrasal level, and thus with neutralization of all tones belonging to words that are not in an accented position.

Reduction of tonal contrast within polysyllabic morphemes may lead to restructuring, such that tones formerly associated to unaccented syllables either disappear entirely or are attracted to one particular syllable, the accented syllable. In either case, the end result is that tonal contrast only occurs on the accented syllable. When a restricted tone system is analyzed with reference to a notion of accent, we have accent-driven tonal distribution, and the system can be called a
tonal accent system. A question that arises in these cases is whether the accented syllable is cued merely by its attraction of tonal contrast, or, additionally, by other "stress-like" cues. I will consider this issue in §5.

I consider here some examples from Suárez (1983), as well as from Yip (2002), in their surveys of Meso/Middle American languages. Isthmus Zapotec has two tones which associate to the accented syllable and from there spread rightwards. "Pre-stress" syllables are low-toned. Suárez also mentions Northern Pame and Yaitepec Chatino as languages that have a tonal contrast only in the syllable that is said to be "stressed" (in both cases the "final" syllable, presumably of the stem). This can be compared to Huautla Mazatec, where every syllable can have contrastive tone. In between, we find cases where the contrast on certain non-accented syllables is limited. In Palantla Chinantec, for example, there is no tonal contrast on post-stress syllables.

Van der Hulst and Smith (1988) cite the case of San Juan Copala Trique, which illustrates how restricted tonal distribution can arise historically (cf. Hollenbach 1988; Yip 2002). In the Otomanguean family in general, we find a continuum of reduction of tonal contrast and, interestingly, an increase in tonal contrasts on the accented syllable. A case where accent has only mildly influenced tonal contrast is found in Cajonos Zapotec (Nellis and Hollenbach 1980). Of the four underlying tones H, L, HL, and M, only M is disallowed in unaccented syllables. In this case, then, we do not have a tone accent system, but simply a tone and accent system, with accent-driven reduction.

Among the languages in which the distribution of tone is dependent on accent, there is a subclass of cases in which tonal contrast is only found on or near accented syllables, not because tones have been neutralized in other positions, but simply because a tonal contrast historically developed only in this position. In these cases, the accented syllable, in addition to being an attractor for tonal association, has clear stress-like cues. Hence languages of this kind are both stress accent and tonal accent languages, with the proviso that the tone does not always associate directly to the accented syllable, but sometimes to a syllable near it (although this also depends on the details of the analysis). Two well-known cases of this sort are some of the Scandinavian languages and Serbo-Croatian. For discussions of the Scandinavian type I refer to Bruce (1999) and Gussenhoven (2004) (see also chapter 97: tonogenesis). For Serbo-Croatian, see e.g. Inkelas and Zec (1988).13

We must note that the co-occurrence of stress accent and a lexical pitch contrast enforces a tonal analysis of the latter. If the accent was not manifested in any other way than forming an anchor for lexical pitch, it could be argued that the opposition is one between accented words and unaccented words.

13 In his chapter on central Franconian tones, Gussenhoven (2004: ch. 12) discusses the emergence and representation of a tonal distinction that is very similar to the Scandinavian distinction; see also Gussenhoven and Bruce (1999) and Hermans (1994). We also find a similar contrast (due again to different historical factors) in Scottish Gaelic; see MacAulay (1992: 234–236).
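The two reduction patterns discussed in this section can be stated schematically. The sketch below uses invented toy forms, not data from the sources cited: word-internal accent-driven reduction keeps an underlying tone only on the accented syllable, while Ijo-style phrasal reduction keeps lexical tones only on the group-initial word.

```python
# A schematic sketch with invented toy forms (not data from the sources cited).

def reduce_word(tones, accent, default="L"):
    """Accent-driven reduction inside a word: only the accented syllable
    retains its underlying tone; elsewhere the contrast is neutralized."""
    return [t if i == accent else default for i, t in enumerate(tones)]

def reduce_tone_group(words, default="L"):
    """Ijo-style phrasal reduction: only the first word of the tone group
    retains its underlying tones."""
    return [w if i == 0 else [default] * len(w) for i, w in enumerate(words)]

print(reduce_word(["H", "L", "H"], accent=2))        # ['L', 'L', 'H']
print(reduce_tone_group([["H", "L"], ["H", "H"]]))   # [['H', 'L'], ['L', 'L']]
```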

5 The accent debate

5.1 Accents or no accents

We have so far discussed two possible interactions between accent and pitch or tone:

(8) a. Pitch is dependent on accent (pitch accent systems; §3).
    b. Tone is dependent on accent (accent-dependent reduction and distribution; §4).

The dividing line between the two types is distinctivity. If pitch is non-distinctive, i.e. if there is no tonal contrast, the system uses pitch to cue accent. But if there is tonal contrast, tones are involved. The Bantu systems mentioned in the preceding section have been analyzed as involving accent and tone. However, the question of whether the occurrence of tone contrast on one specific syllable requires a notion of accent cannot be taken for granted, even when tonal association seems to be limited to an "accent-like" position. Consider the case in which the alleged accented syllable has no independent property apart from being the locus of tonal contrast. One could then say that there really is no accent at all, and instead assume that the tones, being specified as a property of morphemes, associate to their specific locus directly, without first assigning an accent that attracts the tones. In this case we would accept that accent rules and tone association rules fall under the umbrella of a general theory of positional identification, and that the principles for positional identification are similar, if not the same, for both accent placement and tone association.

(9) a. Indirect (accentual) approach
       Step one: Accent goes to position X.
       Step two: Tones go to accent.
    b. Direct approach
       Step one: Tones go to position X.

If the direct approach is taken, the category of tonal accent systems reduces to tonal systems which are then further differentiated in terms of different principles of association (LR, RL, positional). Below we will see that the direct tonal approach can also be applied in systems that have unpredictable (i.e. lexically specified) loci for accents.

To what degree should tone placement and accent placement be allowed to overlap? If, for example, a tonal contrast occurs on the final syllable if it is closed, and otherwise on the penultimate syllable, should we say that there is a quantity-sensitive accent rule and that tones are attracted to the accent, or should we make the tonal association rules quantity-sensitive? The earlier literature on systems in which tone contrast is limited to specific syllables reflects the view that the theory of accent placement should not be duplicated in a theory of tone placement, so that in these cases accent is usually seen as playing a role in tonal association. On the other hand, Haraguchi (1977, 1988, 1991), as we saw in §3.2, makes a sharp distinction between tones that associate to accents and tones that associate directly to tone-bearing units at edges. The latter case involves only strict directional association in his analysis (from right to left, or from left to right). But, if peripheral tone-bearing units can be "extra-tonal," we can expand the set of cases in which tonal association can be direct. However, we do not expect direct tonal association to be dependent on syllable weight distinctions. Hence, if tones are attracted to positions that reflect weight criteria, one would be inclined to associate tones to accents which are assigned in a weight-sensitive fashion.
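The overlap is easy to see once both routes in (9) are written out. In the sketch below (a toy grammar of my own, in which "closed" is crudely approximated as "ends in a non-vowel letter"), the quantity-sensitive locus just mentioned is computed once, and the indirect and direct analyses differ only in whether an accent mediates between that locus and the tone.

```python
# A toy grammar of my own: the tonal locus is the final syllable if closed,
# otherwise the penult. Both routes in (9) compute the same surface result.

def weight_sensitive_site(syllables):
    """0-based index of the tonal locus."""
    closed_final = syllables[-1][-1] not in "aeiou"
    return len(syllables) - 1 if closed_final else len(syllables) - 2

def indirect(syllables, tone):
    accent = weight_sensitive_site(syllables)   # step one: accent to position X
    return accent, tone                         # step two: tone to the accent

def direct(syllables, tone):
    return weight_sensitive_site(syllables), tone   # tone directly to position X

word = ["pa", "ta", "kan"]
assert indirect(word, "H") == direct(word, "H")   # indistinguishable surface
print(direct(word, "H"))                 # (2, 'H'): closed final syllable
print(direct(["pa", "ta", "ka"], "H"))   # (1, 'H'): open final, so penult
```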


Given the inevitable overlap between accent placement and direct tonal association, Pulleyblank (1986) launched an attack on the use of accents and suggested replacing accents by tones. This approach, discussed in the next section, has since become dominant.

5.2 Giving up accents

The direct tone approach was promoted by Pulleyblank (1986) mainly for various African tonal systems and by Poser (1984) for Tokyo Japanese in particular. The most important of Pulleyblank's arguments against the use of stars (cf. Blevins 1993: 238) are as follows. Firstly, using stars and tone makes the system overly rich, in that we now predict rules referring to stars and to tones, and to both at the same time. Secondly, the inherent culminative nature of stars can also be found in systems that are arguably tonal and non-accentual, i.e. the asymmetry between accent and non-accent finds a counterpart in systems in which H tone contrasts with zero (ending up as default L). Another argument that could be mentioned is that accent (if equated with stress) is a property of syllables, whereas stars sometimes need to be assigned to moras. Finally, as we have already mentioned, the existence of unaccented words in accentual systems, or indeed of words with multiple accents which all surface, can be regarded as problematic.

Pulleyblank applied the direct tone approach to a variety of cases, not only cases in which the position of the tone is predictable, but also those where the former accent location is lexically specified; it was subsequently adopted in much other work (Clark 1988; Hyman 1989). We note again that this move entailed the use of phonological features for non-distinctive, i.e. predictable, properties. Even though the location of the alleged tone could be a lexical, unpredictable property, the phonetic nature of the entity (high pitch) would nonetheless be predictable.14

The abandonment of stars implies, firstly, that the systems discussed in §4, where H tone is restricted (perhaps up to the point of being culminative), but not obligatory, are now analyzed as tonal. However, a further-reaching conclusion is that "straightforward" pitch accent systems (discussed in §3), where high pitch is both obligatory and culminative, are also analyzed as tonal, despite the fact that pitch is not distinctive. This may or may not be considered a (conceptual) problem (cf. Clements 2001, 2009). Another issue is of course that we have rules for tonal association which duplicate the theory of accent which is independently needed for non-tonal accent systems. Abandoning accent cannot make the Scandinavian (and Serbo-Croatian) case purely tonal, since, as mentioned, in these cases we need the notion of stress (accent), independently of the tonal specifications.

14 This might suggest a "compromise" position in which "accents" are regarded as unspecified tonal "root nodes." In an approach which adopts a wider use of accents as possible ingredients of stress accent systems, this idea could not be maintained.

5.3 In defense of accents

If accents are rejected for pitch accent and restricted tone languages, the term "accent" can simply be abandoned in favor of the term "stress" (for stress accent languages). Hyman (2007) adopts this position, reducing the typology of word
prosodic systems to tone languages and stress languages. In this section I will focus on the use of accents in "tonal" systems and suggest a different approach, one which maximizes the use of accents at the expense not just of non-contrastive "tones," but also of (allegedly) contrastive tones.

The issue here does not revolve around languages that have obligatory and culminative high pitch such as Nubi. Here, the case for accent could be considered uncontroversial, if we believe that culminativity and obligatoriness are necessary for an analysis using accent (which essentially means that accent and stress are the same thing). Rather, let us focus on languages in which H tones violate one or both of these two constraints. I will argue that languages of this sort can also be analyzed as accentual (and thus non-tonal), if obligatoriness and culminativity, while perhaps being typical or even necessary for stress, are not required for accent. These points were anticipated in §2.5.

Let us first consider the type of case in which one syllable per word is either H or L, meaning that H is culminative, but not obligatory. In an accent-cum-tone analysis we would postulate an accent, and from there we have several options, depending on how we characterize the tonal contrast (H/L, H/zero, zero/L). But there is also another option. We can also analyze the contrast as accent vs. no accent (with accent giving rise to phonetic high pitch and low pitch as default). This means that we can analyze these alleged H/L systems as pitch accent systems as long as we "allow" that accentual languages have a class of unaccented words. Secondly, even when a "H/L" system allows multiple (non-adjacent) "H tones," this does not necessarily enforce a tonal analysis. If neither criterion proposed by Hyman (2007) for stress applies to accent, there is no reason why a word could not have more than one accent.

Concluding, if we push the use of accents to its limits (at the expense of using tones), this implies allowing unaccented words (violating obligatoriness) and multiple accents (violating culminativity). In this liberal view on accent, only languages that have more than a binary pitch contrast are necessarily tonal; in addition, we find languages in which culminativity and obligatoriness of accent are independently required (as in the case discussed in §4). One could say that "H/L" systems are the real pivotal cases, where, as linguists (or as language learners), we have a choice between an accentual and a tonal analysis.

There may be certain diagnostics that will tip the balance to either an accentual or a tonal analysis, and these need to be made explicit. More work is called for in this area. An accentual approach is favored when the distribution of accent falls squarely within a theory of accent placement that is independently needed for stress accent and other types of accentual languages. This, perhaps, makes an accentual analysis of those languages in which the alleged accents need to be assigned to moras undesirable (cf. the case of Somali; Biber 1981; Hyman 1981; Banti 1988). Another diagnostic pointing to tones is the need to refer to floating tones, on the assumption that the notion "floating accent" is suspect. Thirdly, it could be argued that tonal spreading processes might suggest tone, but implementational mechanisms can also be held responsible for pitch extending over several syllables.
A fourth potential way to discriminate between accent and /H/ tone would be to look at the details of phonetic implementation. One could conceivably argue that the phonetic pitch target of phonological categories like /H/ is more specifically defined than the pitch target of accents. Lastly, an accentual analysis could account
for cases in which we need rules that delete apparent accents in clash, or other rules that refer to accents, irrespective of their pitch or "tonal" correlates.

McCawley (1978) suggested that in some cases one might want to say that a system is accentual first, and then becomes tonal in the course of the derivation. The question is, however, whether the tonal end of the derivation is still part of the phonology or part of the phonetic implementation.

In this section I have suggested that accentual systems should be "allowed" to have unaccented words or multiple accented words, or even both. This seems to imply that obligatoriness and culminativity are not necessary properties of accent and that the case in which accents are both obligatory and culminative is just one of four possibilities; see §7.

6 The case of Tokyo Japanese

Tokyo Japanese is a language that is often mentioned as a prime example of a pitch accent system, but differs from both Nubi and Somali, while apparently sharing properties with each. Every word is said to have high pitch, but, at the same time, some words are accented and others are non-accented. Let us first consider the basic facts; references to various types of analyses are offered below.

In Tokyo Japanese, nouns have a specific pitch contour, which in some but not all cases involves LHL. In those words that have the full LHL pattern, the L occurs on the initial mora. This mora is followed by a high plateau, which may drop to low at some point. After the drop, remaining syllables are low. In some words the initial L is missing, and in other words the final L. Thus we find the following patterns, taking trisyllabic nouns to illustrate the possibilities:

(10) a. HLL    σσσ       inoti    'life'
     b. LHL    σσσ       kokoro   'heart'
     c. LHHL   σσσ(-σ)   atama    'head'
     d. LHHH   σσσ(-σ)   sakana   'fish'

This system can be and has been analyzed in many different ways; here we will specifically focus on accounting for the difference between (10c) and (10d). For (10a)–(10c) we have three options; depending on which one is chosen, various approaches can be suggested for class (10d):

(11)    (10a)–(10c)            (10d)
a.      Accent → /H/ → [H]     (i) default accent   (ii) /H/ to last σ   (iii) implementation
b.      /H/                                         (ii) /H/ to last σ   (iii) implementation
c.      Accent → [H]                                                     (iii) implementation

In (11a), the accent-cum-tone analysis, the (10d) case would be lexically unaccented. Since such words surface with an apparent H tone throughout (except for the initial mora), one could consider assigning a default final accent (11.i), which then triggers an H tone. This analysis encounters a problem, however. Words that have no lexical accent must be identifiable as such in the phonetic interpretation because there is a phonetic difference between (10c) and (10d). Roughly speaking,
(10c) is LHH and (10d) is LHM, with the stem-final "H" in the latter not quite as high as the other Hs in both examples. The two types of words also have different effects on following words (or "accentual phrases") inside the Intermediate Phrase: (10c) causes downstep, (10d) does not; cf. Haraguchi (1988); Pierrehumbert and Beckman (1988); Gussenhoven (2004). Alternative (ii), which would use the H tone assignment rule in (12), resolves this issue, because it can be argued that a H tone on an accented syllable and a H tone on an unaccented syllable are interpreted differently (cf. (5a)):

(12) Assign /H/ to the first accent or, if there is no accent, to the final syllable.15

15 Here I have added "first" to the rule because, if a word ends up having more than one accent, it is always the leftmost accent that attracts the H tone.
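Rule (12), combined with the leftward spreading and initial-mora lowering described in §3.2.2, suffices to derive all four patterns in (10). The sketch below is my own procedural rendering (syllable-based rather than mora-based, with an invented particle flag), not an analysis from the literature.

```python
# A procedural sketch of rule (12) plus leftward H spreading, assuming a toy
# syllable-based representation. "accent" is the 1-based position of the
# leftmost lexical accent (None for unaccented words); "particle" adds one
# enclitic syllable, which is where (10c) and (10d) visibly diverge.

def tokyo_pattern(n_syllables, accent=None, particle=False):
    total = n_syllables + (1 if particle else 0)
    tones = []
    for i in range(1, total + 1):
        if i == 1:
            # initial syllable is low unless it itself bears the accent (10a)
            tones.append("H" if accent == 1 else "L")
        elif accent is None or i <= accent:
            # rule (12): H docks on the accent (or, in unaccented words,
            # on the final syllable) and spreads leftwards
            tones.append("H")
        else:
            tones.append("L")   # post-accent syllables are low
    return "".join(tones)

print(tokyo_pattern(3, accent=1))                 # HLL   inoti
print(tokyo_pattern(3, accent=2))                 # LHL   kokoro
print(tokyo_pattern(3, accent=3, particle=True))  # LHHL  atama(-particle)
print(tokyo_pattern(3, particle=True))            # LHHH  sakana(-particle)
```

Note that without the particle the final-accented (10c) type and the unaccented (10d) type both come out as LHH, which is exactly the near-neutralization that the phonetic facts (LHH vs. LHM) force the analysis to confront.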

The difference between (10c) and (10d) could also be described if the pitch properties of the latter class are entirely accounted for in terms of phonetic implementation (11.iii), since this system could respond to the presence vs. absence of an accent. In the second (tone-only) approach, (10d) should be accounted for by method (iii), since method (ii), available in principle, would wrongly conflate (10c) and (10d), as there is no longer an accent to differentiate between them. Finally, in the third method, (11c) (accent only), both classes must be differentiated in the phonetic implementation: accent is interpreted as high pitch, while lack of accent is interpreted differently, although also in terms of elevated pitch.

I have briefly discussed three different approaches to a system such as that of Tokyo Japanese nouns, namely those mentioned in (11). All three approaches have been defended in the literature in one form or another. The tone accent approach (although often called "pitch accent approach"), (11a), comes closest to the analysis offered in McCawley (1968). Lexically, the language is accentual, but in the course of the derivation (presumably at the word level) tone is added, and from that point on the language is tonal. This approach was adopted as part of the autosegmental analysis of languages like Japanese and other monomelodic systems (cf. Goldsmith 1976b; Haraguchi 1977, 1988). The tone-only approach, (11b), has been advocated by Meeussen (1972), Poser (1984), Pulleyblank (1986), Clark (1987), and Pierrehumbert and Beckman (1988). Lockwood (1983) is a clear representative of (11c), the pitch accent analysis. To what extent do these linguists recognize the possibilities in (11), other than the one that they propose for Japanese, as valid for other languages? Clark (1988) rejects (11a) as a theoretical option, but claims that (11c) represents an independent possibility, alongside (11b). She makes a distinction between restricted tonal systems, i.e. (11b), and metrical pitch accent systems, i.e. (11c). The difference between the two types is claimed to be that only metrical pitch accent systems have the characteristics that we also find in non-tonal accent languages with respect to accent location (e.g. influence of syllable weight) and other phonetic cues that occur as the manifestation of accent. In her restricted tonal languages, the alleged accent is simply a tone at every level of representation (Clark 1988: 52).

An argument for analyzing Tokyo Japanese as tonal would be the fact that we have words like (10d), distinct from (10c). In a tonal analysis, this difference is expected, since words do not have to have a tone. But in an accentual analysis,
the class of unaccented words has been seen as unexpected (see also Duanmu 2004). I have shown, however, that accentless words are not an embarrassment if accents need not be obligatory.

Let us now ask how the high pitch profile in class (10d) could be analyzed as not resulting from a /H/ tone (supplied by default, i.e. (11.ii)), but rather as emerging in the phonetic implementation (i.e. (11.iii)). In the approach of Pierrehumbert and Beckman (1988) and Gussenhoven (2004) it is assumed that there are morphemes with lexical accents as well as morphemes that lack accents. Lexical accents are then associated with an H*L "pitch accent." Up to this point, this essentially follows the accent-cum-tone approach (i.e. (11a)). The high pitch pattern of unaccented words (e.g. (10d)) is due to an H "boundary tone." The claim is that the left edge of "words" is predictably assigned a LH boundary sequence. The L part of this boundary sequence is responsible for the low initial mora of words that do not have initial accent, and the H part is responsible for the high pattern of unaccented words. This H tone associates to the second mora and pitch lowers from there towards the end of the word, explaining why a word with a final accent and an unaccented word have a different high profile. In accented words, the final syllable is realized in terms of a high target for its H*L pitch accent, while an unaccented word's final syllable does not have a H target, but merely reflects the interpolation of the H boundary tone (which is on the left) toward the end of the word (where we find the boundary L of the next word, or, if the word is utterance-final, an utterance L boundary tone):

(13) a.                      x
         {( ta ta ta ta )}
            L H   _   _   H L  L

     b.  {( ta ta ta ta )}
            L H   _   _        L

Clearly, this analysis does not require a default accent rule for unaccented words (11.i), nor does it appeal to a default pitch accent (11.ii). Before we close this section, let us ask whether this analysis must be regarded as an accent-cum-tone approach or whether it can also be interpreted as accent-only. The fact that there are symbols like "H" and "L" in this approach does not mean that these entities are "lexical" in any sense. I suggest that the pitch accents can be seen as phonetic entities, hence [HL] rather than /HL/. The tonal entities are part of the vocabulary of the implementation system. These entities combine with the other tonal entities that are introduced at the post-lexical level, belonging to the intonational system. Intonational entities themselves may or may not be phonological. Boundary tones that predictably associate with certain types of boundaries without expressing any specific semantics are, likewise, phonetic entities, e.g. [L] or [H]. Phonetic implementation operates on the representations that the grammar supplies. When it comes to the specification of pitch, the following entities are minimally relevant: tones (lexical or intonational), "accents," and
prosodic boundaries. We can, if accents have high pitch, first assign [H] to accent and then do the actual implementation. The same applies to boundaries; we can assign a [H] to the left boundary of a certain type of prosodic phrase. Strictly speaking, we only do this in order to make the implementation rules refer to only one type of entity (namely tonal entities, whether phonetic or phonological) instead of to three different types of entities (tones, accents, and boundaries). In any event, it would seem that the pitch profiles of Tokyo Japanese words do not require reference to word-level tones.
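To make the last point concrete, here is a rough numerical sketch (the Hz values and the linear interpolation are invented purely for illustration) of how an implementation system that sees only accents and boundary tones could still distinguish a final-accented word from an unaccented one: only the former has a full [H] target on its last syllable.

```python
# A rough sketch with invented values: accented words get a full high target
# from their [HL] pitch accent; unaccented words merely interpolate from the
# left-edge boundary [H] down towards the following boundary [L].

def f0_track(n_syllables, accent=None, high=120.0, low=80.0):
    """accent: 0-based index of the accented syllable, or None."""
    track = [low]                              # boundary L on the initial syllable
    if accent is not None:
        for i in range(1, n_syllables):
            track.append(high if i <= accent else low)
    else:
        step = (high - low) / n_syllables      # gradual decline, no internal target
        for i in range(1, n_syllables):
            track.append(round(high - step * (i - 1), 1))
    return track

print(f0_track(4, accent=3))   # [80.0, 120.0, 120.0, 120.0]: full final H target
print(f0_track(4))             # [80.0, 120.0, 110.0, 100.0]: final is lower ("LHM")
```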

7 Accent and stress

A discussion of pitch accent systems forms part of the broader discussion of word prosodic systems. However, having made reference in the preceding sections to a view that recognizes both accent and stress as independent notions, this section will briefly discuss their properties and interaction.

We might entertain the idea that the alleged accents in Tokyo Japanese are simply "marks" which are to be compared to syllable weight. If this comparison holds, we might refer to the accents as "diacritic weight marks," and in that case there is no reason for every word to have one such mark, just as languages that have a contrast between CV (light) and CVX (heavy) syllables typically do not demand that each word has a "heavy syllable." Nor, for that matter, do we expect words to have only one "accent," since words also can have more than one heavy syllable. This interpretation of "accents" explains the occurrence of unaccented words and multiple accented words in specific systems.

A problem with this approach is that weight diacritics have characteristics that are more reminiscent of "stress" than of heavy syllables, notably predictability. This can be illustrated by taking a closer look at the accentual system of Tokyo Japanese. See chapter 120: japanese pitch accent, where it is shown that the Tokyo Japanese accent rule is very similar to the Latin-style English accent rule.

We now have a new problem. If the Tokyo Japanese accents are like weight, (a) why is their distribution predictable by rule, and (b) why is the rule so similar to the typical "stress" placement rules? And why are there accentual systems in which accent is culminative and/or obligatory?

To resolve these issues, van der Hulst (2009, 2010c) proposes to account for accent and "rhythm," which traditional metrical theories conflate in one representation, in two different modules. The accentual module accounts for the location of so-called primary accent or primary "stress" in systems where this location shows influence of lexical factors (exceptions, morphological classes, etc.), while the rhythmic module associates words with metrical structures. This separation of tasks allows a simpler version of the metrical system, which, as van der Hulst shows, cannot handle all varieties of primary accent locations in bounded systems and is simply not designed to deal with accent locations in unbounded systems.

The theory of accent that has been suggested is "liberal," in that accent is required neither to be culminative nor obligatory. While this allows four different kinds of pitch accentual systems, it might be argued that we now predict four kinds of any sort of accent system, whatever the cues for accent. Focusing on the specific case of stress accent languages, Hyman (2009) argues that in such systems "stress" is always culminative and obligatory.


We can explain the culminativity and obligatoriness of stress by developing a proper understanding of what is meant by "stress." Instead of saying that the metrical module accounts for the rhythmic structure of words, we could simply say that it accounts for stress, thus taking the term to stand for the overall metrical structure of words. In this view, metrical structure is placed on the same level as pitch, i.e. as a word-level property that is assigned to words with reference to accents (if present), which in this capacity are, as previously stated, pre-specified metrical heads. The difference is that, while pitch is an exponent of accent (and thus is absent if there is no accent), metrical structure is a parametric choice that is made for the language as a whole. If a word has an accent, this accent determines the manner in which the metrical structure is associated to the word. If there is no accent, the metrical structure resorts to a default mode of association. This means that languages can have stress without accent (when stress is fully automatic, and often variable),16 and accent without stress (in which case accent has cues such as pitch).

16 A case in point would be Indonesian stress; cf. Odé and van Heuven (2004).

8 Conclusions

In this chapter we have considered the phenomenon of pitch accent, which has necessarily entailed a detailed discussion of the notion "accent." I have focused on analytical issues, i.e. on how definitions of basic notions such as tone, accent, and stress allow or disallow certain types of analysis. Alongside the idea that lexical relevance or salience of pitch is a sufficient condition for tone, we have considered a more conservative view, which insists on distinctivity. Whereas the former view can essentially do away with pitch accent as a prosodic type, the latter view is compelled to adopt this notion in cases where pitch is not distinctive. I then showed that even systems in which pitch appears to function distinctively can be analyzed in terms of accents, if accents are neither required to be obligatory nor culminative. There is thus a class of systems that is ambiguous between a tonal and an accentual analysis.

In summary, the two opposing views in this debate are those that maximize the use of tone (giving up distinctivity as a necessary criterion) and those that maximize the use of accents (which are neither necessarily obligatory nor culminative). By developing a specific notion of accent, we then considered the relationship between accent and non-pitch properties covered by the umbrella term "stress," making the perhaps obvious connection between stress and rhythmic or metrical structure. This view is further developed in van der Hulst (2010c).

Let us finally observe that the status of word-level pitch properties is not entirely unique. All the distinctions that we can establish for relationships between accent and pitch can also be established for accent and properties such as duration and vowel quality. Note that in these domains we do not encounter the claim that any word level occurrence of duration or vowel quality automatically entails the phonological categories "length" and "tense." This, then, presents an asymmetry in the assessment of what is considered to be phonological: why speak of tone (instead of accent) if pitch is not used distinctively (and is thus a predictable cue
of accent) if, at the same time, cases in which accent is cued by non-distinctive duration or vowel quality are not analyzed as involving lexical specification of length or of non-distinctive vowel features?

ACKNOWLEDGMENTS I would like to thank Carlos Gussenhoven, Marc van Oostendorp, Keren Rice, and two anonymous reviewers for valuable comments on earlier drafts of this chapter.

REFERENCES

Anderson, John M. & Colin J. Ewen. 1987. Principles of dependency phonology. Cambridge: Cambridge University Press.
Banti, Giorgio. 1988. Two Cushitic systems: Somali and Oromo nouns. In van der Hulst & Smith (1988), 11–49.
Beckman, Jill N. 1998. Positional faithfulness. Ph.D. dissertation, University of Massachusetts, Amherst.
Beckman, Mary E. 1986. Stress and non-stress accent. Dordrecht: Foris.
Biber, Douglas. 1981. The lexical representation of contour tones. International Journal of American Linguistics 47. 271–282.
Blevins, Juliette. 1993. A tonal analysis of Lithuanian nominal accent. Language 69. 237–273.
Bolinger, Dwight L. 1982. Intonation and its parts. Language 58. 505–532.
Bruce, Gösta. 1977. Swedish word accents in sentence perspective. Lund University Department of Linguistics Working Papers 12. 219–228.
Bruce, Gösta. 1999. Word tone in Scandinavian languages. In van der Hulst (1999b), 605–633.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clark, Mary. 1987. Japanese as a tone language. In Takashi Imai & Mamoru Saito (eds.) Issues in Japanese linguistics, 53–106. Dordrecht: Foris.
Clark, Mary. 1988. An accentual analysis of the Zulu noun. In van der Hulst & Smith (1988), 51–79.
Clements, G. N. 2001. Representational economy in constraint-based phonology. In T. A. Hall (ed.) Distinctive feature theory, 71–146. Berlin & New York: Mouton de Gruyter.
Clements, G. N. 2009. The role of features in speech sound inventories. In Raimy & Cairns (2009), 19–68.
Clements, G. N. & John A. Goldsmith (eds.) 1984. Autosegmental studies in Bantu tone. Dordrecht: Foris.
Coetsem, Frans van. 1996. Towards a typology of lexical accent: Stress accent and pitch accent in a renewed perspective. Heidelberg: Carl Winter.
Cutler, Anne. 1984. Stress and accent in language understanding and production. In Dafydd Gibbon & Helmut Richter (eds.) Intonation, accent and rhythm, 77–90. Berlin & New York: Mouton de Gruyter.
de Lacy, Paul. 2002. The interaction of tone and stress in Optimality Theory. Phonology 19. 1–32.
Downing, Laura J. 2010. Accent in African languages. In van der Hulst et al. (2010b), 381–428.
Duanmu, San. 2000. The phonology of Standard Chinese. Oxford: Oxford University Press.
Duanmu, San. 2004. Tone and non-tone languages: An alternative to language typology and parameters. Language and Linguistics 5. 891–924.


Fox, Anthony. 2000. Prosodic features and prosodic structure: The phonology of suprasegmentals. Oxford: Oxford University Press.
Fromkin, Victoria A. (ed.) 1978. Tone: A linguistic survey. New York: Academic Press.
Fukui, Rei. 2003. Pitch accent systems in Korean. In Shigeki Kaji (ed.) Cross-linguistic studies of tonal phenomena: Historical development, phonetics of tone, and descriptive studies, 275–286. Tokyo: ICLAA.
Garde, Paul. 1968. L'Accent. Paris: Presses Universitaires de France.
Goldsmith, John A. 1976a. Autosegmental phonology. Ph.D. dissertation, MIT.
Goldsmith, John A. 1976b. An overview of autosegmental phonology. Linguistic Analysis 2. 23–68.
Goldsmith, John A. 1981. English as a tone language. In Didier L. Goyvaerts (ed.) Phonology in the 1980s, 287–308. Ghent: E. Story-Scientia.
Goldsmith, John A. 1988. Prosodic trends in the Bantu languages. In van der Hulst & Smith (1988), 81–93.
Gordon, Matthew. Forthcoming. Disentangling stress and pitch accent: toward a typology of different prosodic levels. In Harry van der Hulst (ed.) Word accent: Theoretical and typological issues. Berlin & New York: Mouton de Gruyter.
Greenberg, Joseph H. & Dorothea Kaschube. 1976. Word prosodic systems: A preliminary report. Working Papers on Language Universals 20. 1–18.
Gussenhoven, Carlos. 2004. The phonology of tone and intonation. Cambridge: Cambridge University Press.
Gussenhoven, Carlos & Gösta Bruce. 1999. Word prosody and intonation. In van der Hulst (1999b), 233–271.
Halle, Morris & Jean-Roger Vergnaud. 1982. On the framework of autosegmental phonology. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 65–82. Dordrecht: Foris.
Haraguchi, Shōsuke. 1977. The tone pattern of Japanese: An autosegmental theory of tonology. Tokyo: Kaitakusha.
Haraguchi, Shōsuke. 1988. Pitch accent and intonation in Japanese. In van der Hulst & Smith (1988), 123–150.
Haraguchi, Shōsuke. 1991. A theory of stress and accent. Dordrecht: Foris Publications.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hermans, Ben. 1994. The composite nature of accent, with case studies of the Limburgian and Serbo-Croatian pitch accent. Ph.D. dissertation, Free University, Amsterdam.
Hockett, Charles F. 1955. A manual of phonology. Baltimore: Waverly Press.
Hollenbach, Barbara. 1988. The asymmetrical distribution of tone in Copala Trique. In van der Hulst & Smith (1988), 167–182.
Hualde, José Ignacio. 1999. Basque accentuation. In van der Hulst (1999b), 947–994.
Hulst, Harry van der. 1999a. Word accent. In van der Hulst (1999b), 3–115.
Hulst, Harry van der (ed.) 1999b. Word prosodic systems of the languages of Europe. Berlin & New York: Mouton de Gruyter.
Hulst, Harry van der. 2009. Brackets and grid marks or theories of primary accent and rhythm. In Raimy & Cairns (2009), 225–245.
Hulst, Harry van der. 2010a. Word accent systems in the languages of Europe. In van der Hulst et al. (2010b), 429–508.
Hulst, Harry van der. 2010b. Word accent: Terms, typologies and theories. In van der Hulst et al. (2010b), 3–54.
Hulst, Harry van der. 2010c. Word accentual systems. Unpublished ms., University of Connecticut.
Hulst, Harry van der & Norval Smith. 1988. Autosegmental studies on pitch accent. Dordrecht: Foris.
Hulst, Harry van der, Keren Rice & W. Leo Wetzels. 2010a. Word accent systems in the languages of Middle America. In van der Hulst et al. (2010b), 249–312.


Hulst, Harry van der, Rob Goedemans & Ellen van Zanten (eds.) 2010b. A survey of word accentual systems in the languages of the world. Berlin & New York: Mouton de Gruyter.
Hyman, Larry M. 1977. On the nature of linguistic stress. In Larry M. Hyman (ed.) Studies in stress and accent, 37–82. Los Angeles: Department of Linguistics, University of Southern California.
Hyman, Larry M. 1978. Tone and/or accent. In Donna Jo Napoli (ed.) Elements of tone, stress and intonation, 1–20. Washington: Georgetown University Press.
Hyman, Larry M. 1981. Tonal accent in Somali. Studies in African Linguistics 12. 169–203.
Hyman, Larry M. 1982. Globality and the accentual analysis of Luganda tone. Journal of Linguistic Research 2–3. 1–40.
Hyman, Larry M. 1989. Accent in Bantu: An appraisal. Studies in the Linguistic Sciences 19. 115–134.
Hyman, Larry M. 2001. Tone systems. In Martin Haspelmath, Ekkehard König, Wulf Oesterreicher & Wolfgang Raible (eds.) Language typology and language universals: An international handbook, vol. 2, 1367–1380. Berlin & New York: Mouton de Gruyter.
Hyman, Larry M. 2006. Word-prosodic typology. Phonology 23. 225–257.
Hyman, Larry M. 2007. Universals of tone rules: 30 years later. In Tomas Riad & Carlos Gussenhoven (eds.) Tones and tunes, vol. 1: Typological studies in word and sentence prosody, 1–34. Berlin & New York: Mouton de Gruyter.
Hyman, Larry M. 2009. How (not) to do phonological typology: The case of pitch-accent. Language Sciences 31. 213–238.
Inkelas, Sharon & Draga Zec. 1988. Serbo-Croatian pitch accent: The interaction of tone, stress, and intonation. Language 64. 227–248.
Kodzasov, Sandro. 1999. Caucasian: Daghestanian languages. In van der Hulst (1999b), 995–1020.
Ladd, D. Robert. 2008. Intonational phonology. 2nd edn. Cambridge: Cambridge University Press.
Leben, William R. 1971. Suprasegmental and segmental representation of tone. Studies in African Linguistics. Supplement 2. 183–200.
Liberman, Mark. 1975. The intonational system of English. Ph.D. dissertation, MIT. Published 1979, New York: Garland.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8. 249–336.
Lockwood, D. G. 1982. Parameters for a typology of word accent. Lacus Forum 9. 231–240.
Lockwood, D. G. 1983. Tone in a non-substantive theory of language. Lacus Forum 10. 131–140.
MacAulay, Donald (ed.) 1992. The Celtic languages. Cambridge: Cambridge University Press.
Martinet, André. 1960. Eléments de linguistique générale. Paris: Librairie Armand Colin.
McCawley, James D. 1968. The phonological component of a grammar of Japanese. The Hague & Paris: Mouton.
McCawley, James D. 1978. What is a tone language? In Fromkin (1978), 113–131.
Meeussen, A. E. 1972. Japanese accentuation as a restricted tone system. Papers in Japanese Linguistics 1. 267–270.
Mock, Carol. 1988. Pitch accent and stress in Isthmus Zapotec. In van der Hulst & Smith (1988), 197–223.
Nellis, Donald G. & Barbara Hollenbach. 1980. Fortis versus lenis in Cajonos Zapotec phonology. International Journal of American Linguistics 46. 92–105.
Odden, David. 1988. Predictable tone systems in Bantu. In van der Hulst & Smith (1988), 225–251.
Odé, Cecilia & Vincent van Heuven (eds.) 1994. Phonetic studies of Indonesian prosody. University of Leiden: Vakgroep Talen en Culturen van Zuidoost-Azië en Oceanië.
Pierrehumbert, Janet B. 1980. The phonology and phonetics of English intonation. Ph.D. dissertation, MIT.
Pierrehumbert, Janet B. & Mary E. Beckman. 1988. Japanese tone structure. Cambridge, MA: MIT Press.


Pike, Kenneth L. 1948. Tone languages. Ann Arbor: University of Michigan Press.
Poser, William J. 1984. The phonetics and phonology of tone and intonation in Japanese. Ph.D. dissertation, MIT.
Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19–100.
Pulleyblank, Douglas. 1986. Tone in Lexical Phonology. Dordrecht: Reidel.
Raimy, Eric & Charles Cairns (eds.) 2009. Contemporary views on architecture and representations in phonology. Cambridge, MA: MIT Press.
Rice, Keren. 2010. Accent in the native languages of North America. In van der Hulst et al. (2010b), 155–248.
Schadeberg, Thilo C. 1973. Kinga: A restricted tone system. Studies in African Linguistics 4. 23–47.
Schiering, René & Harry van der Hulst. 2010. Word accent systems in the languages of Asia. In van der Hulst et al. (2010b), 509–614.
Schuh, Russell G. 1977. Tone rules. In Fromkin (1978), 221–256.
Selkirk, Elisabeth. 1996. The prosodic structure of function words. In James L. Morgan & Katherine Demuth (eds.) Signal to syntax: Bootstrapping from speech to grammar in early acquisition, 187–213. Mahwah, NJ: Lawrence Erlbaum.
Suárez, Jorge A. 1983. The Mesoamerican Indian languages. Cambridge: Cambridge University Press.
Trubetzkoy, Nikolai S. 1939. Grundzüge der Phonologie. Göttingen: Vandenhoeck & Ruprecht. Translated 1969 by Christiane A. M. Baltaxe as Principles of phonology. Berkeley & Los Angeles: University of California Press.
Voorhoeve, Jan. 1973. Safwa as a restricted tone system. Studies in African Linguistics 4. 1–21.
Weidert, Alfons. 1981. Tonologie: Ergebnisse, Analysen, Vermutungen. Tübingen: Niemeyer.
Welmers, William E. 1973. African language structures. Berkeley: University of California Press.
Wetzels, W. Leo & Sergio Meira. 2010. A survey of South American stress systems. In van der Hulst et al. (2010b), 313–380.
Williamson, Kay. 1988. Tone and accent in Ịjọ. In van der Hulst & Smith (1988), 253–278.
Wright, Martha. 1983. A metrical approach to tone sandhi in Chinese dialects. Ph.D. dissertation, University of Massachusetts, Amherst.
Wright, Martha. 1988. A level-based model for pitch accent languages. In van der Hulst & Smith (1988), 295–316.
Yip, Moira. 1980. The tonal phonology of Chinese. Ph.D. dissertation, MIT.
Yip, Moira. 2002. Tone. Cambridge: Cambridge University Press.
Zanten, Ellen van & Philomena Dol. 2010. Word stress and pitch accent in Papuan languages. In van der Hulst et al. (2010b), 113–154.
Zanten, Ellen van, Ruben Stoel & Bert Remijsen. 2010. Stress types in Austronesian languages. In van der Hulst et al. (2010b), 87–112.

43 Extrametricality and Non-finality

Brett Hyde

1 Introduction

Extrametricality and non-finality are often treated as two terms that refer to the same principle of final stress avoidance, the former being implemented in a rule-based framework and the latter being implemented in a constraint-based framework. The labels and native frameworks of the two approaches, however, do not constitute the full extent of their differences. As Prince and Smolensky (1993) explain, extrametricality focuses on the parsability of prosodic constituents, while non-finality focuses on the position of stress peaks. Extrametricality is concerned primarily with dominance relations, and non-finality is concerned primarily with prominence relations.

Liberman and Prince (1977) introduced extrametricality in their foundational work on metrical stress theory to capture the apparent exclusion of certain English suffixes from the domain of stress rules.1 Recognizing the potentially wide range of applications, Hayes (1980) proposed the general formulation for extrametricality rules in (1), where the initial or final constituent of a particular domain is designated as extrametrical.

(1) C → [+extrametrical] / { ___ ]D , D[ ___ }

The result of extrametricality is essentially invisibility to the application of subsequent rules. When a constituent is designated as extrametrical, it is excluded from the domain of rules that might incorporate it into higher levels of prosodic structure. An extrametrical segment cannot be associated with a mora, for example, an extrametrical syllable cannot be footed, and an extrametrical foot cannot be included in a prosodic word.

1 Liberman and Prince introduce the notion of extrametricality to account for the apparent invisibility to stress rules of final -y in English: “From our point of view, -y functions as a kind of ‘extrametrical’ syllable; it simply does not take part in the metrical calculation” (Liberman and Prince 1977: 293). Later in the same paragraph: “-y is effectually hors de combat in the basic determination of metrical structure.”


As part of his general approach, Hayes proposed four restrictions on extrametricality. The first, constituency, ensures that only constituents – segments, syllables, feet, affixes, and so on – can be designated as extrametrical. Peripherality restricts extrametrical constituents to the edges of a domain, while edge markedness prefers that they occur at the right edge. Finally, non-exhaustivity ensures that extrametricality cannot exhaust the domain of a rule, preventing it from applying altogether.

Prince and Smolensky (1993) incorporated similar restrictions into non-finality when they presented it as a replacement for extrametricality as part of their initial work on Optimality Theory. As (2) indicates, non-finality only applies at the edge of a domain (peripherality), and it only applies at the right edge in particular (edge markedness). The stress peaks that must avoid the right edge are prosodic categories (constituency) that are the heads of larger categories.

(2) Head-based non-finality
No head Cat1 of a Cat2 occurs in final position in Cat3 (where Cat1, Cat2, and Cat3 are prosodic categories).

The effect of non-finality constraints is to prevent prominent categories – the heads that represent stress peaks – from occurring at the right edge of a domain. Non-finality might prevent the head moras of syllables from occurring at the right edges of feet, for example, or head syllables of feet from occurring at the right edge of a prosodic word.

Although it is usually a simple matter to distinguish non-finality from extrametricality, some approaches do exhibit characteristics of both. This is especially true of approaches that target relationships between final constituents and entries on the metrical grid, the classical device for representing stress patterns (see chapter 41: the representation of word stress). As (3) indicates, the non-finality constraints of Hyde (2003) prohibit stress peaks – grid entries – in final position, but they specify a particular final constituent that stress must avoid.

(3) Grid-based non-finality
No Cat1-level grid entry occurs over the final Cat2 of Cat3 (where Cat1, Cat2, and Cat3 are prosodic categories).

Under the grid-based approach, a non-finality constraint might prevent foot-level grid entries (secondary stress) from occurring over the final mora of a foot, for example, or prosodic word-level entries (primary stress) from occurring over the final foot of a prosodic word. The grid-based non-finality approach is like the head-based approach, then, in that it focuses on stress peaks, but it is similar to an extrametricality approach in that it excludes a particular final element from associating with some structure (in this case, certain levels of the metrical grid). A similar mixture of characteristics can be found in approaches that are typically considered extrametricality approaches. Since the grid-based account of Prince (1983) lacked feet, for example, the effect of extrametricality was to prevent syllables from mapping to the metrical grid – in other words, from associating with a stress peak – rather than to prevent them from being footed.


As we shall see below, extrametricality and non-finality are among the best-motivated principles in phonological theory, with support coming from several different lines of evidence. Perhaps the most compelling, however, is that they can be usefully applied in an unusually broad range of contexts. §2 and §3 examine phenomena involving final syllables and final feet, respectively, two types that can be handled equally well by extrametricality or non-finality. §4 examines phenomena involving final moras, a strength of non-finality approaches, and §5 examines effects involving final consonants, a strength of extrametricality approaches. In §6, I review some of the classic arguments marshaled in support of extrametricality and discuss the extent to which they also support non-finality. Finally, §7 reviews some of the arguments for the asymmetry in edge specifications (edge markedness).

2 Final syllables

One of the most well-known uses for extrametricality and non-finality is avoidance of stress on final syllables. The most compelling examples are languages where a binary pattern is perturbed at a word’s right edge so that an anticipated final stress either arrives early or is absent altogether. An anticipated final stress arrives early, for example, in “iambic reversal” languages such as Southern Paiute (Sapir 1930, 1949), Axininca Campa (Payne et al. 1982), and Aguaruna (Payne 1990; Hung 1994). In the Aguaruna examples below, alternation of unstressed and stressed syllables from left to right would place the final stress on the ultima in even-parity forms, but it actually emerges on the penult.2

(4) Final stress avoidance in Aguaruna
a. iˈtʃiˌnaka          ‘pot (nom)’
b. iˈtʃinaˌkana        ‘pot (acc)’
c. tʃaŋˈkinaˌŋuˌmina   ‘your basket (acc)’
d. tʃaŋˌkinaˌŋumiˌnaki ‘only your basket (acc)’

An anticipated final stress is absent altogether in the iambic pattern of languages like Hixkaryana (Derbyshire 1985), Carib (Hoff 1968), and Choctaw (Nicklas 1972, 1975). In the Choctaw examples below, alternation of unstressed and stressed syllables would position the final stress on the ultima in even-parity forms, but the ultima and the penult both emerge without a stress. The examples are combinations of /pisa/ ‘to see’, /tʃi-/ ‘you (obj)’, /-tʃi/ ‘causative’, and /-li/ ‘I (subj)’.

(5) Final stress avoidance in Choctaw
a. tʃiˈpisa
b. tʃiˈpisali
c. tʃiˈpisaˈtʃili

An extrametricality approach would produce the Aguaruna and Choctaw patterns by making word-final syllables extrametrical and then constructing iambic feet from left to right. With the final syllable extrametrical, the last two syllables in an even-parity form cannot form an iambic foot, so the expected final stress fails to appear. The difference between the two languages would be that Aguaruna tolerates degenerate feet – and can parse the penult as a degenerate foot after iambic footing is no longer possible – but Choctaw does not. Since Aguaruna can parse the penultimate syllable as a degenerate foot, as (6) illustrates, the expected final stress shifts to the penult.

2 Hung (1994) infers the position of stress from the absence of vowel reduction processes. Her account is based on Payne’s (1990) description.

(6) Aguaruna extrametricality analysis
i.tʃi.na.ka → i.tʃi.na〈ka〉 (extrametricality) → (i.ˈtʃi)(ˌna)〈ka〉 (parsing)

Since Choctaw cannot parse the penult as a degenerate foot, however, as (7) illustrates, the expected final stress is absent altogether.

(7) Choctaw extrametricality analysis
tʃi.pi.sa.li → tʃi.pi.sa〈li〉 (extrametricality) → (tʃi.ˈpi)sa〈li〉 (parsing)

For additional, and more detailed, extrametricality analyses of final stress avoidance, see Halle and Vergnaud (1987) and Hayes (1995).

A non-finality approach produces the same patterns, although a bit more directly, simply by prohibiting stress at the right edge of the word. Head-based non-finality, where heads represent stress, prohibits the head syllable of a foot from occurring in final position. Grid-based non-finality, where grid entries represent stress, prohibits a foot-level gridmark from occurring over the final syllable. In either case, prohibiting final stress effectively prohibits a final iambic foot. The difference between Aguaruna and Choctaw is in the options that they employ to avoid a final iamb. As (8) illustrates, Aguaruna employs a final trochaic foot, shifting the expected final stress to the penult. Notice that the non-finality analysis does not necessarily require underparsing like the extrametricality analysis. (In (8) and examples throughout, the expression “X ›› Y” indicates that X is more harmonic than Y or that some constraint, in this case non-finality, prefers X to Y.)

(8) Aguaruna non-finality analysis
non-finality: (i.ˈtʃi)(ˌna.ka) ›› (i.ˈtʃi)(na.ˌka)

In contrast, as (9) illustrates, Choctaw prefers to leave its final two syllables unparsed in order to avoid a final iamb.3

(9) Choctaw non-finality analysis
non-finality: (tʃi.ˈpi)sa.li ›› (tʃi.ˈpi)(sa.ˈli)

3 An alternative to leaving the final two syllables unparsed is to parse them into a stressless foot: (tʃi.ˈpi)(sa.li). See Hyde (2002) for discussion.
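The constraint-based alternative in (8) and (9) can be sketched in the same style. The candidate representation and the constraint definitions below are schematic stand-ins, not the chapter’s formal definitions; strict ranking is simulated by comparing tuples of violation counts, so one mark on the higher-ranked constraint outweighs any number of marks on the lower-ranked one:

def violations(candidate):
    # candidate: a list of feet (tuples; stressed syllable marked "'") and
    # unparsed syllables (plain strings)
    flat = [s for unit in candidate
            for s in (unit if isinstance(unit, tuple) else (unit,))]
    nonfinality = 1 if flat[-1].startswith("'") else 0          # stress on the final syllable?
    parse = sum(1 for unit in candidate if isinstance(unit, str))  # unfooted syllables
    return (nonfinality, parse)   # NONFINALITY outranks PARSE

def optimal(candidates):
    return min(candidates, key=violations)

# Choctaw: leaving two syllables unparsed beats a stressed final iamb.
print(optimal([
    [("tʃi", "'pi"), ("sa", "'li")],   # *(tʃi.'pi)(sa.'li): fatal NONFINALITY mark
    [("tʃi", "'pi"), "sa", "li"],      #  (tʃi.'pi)sa.li: the attested winner
]))
# Aguaruna: under the same ranking, a final trochee beats a final iamb.
print(optimal([
    [("i", "'tʃi"), ("na", "'ka")],    # *(i.'tʃi)(na.'ka): final stress
    [("i", "'tʃi"), ("'na", "ka")],    #  (i.'tʃi)('na.ka): the attested winner
]))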


See Hyde (2002, 2003) for more detailed grid-based non-finality analyses of final stress avoidance, and McCarthy and Prince (1993) and Kenstowicz (1995) for more detailed head-based non-finality analyses.

In this section, then, we have seen that extrametricality and non-finality provide equally effective analyses for the avoidance of stress on final syllables. Both approaches account for cases where a binary stress pattern is perturbed at the right edge, whether the final stress arrives early or is absent altogether.

3 Final feet

Another important use of extrametricality and non-finality has been to prevent primary stress from occurring over a word-final foot. In the clearest examples of the phenomenon, the primary stress is the penultimate stress, the presence of a secondary stress further to the right being the clearest indication that there is a final foot that primary stress might have occupied. Consider Banawá (Buller et al. 1993; Everett 1996, 1997) and Paumari (Everett 2003).

In Banawá, consonant-initial words have a trochaic pattern, and vowel-initial words have an iambic pattern. In both the trochaic pattern and the iambic pattern, however, the primary stress is the penultimate stress. The secondary stress that follows indicates that there is a final foot that primary stress might have occupied if it had been drawn as far to the right as possible.

(10) Primary stress in Banawá
a. aˈbariˌko       ‘moon’
b. ˌmetuˈwasiˌma   ‘find them’
c. ˌtinaˈrifaˌbune ‘you are going to work’

The primary stress is also the penultimate stress in the consistently iambic Paumari, indicating the presence of a final foot that primary stress has avoided.

(11) Primary stress in Paumari
a. kaˈbahaˌki           ‘to get rained on’
b. ˌahaˈkabaˌra         ‘dew’
c. aˌtʰanaˈrariˌki      ‘sticky consistency’
d. biˌkanaˌtʰaraˈraviˌni ‘to cave in, to fall apart quickly’

It is a relatively simple matter to produce the Banawá and Paumari patterns with either extrametricality or non-finality. In the extrametricality approach, a word-final foot is designated as extrametrical, excluding it from the prosodic word. As (12) illustrates, when a right-headed prosodic word is constructed, it positions the primary stress over the penultimate foot, rather than the final.

(12)
    (x   )(x   )(x   )
     ti na ri fa bu ne
       → extrametricality →
    (x   )(x   )〈(x   )〉
     ti na ri fa bu ne
       → word layer →
    (      x          )
    (x   )(x   )〈(x   )〉
     ti na ri fa bu ne


The non-finality approach produces a similar result, although it does not require that final feet be excluded from the prosodic word. Head-based non-finality avoids primary stress on final feet by prohibiting head feet from occurring in final position. Grid-based non-finality prohibits a prosodic word-level gridmark from occupying the final foot.

(13) non-finality
    (      x          )           (            x    )
    (x   )(x   )(x   )      ››    (x   )(x   )(x   )
     ti na ri fa bu ne             ti na ri fa bu ne

In either case, the primary stress and the foot associated with it are pushed back from the right edge. As a result the primary stress is the penultimate stress, and the associated foot the penultimate foot.

It should be noted at this point that many of the examples cited in the literature on primary stress avoiding final feet are not as compelling as those discussed above. Hayes (1995) presents several languages as potential examples of foot extrametricality: Bedouin Arabic (Blanc 1970), Cayuga (Chafe 1977; Foster 1982; Michelson 1988), Delaware (Goddard 1979, 1982), Eastern Ojibwa (Piggott 1980, 1983), and Palestinian Arabic (Kenstowicz and Abdul-Karim 1980; Kenstowicz 1983). McCarthy (2003) points out, however, that these are not especially clear cases, because no secondary stress has been reported in a position associated with the supposed extrametrical foot. While McCarthy’s point overreaches a bit – Piggott (1983) reports post-tonic secondary stresses in Ojibwa, and patterns of reduction and non-reduction suggest post-tonic feet in Delaware – it is true of many of the traditional examples. As Banawá and Paumari demonstrate, however, the avoidance of stress on final feet is still one of the important functions performed by extrametricality and non-finality.
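Assuming the feet have already been built, the placement of primary stress under either approach reduces to keeping the word-level peak off the final foot. A minimal sketch (the list-of-feet representation and the function name are illustrative, not part of either formalism):

def place_primary(feet):
    """Put primary stress on the rightmost foot that is not word-final."""
    target = len(feet) - 2 if len(feet) > 1 else 0   # avoid the final foot
    return [("PRIMARY" if i == target else "secondary", foot)
            for i, foot in enumerate(feet)]

# Banawá ti.na.ri.fa.bu.ne (10c): primary stress lands on the penultimate foot.
for level, foot in place_primary([("'ti", "na"), ("'ri", "fa"), ("'bu", "ne")]):
    print(level, foot)
# secondary ('ti, na) / PRIMARY ('ri, fa) / secondary ('bu, ne)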

4 Final moras

The avoidance of final moras can make stress sensitive to the weight of syllables generally or to the weight of domain-final syllables (see chapter 57: quantity-sensitivity). As we shall see below, non-finality offers a relatively straightforward analysis in such cases. §4.1 demonstrates how avoidance of syllable-final moras promotes general weight-sensitivity, §4.2 how avoidance of prosodic word-final moras promotes sensitivity to the weight of prosodic word-final syllables, and §4.3 how avoidance of foot-final moras promotes rhythmic lengthening. §4.4 examines the difficulties confronting extrametricality analyses.

4.1 General weight-sensitivity

A fairly common type of weight-sensitivity is the type where stress avoids light syllables. It has been addressed, for example, using the Obligatory Branching Parameter (Hayes 1980) of classical metrical theory and the Peak-Prominence (Prince and Smolensky 1993) and Stress-to-Weight (Hammond and Dupoux 1996; Lorentz 1996) constraints of Optimality Theory. As Hyde (2006, 2007b) demonstrates, non-finality offers an alternative: when syllable-final moras cannot be stressed, stress cannot occupy a monomoraic syllable. Head-based non-finality prohibits stress on syllable-final moras by prohibiting the head mora of a foot from being final in a syllable. Grid-based non-finality prohibits foot-level gridmarks from occupying syllable-final moras.

The clearest examples of stress avoiding light syllables generally are found in quantity-sensitive unbounded stress systems. Murik (Abbott 1985) and Aguacatec (McArthur and McArthur 1956) are typically used to exemplify the default-to-same-side variety.4 As (14) illustrates, Murik avoids stressing a light syllable whenever possible. If a heavy syllable is available, Murik stresses the leftmost. Murik stresses a light syllable, also the leftmost, only in those cases where heavy syllables are absent. Note that in Murik heavy syllables are syllables with long vowels. All others are light.

(14) Murik forms
a. ˈLL    ˈdamag          ‘garden’
b. ˈLLL   ˈdakhanqmp      ‘post’
c. LLLˈH  anHnphaˈʔeːtʰ   ‘lightning’
d. LLˈHL  numaˈʔoːgo      ‘woman’

As (15) illustrates, Aguacatec stresses the rightmost heavy syllable when one is available. Otherwise, it stresses the rightmost light syllable. As in Murik, heavy syllables in Aguacatec are syllables with long vowels.

(15) Aguacatec forms
a. LˈL   kaʔˈpen         ‘day after tomorrow’
b. LLˈL  tʃinhojˈlih-ts  ‘they search for me’
c. LˈH   ʔinˈtaː         ‘my father’
d. ˈHL   ˈmiːtuʔ         ‘cat’

The type of weight-sensitivity found in unbounded stress systems emerges when avoidance of syllable-final moras takes precedence over directional orientation. In Murik, as (16) illustrates, non-finality in the syllable takes precedence over leftward orientation. Stress appears over the leftmost heavy syllable, rather than a light syllable, even if it does not occur exactly at the left edge.

(16) Non-finality preferences in the syllable
          x              x
    μ  μ  μμ  μ          μ  μ  μμ  μ
    nu.ma.ʔoo.go   ››    nu.ma.ʔoo.go

4 Languages presented as default-to-same-side systems often are not completely convincing in this classification. Since individual forms never contain more than one heavy syllable in Murik, for example, the significance of being the leftmost is less than clear. There is a similar problem with the classification of Aguacatec. McArthur and McArthur do not demonstrate the pattern for forms with more than one heavy syllable. For a more thorough discussion of non-finality’s role in both default-to-same-side and default-to-opposite-side systems, see Hyde (2006).


Similarly, the Aguacatec pattern would emerge when non-finality in the syllable takes precedence over rightward orientation. Moraic non-finality constraints applied to the syllable domain, then, have the same effect as Obligatory Branching, Peak-Prominence, and Stress-to-Weight.

One point that favors the non-finality approach over the others is that non-finality constraints are motivated by their usefulness in a much wider range of contexts – the avoidance of stress on final syllables (§2) and feet (§3), for example – many of which have nothing to do with syllable weight.
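The default-to-same-side patterns in (14) and (15) amount to a simple search procedure, which the following sketch makes explicit. Weights are supplied directly as 'H'/'L' strings, and the function name is an illustrative convenience:

def unbounded_stress(weights, edge):
    """Stress the heavy syllable nearest the given edge, else default to that edge."""
    order = range(len(weights)) if edge == "left" else range(len(weights) - 1, -1, -1)
    for i in order:
        if weights[i] == "H":
            return i                                  # nearest heavy syllable
    return 0 if edge == "left" else len(weights) - 1  # default to the same side

print(unbounded_stress("LLHL", "left"))   # 2: Murik numaˈʔoːgo (LLˈHL)
print(unbounded_stress("LL", "left"))     # 0: Murik ˈdamag (ˈLL)
print(unbounded_stress("LLL", "right"))   # 2: Aguacatec tʃinhojˈlih-ts (LLˈL)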

4.2 Weight-sensitivity in word-final syllables

In this section, we examine the situation where stress is sensitive to the weight of prosodic word-final syllables only. To make stress sensitive to the weight of prosodic word-final syllables, all that is necessary is to require that prosodic word-final moras be stressless. When word-final moras cannot be stressed, stress can occupy a heavy final syllable, but it cannot occupy a light final syllable.

One situation where stress is sensitive to the weight of prosodic word-final syllables only arises in syllabic trochee systems. Consider the case of Wergaia (Hercus 1986), where heavy syllables are syllables with long vowels (typically limited to initial position), syllables with diphthongs (limited to initial or final position), and closed syllables. As (17) illustrates, Wergaia stress is largely weight-insensitive. It falls automatically on every odd-numbered syllable counting from the left, except the final syllable. Stress falls on final syllables only if they are odd-numbered and heavy, as in (17f) and (17g). It avoids final syllables if they are light, as in (17d) and (17e).

(17) Avoidance of final light syllables in Wergaia
a. ˈLL     ˈwuru          ‘mouth’
b. ˈHL     ˈŋaːri         ‘oak tree’
c. ˈLH     ˈŋarau         ‘wild turkey’
d. ˈHLL    ˈmaːbila       ‘to tell lies’
e. ˈLHL    ˈdaguŋga       ‘to punch someone’
f. ˈLLˌH   ˈbunaˌÕug      ‘broad-leaved mallee’
g. ˈLLˌH   ˈwaɲaˌgai      ‘catfish’
h. ˈLLˌLL  ˈbunaˌmala     ‘fine-leaved mallee’
i. ˈLLˌLH  ˈwuregˌwuraŋ   ‘speaking together, gabbing’

In Hyde’s (2007b) grid-based non-finality approach, the Wergaia pattern emerges when it is more important that foot-level gridmarks avoid prosodic word-final moras than it is that feet contain a stressed syllable. When the final syllable of an odd-parity form is light, stress cannot occur on the final syllable without occurring on the final mora, so the final foot emerges without a stress, as in (18).

(18) Moraic non-finality preferences in the prosodic word
    (x  )(    )          (x  )(   x)
     μμ   μ  μ     ››     μμ   μ  μ
     maa. bi.la           maa. bi.la


When the final syllable is heavy, however, stress can occupy the final syllable without occupying the final mora, so the final foot emerges with a stress, as in (19).

(19)
    (x   )(x  )
     μ  μ  μμ
     wa.ɲa.gai

The same result can be produced in a more standard structural framework when non-finality in the prosodic word takes precedence over the constraints that require syllables to be parsed into feet. Odd-parity forms with a light final syllable would emerge with the final syllable unparsed and stressless. Odd-parity forms with a heavy final syllable would emerge with the final syllable parsed and stressed. The result can also be produced with head-based non-finality by prohibiting the head mora of a foot from being final in the prosodic word.5

5 An alternative approach is to rely on a foot minimality requirement to distinguish between light and heavy final syllables. This is essentially the approach adopted by Hayes (1995). As Hyde (2007b) points out, however, such an approach produces the same type of weight-sensitivity in non-final syllables as well, where it is, unfortunately, unattested.

4.3 Rhythmic lengthening

Non-finality can be used not only as a simple detector of syllable weight – the use focused on in §4.1 and §4.2 – but also as a trigger to increment syllable weight. When stress would fall on an underlyingly light syllable, non-finality can force the syllable to become heavy on the surface. Rhythmic lengthening is an example of this effect. It results from avoidance of stress on foot-final moras or syllable-final moras.

There are two types of rhythmic lengthening: iambic lengthening and trochaic lengthening. The former adds a mora to the stressed syllable of an iamb; the latter adds a mora to the stressed syllable of a trochee. The iambic type appears to occur more frequently than the trochaic type (Hayes 1985, 1987, 1995; Kager 1993; chapter 44: the iambic–trochaic law), but both are well attested. Iambic lengthening can be found in Carib (Hoff 1968), for example. As (20) illustrates, Carib lengthens even-numbered syllables counting from the left, but not the final syllable, producing a fairly typical iambic pattern.

(20) Iambic lengthening in Carib
a. tonoro → tonoːro               ‘large bird’
b. kurijara → kuriːjara           ‘canoe’
c. woturoporo → wotuːropoːro      ‘cause to ask’
d. woturopotake → wotuːropoːtake  ‘I shall ask’

Trochaic lengthening can be found in Chimalapa Zoque (Knudson 1975), a dual stress language based on trochaic footing. In Chimalapa Zoque, stress occurs on the initial syllable and the penult, with the stress on the penult being primary. As (21) illustrates, every stressed syllable must be heavy on the surface. When an underlyingly light syllable is stressed, the syllable is made heavy by lengthening its vowel.

(21) Trochaic lengthening in Chimalapa Zoque
a. ˈkosaʔ → ˈkoːsaʔ                       ‘scold (imp)’
b. ˌhuˈkutɨ → ˌhuːˈkuːtɨ                  ‘fire’
c. ˌwɨti huˈkutɨ → ˌwɨːti huˈkuːtɨ        ‘big fire’
d. ˌwituʔpajˈnɨksɨ → ˌwiːtuʔpajˈnɨksɨ     ‘he is coming and going’

Under a non-finality approach, rhythmic lengthening is just a special case of stress avoiding light syllables. To avoid stressing a light syllable, which would mean stressing a domain-final mora, the vowel of the syllable lengthens, making it heavy. Consider first the situation where non-finality prohibits stress on the final moras of feet (Kager 1995; Hyde 2007b) – a head-based approach by prohibiting the head mora of the foot from being final in the foot, a grid-based approach by prohibiting foot-level gridmarks from occupying the foot-final mora. In this situation, stress must avoid light foot-final syllables, making it necessary for such syllables to lengthen if they are going to carry a stress. When avoidance of stress on foot-final moras takes precedence over the prohibition against mora insertion, the result is iambic lengthening, as (22) illustrates.

(22) Moraic non-finality preferences in the foot
        x                  x
     μ  μμ              μ  μ
    (CV.CVV)     ››    (CV.CV)

Avoidance of foot-final moras cannot, however, produce lengthening in trochees. Since there is no danger of a stress occupying the final mora in a trochaic foot, lengthening would be gratuitous.

Now consider the situation where stress avoids syllable-final moras. In this situation, stress must avoid a light syllable whether it is final in the foot or initial. When it takes precedence over prohibitions against mora insertion, then, as (23) illustrates, avoidance of syllable-final moras produces lengthening in both iambs and trochees.

(23) Non-finality preferences in the syllable
a. Iambic foot
        x                  x
     μ  μμ              μ  μ
    (CV.CVV)     ››    (CV.CV)
b. Trochaic foot
     x                  x
     μμ  μ              μ  μ
    (CVV.CV)     ››    (CV.CV)
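The predictions in (22) and (23) follow from a single condition, which the sketch below states directly: a stressed syllable must add a mora whenever its only mora would otherwise be final in the relevant domain. The mora-count representation and the domain parameter are simplifications of the chapter’s grid- and head-based formulations, not a direct implementation of either:

def lengthen(foot_moras, head, domain):
    """foot_moras: moras per syllable in one foot; head: index of the stressed syllable."""
    head_is_foot_final = head == len(foot_moras) - 1
    must_lengthen = foot_moras[head] == 1 and (
        domain == "syllable" or (domain == "foot" and head_is_foot_final))
    if must_lengthen:                     # CV -> CVV under stress
        foot_moras = foot_moras[:head] + [2] + foot_moras[head + 1:]
    return foot_moras

print(lengthen([1, 1], 1, "foot"))       # [1, 2]: iambic lengthening
print(lengthen([1, 1], 0, "foot"))       # [1, 1]: no trochaic lengthening
print(lengthen([1, 1], 0, "syllable"))   # [2, 1]: trochaic lengthening as well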

The non-finality approach meets the primary burden for an account of rhythmic lengthening, in that it produces both iambic lengthening and trochaic lengthening, but it has an additional advantage in that it predicts the greater frequency of lengthening among iambic systems. Non-finality in the syllable and non-finality in the foot both produce iambic lengthening, but only non-finality in the syllable produces trochaic lengthening. Since there are two sources of pressure for lengthening in iambic feet but only one for lengthening in trochaic feet, we would expect lengthening to occur with greater frequency among iambic systems, all else being equal. For further discussion of rhythmic lengthening and related issues, see chapter 44: the iambic–trochaic law.

4.4 The obstacle to an extrametricality approach

As Hayes (1995) observes, there are significant obstacles from a structural standpoint to the implementation of mora extrametricality. For an extrametricality approach to produce the types of effects discussed in §4.1–§4.3, it must uniquely exclude the extrametrical mora from some higher prosodic structure. It is not clear, however, how moras can be excluded from higher prosodic structure in a way that produces the desired effects without abandoning syllable integrity (Prince 1976) or preventing extrametrical moras and their associated segments from being syllabified.

For example, in the Wergaia case discussed in §4.2, word-final moras are invisible to stress assignment. This prevents stress from falling on light final syllables but still allows it to occupy heavy final syllables. There are only two ways in which this type of effect might be produced under an extrametricality approach. The first is to assume that the final mora is extrametrical and that extrametrical moras cannot be footed. When the final syllable is light, as in (24a), excluding the final mora from the foot level effectively excludes the final syllable, rendering it unstressable. When the final syllable is heavy, however, as in (24b), excluding the final mora does not entirely exclude the final syllable, allowing it to support stress.

(24)
a. Light final syllable
    ( x      )
      μ   μ   〈μ〉
    [CV] [CV] [CV]
b. Heavy final syllable
    ( x      )( x )
      μ   μ    μ 〈μ〉
    [CV] [CV] [CVC]

The problem is most obvious in (24b). Uniquely excluding the final mora of a heavy syllable from the final foot means that the foot must split the syllable in violation of syllable integrity. Hayes explicitly rejects this possibility. Abandoning syllable integrity would make it possible for stress to occur on codas (Hayes 1995), and it would make it possible for multiple stresses to occur within a single syllable (Hyde 2007a).

The second option is to assume that the final mora is extrametrical and that extrametrical moras cannot be syllabified. When the final mora is unsyllabified in CV-final words, as in (25a), no syllable can be built on the final vowel, so no stress is possible in this position. When the final mora is unsyllabified in CVV- or CVC-final words, however, as in (25b), there is still a mora on which a final syllable can be constructed. Though what would otherwise form a heavy syllable only forms a light syllable, the syllable can still be footed, allowing stress to occur in final position.

(25)
a. Final CV
   i. Preceding consonant stray
       ( x      )
         μ   μ    〈μ〉
       [CV] [CV]  C V
   ii. Preceding consonant as coda
       ( x       )
         μ   μ    〈μ〉
       [CV] [CVC]   V
b. Final CVC
       ( x      )( x )
         μ   μ    μ  〈μ〉
       [CV] [CV] [CV] C

Hayes does not actually consider this second option and, therefore, does not reject it explicitly. Its rejection is implied, however, by his assumption that extrametricality never prevents syllabification. Unfortunately, he only justifies the assumption as it relates to extrametrical consonants, but there do seem to be good reasons for applying it to extrametrical moras as well.

The analysis presents some fairly obvious problems in CV-final words. When the final mora and its associated vowel remain unsyllabified, there are essentially two options for dealing with any consonants that would otherwise be part of the final vowel’s onset. First, they might be left stray, as in (25a.i), in which case they would be subject to Stray Erasure and deleted (see §6.3). The result would be a language where final CVC and CVV sequences are preserved and their vowels stressed but final CV sequences have their consonants deleted and their vowels left stressless. To my knowledge, such an outcome is unattested. Second, preceding consonants might be incorporated into the preceding syllable as a coda, as in (25a.ii). The result would be a language where final CVV and CVC sequences always have their consonants syllabified as onsets and their vowels stressed but final CV sequences always have their consonants syllabified as codas and their vowels left stressless. To my knowledge, this outcome is also unattested.6

There is an additional, primarily theoretical, reason for rejecting extrasyllabic moras, not only in the particular situation under consideration, but in all situations. Moras are unique in that the primary motivation for including them in the prosodic hierarchy in the first place is to provide an effective representation of syllable weight, a function that cannot be performed outside the syllable. Neither of the options that might be used to achieve the desired results under an extrametricality approach, then, appears to be viable.

6 Similar arguments can be made against proposals that involve extrasyllabic moras acting as a sort of prosodic licenser for otherwise stray segments in order to protect them from deletion through Stray Erasure (Downing 1993; Everett 1996). Although such licensing has only been employed at the left edge, there would seem to be nothing to prevent it applying at the right edge, as well, leading to the results illustrated in (25).

5 Final consonants

Evolved from proposals by Mohanan (1979) and Hayes (1980), traditional consonant extrametricality rules prevent word-final consonants from having moraic status and, therefore, from contributing to the weight of final syllables. The result is that final syllables that end in a consonant are lighter than we would otherwise expect. As (26) illustrates, final CV and CVV are unaffected. Final CVC, CVVC, and CVCC, however, are all lighter than they would be otherwise. Final CVC, normally bimoraic, emerges as monomoraic and counts as light. Final CVVC and CVCC, normally trimoraic, emerge as bimoraic and count as heavy.

(26) Weight contrasts under consonant extrametricality
a. Light syllables
   i.  μ           ii.  μ
       CV               CV〈C〉
b. Heavy syllables
   i.  μμ          ii.  μμ          iii.  μμ
       CVV              CVV〈C〉            CVC〈C〉

Among the languages that have been argued to exhibit consonant extrametricality are English (Hayes 1982), various dialects of Arabic (McCarthy 1979; Hayes 1995), Ancient Greek (Steriade 1988), Spanish (Harris 1983), and Estonian (Hint 1973; Prince 1980). Examples from Estonian are provided in (27).

(27) Final syllables in Estonian
a. ˈkavaˌlatt    ‘cunning’
b. ˈpaheˌmait    ‘worse (part pl)’
c. ˈpimestav     ‘blinding’
d. ˈpimesˌtavale ‘blinding (ill sg)’

Like Wergaia, Estonian automatically stresses every odd-numbered syllable except the final syllable. Final syllables are stressed only if they are heavy, as in (27a) and (27b). When a final syllable is light, as in (27c) and (27d), it is unstressed. Since final CVV, CVVC, and CVCC are always stressed, they must pattern together in counting as heavy. Since final CV and CVC are always stressless, they must pattern together in counting as light. This is exactly the division predicted by consonant extrametricality.

Since moras are not stress peaks, non-finality cannot directly prohibit moras from associating with final consonants. Non-finality can only affect a final consonant’s moraic status by referring to a stress peak that coincides with moras. The success of a non-finality approach, however, depends crucially on the representation of stress peaks. Under head-based non-finality, no stress peak coincides with moras generally. A mora coincides with a stress peak only if it is a head mora, and banning head moras from final position does not ban all moras.7 In contrast, assuming that moras map to the base-level of the grid, there are stress peaks that coincide with moras generally under grid-based non-finality. By prohibiting base-level gridmarks from occurring over prosodic word-final consonants, non-finality can prevent final consonants from associating with moras.

To illustrate, when it is more important for final consonants to avoid associating with base-level gridmarks than it is for coda consonants to be moraic, final consonants will give up their moraic status to avoid associating with base-level gridmarks. Final CVC syllables emerge as monomoraic and light, and final CVVC and CVCC syllables emerge as bimoraic and heavy, resulting in the same weight distinctions among final syllables as those created by consonant extrametricality.

7 As an anonymous reviewer points out, the claim that head-based non-finality cannot prevent final consonants from being moraic depends on the assumption that moras – unlike the higher prosodic categories – do not have heads. If moras have head segments, as argued by de Lacy (1997), then final consonants might be prevented from being moraic by prohibiting head segments from being final in the prosodic word. There are several arguments against this approach, however, one of which is that segments are not constituents of moras in the usual sense. It is often the case that multiple moras are associated with single segments. In such cases, not only would each mora have exactly the same single constituent, but it would also have exactly the same head. Neither situation is tolerated at higher prosodic levels, even in fairly permissive theories, like Hyde (2002), that allow prosodic categories to share constituents.

(28) Consonantal non-finality preferences
a.   x               x x
     μ        ››     μ μ
     CVC             CVC
b.   x x             x x x
     μ μ      ››     μ μ μ
     CVVC            CVVC
c.   x x             x x x
     μ μ      ››     μ μ μ
     CVCC            CVCC

Given its parsability focus, then, the extrametricality analysis is the most straightforward for cases like Estonian. Since it makes final consonants invisible to the process of mora assignment, consonant extrametricality produces the desired weight distinctions in a fairly direct fashion. While it is also possible to provide a non-finality analysis, it is only possible to do so with a grid-based approach. For additional discussion of this and other issues concerning final consonants, see chapter 36: final consonants.
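The weight divisions in (26)–(28) can be stated as a small weight function: trim the word-final consonant before counting moras, and a final syllable is stressable just in case two moras remain. The C/V-string representation, the one-onset assumption, and the helper name are illustrative conveniences:

def final_weight(syllable):
    """Mora count of a word-final syllable under consonant extrametricality."""
    if syllable.endswith("C"):
        syllable = syllable[:-1]             # 〈C〉 is invisible to mora assignment
    vowels = syllable.count("V")             # every vowel projects a mora
    codas = max(syllable.count("C") - 1, 0)  # one C is the onset; the rest are moraic codas
    return vowels + codas

for shape in ["CV", "CVC", "CVV", "CVVC", "CVCC"]:
    print(shape, "heavy (stressable)" if final_weight(shape) >= 2 else "light")
# CV light, CVC light, CVV heavy, CVVC heavy, CVCC heavy: the Estonian division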

6 The classic arguments

We turn now to some of the classic arguments marshaled in support of extrametricality and briefly consider whether or not they also provide support for non-finality. Below we consider three of extrametricality’s traditional uses: establishing trisyllabic stress windows, helping to capture generalizations about the stress patterns of different lexical classes, and helping to provide a general account of the deletion of unsyllabifiable segments.

6.1 Eliminating ternary foot templates

In many languages, a word’s final three syllables form a domain that is crucial in creating the appropriate stress pattern. The most direct option for creating such a domain – establishing it with a trisyllabic foot – has the disadvantage of making it necessary to expand the foot inventory beyond the well-motivated binary templates to include less well-motivated ternary templates. As Hayes (1980) demonstrates, a less direct extrametricality approach allows us to maintain the smaller inventory. It allows the theory to create trisyllabic windows using a binary foot followed by an unparsed syllable.

Consider the stress pattern of Latin. In Latin words of at least three syllables, stress falls on either the antepenult or penult, depending on the weight of the latter. If the penult is heavy, it is stressed; otherwise, the antepenult is stressed.

(29) Trisyllabic stress window in Latin
a. LˈHH   aˈmiːkus      ‘friend (nom sg masc)’
b. LHˈHH  moneːˈbaːmus  ‘warn (1pl imperf indic act)’
c. LˈLLH  koˈmitium     ‘the election site in the forum (nom sg)’
d. LˈHLH  doˈmestikus   ‘domestic (nom sg masc)’

Without extrametricality, the Latin pattern requires the quantity-sensitive ternary template (σ́ L σ) to establish the appropriate trisyllabic domain at the right edge. When the penult is light, the template is used to construct a ternary foot at the right edge of the word, resulting in antepenultimate stress. When the penult is heavy, however, the template allows only a binary foot, resulting in penultimate stress.

(30) Ternary foot analysis
a. do.mes.ti.kus → do(ˈmes.ti.kus) (parsing)
b. mo.neː.baː.mus → mo.neː(ˈbaː.mus) (parsing)

Extrametricality makes the ternary template unnecessary, allowing the trisyllabic domain to be formed with an unparsed syllable and a maximally disyllabic foot. The unparsed syllable is the result of syllable extrametricality. The maximally disyllabic foot is produced with the quantity-sensitive template (σ́ L). If the penult is light, as in (31a), the template allows for a disyllabic foot at the right edge. In combination with the extrametrical syllable, the result is stress on the antepenult. If the penult is heavy, however, as in (31b), the template only allows for a monosyllabic foot, resulting in stress on the penult.

(31) Extrametricality analysis
a. do.mes.ti.kus → do.mes.ti〈kus〉 (extrametricality) → do(ˈmes.ti)〈kus〉 (parsing)
b. mo.neː.baː.mus → mo.neː.baː〈mus〉 (extrametricality) → mo.neː(ˈbaː)〈mus〉 (parsing)
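A compact procedural rendering of (31), under the obvious simplifications (weights supplied as 'H'/'L' strings, words of three or more syllables, no secondary stress):

def latin_stress(weights):
    """Index of the stressed syllable: final syllable extrametrical, (σ́ L) at the right edge."""
    visible = weights[:-1]              # syllable extrametricality: 〈ultima〉
    if visible[-1] == "H":
        return len(visible) - 1         # heavy penult: monosyllabic foot ('H)
    return len(visible) - 2             # light penult: disyllabic foot ('σ L)

print(latin_stress("LHLH"))  # 1: doˈmestikus (light penult, antepenultimate stress)
print(latin_stress("LHHH"))  # 2: moneːˈbaːmus (heavy penult, penultimate stress)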

As Prince and Smolensky (1993) demonstrate, a head-based non-finality approach can also construct trisyllabic domains from a binary foot and an unparsed syllable. When it is more important for the head foot to avoid final position than it is for the head foot to occur as far to the right as possible, the desired pattern emerges.

(32) Head-based non-finality analysis
non-finality:
a. do(ˈmes.ti)kus ›› do.mes(ˈti.kus)
b. mo.neː(ˈbaː)mus ›› mo.neː.baː(ˈmus)

With the head foot pushed back from the right edge by non-finality, a disyllabic foot positions stress on the antepenult when the penult is light, and a monosyllabic foot positions it on the penult when the penult is heavy. In the case of Latin, extrametricality and head-based non-finality have a very similar effect. They both result in the final syllable being left unparsed. The similarity arises because the stress peak that must avoid final position in the non-finality analysis happens to be a foot, the head foot of the prosodic word. If the head foot must be the rightmost foot but cannot be final, then the final syllable must remain unfooted, the very situation demanded when a final syllable is made extrametrical.

Two points should be kept in mind, however. The first is that grid-based non-finality is unable to produce this same result. Since stress peaks do not double as prosodic constituents, grid-based non-finality cannot require that final syllables remain unfooted.8 Second, as Hyde (2008) points out, even head-based non-finality does not offer a general approach to trisyllabic stress windows. It is unable to produce the stress window of Macedonian (Comrie 1976), for example. An alignment-based analysis actually provides a more successful general approach. For a discussion of ternary stress intervals more generally, not just those limited to word edges, see chapter 52: ternary rhythm.

8 As an anonymous reviewer points out, whether or not grid-based non-finality can prevent the final syllable from being footed depends on the particular structures that are assumed to be the constituents of feet. If feet are actually built on base-level gridmarks, rather than syllables, preventing the final syllable from mapping to a base-level gridmark would also prevent it from being footed. The grid-based non-finality approach presented here, however, assumes that metrical structure and prosodic structure are independent, so that feet are built on syllables. Under this approach, the failure of a final syllable to map to the grid would not prevent it from being footed.

6.2 Similarities between lexical classes

In many languages, one class of lexical items exhibits one stress pattern, while a different class exhibits a slightly different pattern. In many cases, the difference can be reduced to an extrametricality effect that one class exhibits and the other does not. Once the extrametricality effect is recognized, the similarities between the patterns become apparent, and it is possible to address both with a more unified approach. English (Hayes 1982), Spanish (Harris 1983), and Yawelmani (Archangeli 1984) are among the languages where extrametricality has played an important role in this context. English is used to illustrate below.

At first glance, English verbs and nouns seem to have very different stress patterns. In verbs, the position of stress depends on the shape of the final syllable. If the ultima is CVV, CVVC, or CVCC, the ultima is stressed. If the ultima is CV or CVC, the penult is stressed.

(33) English verbs
    oˈbey       deˈvelop
    aˈtone      asˈtonish
    torˈment

In nouns, the position of stress depends on the weight of the penult. If the penult is heavy, stress appears on the penult. Otherwise, it appears on the antepenult.9

9 This generalization applies to English nouns with a stressless final syllable. Nouns with final stress must be treated differently.

(34) English nouns
    aˈgenda     Aˈmerica
    eˈlitist    ˈdiscipline
    Ariˈzona    ˈlabyrinth

As Hayes (1982) demonstrates, the difference between verbs and nouns is that they show the effects of two different types of extrametricality. The verb pattern is influenced by consonant extrametricality, the evidence being the characteristic weight distinctions among final syllables (see §5). The noun pattern is influenced by syllable extrametricality, the evidence being the presence of a trisyllabic stress window (see §6.1).

Once we allow for the two different types of extrametricality, the correct patterns emerge for both verbs and nouns when we use the quantity-sensitive binary template (σ́ L) to construct a foot at the right edge. In verbs, the (σ́ L) template positions stress on the penult when the ultima emerges as light, once the effects of consonant extrametricality have been taken into account. It positions stress on the ultima when the ultima emerges as heavy.

(35) English verbs and consonant extrametricality
a.   μ  μ  μ             (x   )
     de.ve.lo〈p〉    →     μ  μ  μ
                         de.ve.lo〈p〉
b.   μμ  μμ                  (x )
     tor.men〈t〉     →    μμ  μμ
                         tor.men〈t〉

Once syllable extrametricality excludes final syllables from the foot layer in nouns, the same (σ́ L) template positions stress on the antepenult when the penult is light. It positions stress on the penult when the penult is heavy.

(36) English nouns and syllable extrametricality
a.     ( x   )
     A. me.ri 〈ca〉
b.       (x )
     a. gen 〈da〉


Extrametricality, then, allows us to extract the aspects of the English verb and noun patterns that differ, in order to capture the similarities in a single general stress rule. The analysis consists of two independently motivated extrametricality rules, the source of the differences, and a single, general stress rule, the source of the similarities. If extrametricality were unavailable, we would be forced to incorporate its effects directly into separate stress rules for verbs and nouns, making both that much more complicated and obscuring the similarities between the patterns.

It is not a straightforward matter to reproduce the extrametricality analysis in this case with a non-finality analysis. As mentioned in §6.1, head-based non-finality can produce the type of stress window found in Latin and in English nouns, but grid-based non-finality cannot. As mentioned in §5, however, grid-based non-finality can reproduce the consonant extrametricality effect seen in English verbs, but head-based non-finality cannot. Although non-finality could, in principle, help to capture similarities between the stress patterns of different lexical classes, then, its success depends very much on the facts of the particular case.
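The division of labor just described, two extrametricality settings feeding one general stress rule, can be sketched as follows. The syllable-shape representation and the weight heuristic (a single onset consonant assumed) are illustrative conveniences, not Hayes’s formal rules:

def english_stress(shapes, category):
    """Index of the main stress; shapes is a list of C/V syllable shapes."""
    syls = list(shapes)
    if category == "noun" and len(syls) > 1:
        syls = syls[:-1]                   # syllable extrametricality: 〈ultima〉
    elif category == "verb" and syls[-1].endswith("C"):
        syls[-1] = syls[-1][:-1]           # consonant extrametricality: ...〈C〉
    heavy = lambda s: s.count("V") + max(s.count("C") - 1, 0) >= 2
    # the single general rule: one ('σ L) foot at the right edge
    if heavy(syls[-1]) or len(syls) == 1:
        return len(syls) - 1
    return len(syls) - 2

print(english_stress(["CV", "CV", "CVC"], "verb"))      # 1: deˈvelop
print(english_stress(["CVC", "CVCC"], "verb"))          # 1: torˈment
print(english_stress(["V", "CVC", "CV"], "noun"))       # 1: aˈgenda
print(english_stress(["V", "CV", "CV", "CV"], "noun"))  # 1: Aˈmerica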

6.3 Licensing segments

Itô (1986) puts extrametricality to a use that is quite different from those discussed thus far. In the types of effects discussed above, extrametricality makes a domain-final constituent invisible to rules that create prosodic structure. Itô, however, uses extrametricality to make domain-final segments invisible to Stray Erasure (Harris 1983), a rule that deletes unsyllabified segments. The result is a theory of syllabification that relies on general, rather than idiosyncratic, deletion rules.

As a simple illustration, consider deletions that occur as part of the syllabification process in Diola Fogny (Sapir 1965). Diola prefers not to syllabify obstruents as codas. As seen in (37a)–(37c), a medial obstruent that would otherwise be syllabified as a coda ends up being deleted instead. The preference to avoid obstruent codas seems to be thwarted at the right edge of the word, however, as seen in (37d). Final obstruents are not deleted, even though they cannot be syllabified as anything other than a coda.

(37) Obstruent deletion in Diola Fogny
a. letkuɟaw → lekuɟaw   ‘they won’t go’
b. uɟukɟa → uɟuɟa       ‘if you see’
c. kobkoben → kokoben   ‘yearn, long for’
d. kuɲilak → kuɲilak    ‘the children’

Extrametricality accounts for the different treatment of final and medial obstruents. In the lexical phonology, Diola’s coda condition prevents obstruents from being syllabified if they would syllabify as codas. Stray Erasure then deletes any segment that remains unsyllabified and has not been designated as extrametrical. Since medial consonants cannot be designated as extrametrical – due to the Peripherality restriction – medial obstruents that fail to syllabify are always deleted, as in (38a). Since final consonants can be designated as extrametrical, however, final obstruents are invisible to Stray Erasure and escape deletion, as in (38b), even though they are not attached to a syllable. The extrametrical consonant is syllabified later in the post-lexical phonology where the coda condition does not apply.

(38)
   syllabification (lexical)    extrametricality     stray erasure       syllabification (post-lexical)
a. [le]t[ku][ɟaw]           →   [le]t[ku][ɟaw]   →   [le][ku][ɟaw]   →   [le][ku][ɟaw]
b. [ku][ɲi][la]k            →   [ku][ɲi][la]〈k〉  →   [ku][ɲi][la]〈k〉 →   [ku][ɲi][lak]

In this case, then, extrametricality accounts for an asymmetry in the deletion of medial and final obstruents, making it possible to avoid an idiosyncratic deletion rule that targets medial consonants specifically. Since it does not seem to be connected to stress peaks, at least not in any direct way, it is not immediately clear how a non-finality approach could replicate this type of segmental licensing effect.
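The derivational story in (38) can be approximated with a single linear scan. The obstruent inventory and the coda test below are crude stand-ins for Diola’s actual coda condition, and Stray Erasure is collapsed into the same pass:

OBSTRUENTS = set("ptkbdgfsɟ")
SONORANTS = set("mnɲŋlrwj")

def diola(word):
    """Delete unsyllabifiable medial obstruents; the final consonant is extrametrical."""
    out = []
    for i, seg in enumerate(word):
        final = i == len(word) - 1         # a final consonant escapes Stray Erasure
        would_be_coda = (seg in OBSTRUENTS and not final
                         and word[i + 1] in OBSTRUENTS | SONORANTS)
        if not would_be_coda:              # stray obstruent codas are erased
            out.append(seg)
    return "".join(out)

print(diola("letkuɟaw"))   # lekuɟaw
print(diola("kobkoben"))   # kokoben
print(diola("kuɲilak"))    # kuɲilak (final k survives, as in (37d))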

7 The edge asymmetry

Shortly after extrametricality’s introduction, it became clear that the vast majority of phenomena that might be analyzed in terms of extrametricality occur at the right edge of the relevant domain. It is for this reason that Hayes (1980) proposed the Edge Markedness restriction on extrametricality. As its name implies, the asymmetry is more absolute under the non-finality approach. In addition to the distributional evidence, two arguments have emerged to support this more absolute view. First, initial extrametricality and “non-initiality” do not have the same strong phonetic and rhythmic motivations as their right edge counterparts. Second, the inclusion of initial extrametricality or “non-initiality” in the grammar results in a significant decline in the accuracy of typological predictions.

7.1 Rhythmic and phonetic evidence

In searching for potential phonetic motivations, Lunden (2007) connects final stress avoidance to phonetic final lengthening. Since phonetic lengthening also occurs in initial syllables, we might expect stress to avoid initial syllables as well. Upon closer inspection, however, this expectation rests on very shaky ground. While the characteristics of final lengthening are more compatible with the characteristics of stresslessness, the characteristics of initial lengthening are in fact more compatible with the characteristics of stress.

First, consider the typical characteristics of final and initial lengthening. Oller (1973) and Wightman et al. (1992), amongst numerous others, report that final lengthening typically affects all rhyme segments to some degree, is often associated with decline in amplitude and devoicing, and is often cumulative when multiple prosodic boundaries coincide. In contrast, Oller (1973) and Keating et al. (2003), amongst others, report that initial lengthening is typically limited to the initial segment, is often associated with longer voice onset time and aspiration, and is less typically cumulative when multiple boundaries coincide.

Now consider the typical characteristics of stressed syllables. Lieberman (1960), Beckman (1986), and Gordon (2002), amongst others, report that stressed syllables often exhibit increased duration in the rhyme, increased intensity in the rhyme, and fortition, lengthening, or aspiration of the onset. The fact that stressed syllables often have a longer rhyme might make them seem more compatible with final lengthening. The fact that intensity declines in the rhyme under phonetic final lengthening but increases under stress, however, suggests that this is really not the case. The increased intensity in the rhyme and the strengthening of the onset makes stress more compatible with initial lengthening.

Based on a parallel phenomenon in music (Gabrielsson 1987, 1993), Hyde (2009) suggests that different types of tempo changes at prosodic boundaries might account for the different characteristics of initial and final lengthening. Initial lengthening is the result of a strong attack and acceleration to medial tempo, while final lengthening is the result of a deceleration from medial tempo. An initial acceleration results in strengthening of initial segments and increased intensity in initial syllables, characteristics consistent with stress. A final deceleration results in declining intensity in final rhymes, a characteristic consistent with stresslessness.

7.2 Stress typologies

The second line of evidence against initial extrametricality and “non-initiality” is that they result in a decline in the accuracy of typological predictions (Hyde 2002; Altshuler 2009). Consider, for example, the iambic patterns of Aguaruna and Choctaw, discussed in §2. They emerge when rightward binary alternation of unstressed and stressed syllables is perturbed at the right edge of even-parity forms in order to avoid final stress. Aguaruna avoids final stress by shifting it one syllable to the left, and Choctaw avoids it simply by not assigning it.

(39)
a. Unattested       b. Aguaruna         c. Unattested       d. Choctaw
   σ σ́ σ́ σ σ́ σ         σ σ́ σ σ́ σ́ σ         σ σ σ́ σ σ́ σ         σ σ́ σ σ́ σ σ
   σ σ́ σ σ́ σ σ́ σ       σ σ́ σ σ́ σ σ́ σ       σ σ́ σ σ́ σ σ́ σ       σ σ́ σ σ́ σ σ́ σ

Although the trochaic mirror images of these patterns are both unattested, they would be predicted to occur if leftward binary alternation of unstressed and stressed syllables could be perturbed at the left edge in order to avoid initial stress. Among the attested binary patterns in general, final stress avoidance is often a reason to perturb binary alternation, but initial stress avoidance is not. Including a principle of initial stress avoidance in the grammar, then, would only result in the prediction of unattested patterns.

The only requirement for initial syllables that produces attested patterns is a requirement that initial syllables be stressed. For example, in the trochaic Passamaquoddy (LeSourd 1993) and Garawa (Furby 1974) patterns, an initial stress requirement perturbs leftward binary alternation at the left edge.

(40)
a. Passamaquoddy    b. Unattested       c. Garawa           d. Unattested
   σ́ σ σ́ σ σ́ σ         σ σ́ σ σ́ σ σ́         σ́ σ σ́ σ σ́ σ         σ σ́ σ σ́ σ σ́
   σ́ σ́ σ σ́ σ σ́ σ       σ σ́ σ σ́ σ σ́ σ́       σ́ σ σ σ́ σ σ́ σ       σ σ́ σ σ́ σ σ σ́


Not coincidentally, given the repulsion of stress by final syllables, the iambic mirror images of these patterns are both unattested. Both final stresslessness and initial stress, then – the two aspects of the asymmetry suggested by the phonetic and rhythmic considerations discussed above – are confirmed by the typology of binary stress patterns.

7.3 Potential counterexamples

While the vast majority of extrametricality and non-finality effects have been found at the right edges of prosodic domains, a few languages have been argued to exhibit extrametricality effects at the left edge. In most such cases, however, alternative analyses are readily available. Halle and Vergnaud (1987), for example, attribute the unstressability of initial vowels in Western Aranda to initial segment extrametricality. Subsequent research, however, has resulted in a number of alternative analyses of Western Aranda and similar languages, analyses that do not require initial extrametricality or non-initiality. Typically, they require the left edge of an appropriate prosodic structure to align with a consonant, preventing initial vowels from being included in that structure and, therefore, from being stressed. In Goedemans (1996), for example, the left edges of feet must align with a consonant. This prevents the initial vowel from being footed and, therefore, from being stressed. In Hyde (2007a), it is the left edges of head syllables that must align with a consonant; in Downing (1998), it is the left edges of prosodic words. Smith’s (2002) approach simply requires stressed syllables to have onsets. As a second example, in the stress patterns of Winnebago (Miner 1979; Hale and White Eagle 1980) and Kashaya (Oswalt 1961, 1988; Buckley 1992) the primary stress in a form is the leftmost stress, but it typically does not appear until the third syllable. Since this ternary interval is characteristic of both even- and odd-parity forms, the most straightforward analysis is to establish a trisyllabic stress window at the left edge of the word. An initial extrametricality approach could establish the stress window by making the initial syllable extrametrical and then constructing a maximally disyllabic foot just to the right of the initial syllable. This is not necessarily strong evidence for initial extrametricality, however. As mentioned in §6.1, there are alternative approaches to trisyllabic stress windows in the literature, some of them addressing a greater variety of windows than is possible with extrametricality.

8 Summary

Extrametricality and non-finality have much in common. Both deal with peripheral positions in a domain, both deal primarily with the right edge of the domain, and both often result in final stresslessness. An important difference, however, is that extrametricality focuses on constituent parsability, while non-finality focuses on the position of stress peaks. Extrametricality rules typically prevent some domain-final constituent from being parsed into higher prosodic structure; non-finality constraints typically prevent a stress peak from occurring in some domain-final position. While they have been used to address many of the same phenomena, the difference in focus ensures that they do not address all types with equal success.


In §2 and §3, we saw that extrametricality and non-finality provide equally effective analyses of situations where stress avoids larger final constituents like syllables and feet. In situations where stress is avoided on final syllables, an expected final stress either arrives early or is absent altogether. An extrametricality analysis achieves the desired effect by excluding the final syllable from the foot layer, a non-finality analysis by prohibiting head syllables in final position or by prohibiting foot-level gridmarks over final syllables. In situations where primary stress avoids final feet, the primary stress emerges as the penultimate stress. An extrametricality analysis excludes the final foot from the prosodic word; non-finality either prohibits head feet in final position or prohibits prosodic-word level gridmarks over final feet. In contrast, extrametricality and non-finality do not perform equally well in accounting for phenomena involving smaller final constituents. In §4, we saw how the avoidance of stress on word-final moras makes stress sensitive to the weight of word-final syllables, how the avoidance of stress on foot-final moras results in iambic lengthening, and how the avoidance of stress on syllable-final moras results in general weight-sensitivity, iambic lengthening, and trochaic lengthening. In these cases, a non-finality analysis is much more straightforward than an extrametricality analysis. With its stress-peak focus, non-finality can prohibit stress on domain-final moras directly. With its parsability focus, however, extrametricality can only prohibit stress on domain-final moras by excluding them from some higher prosodic structure, a requirement that seems impossible to implement without either violating syllable integrity or requiring moras to remain unsyllabified. In §5, we saw how the failure of final consonants to contribute to syllable weight affects the stressability of final syllables. Extrametricality achieves the desired result directly by making final consonants invisible to mora assignment. A grid-based non-finality approach achieves the desired result indirectly by prohibiting mora-level gridmarks – and, thus, the moras associated with them – from occurring over final consonants. A head-based non-finality approach, however, appears to be unable to capture the effect at all. In §6, we examined some of the classic arguments for extrametricality, focusing on trisyllabic stress windows and segmental licensing, and we considered the possibility of non-finality approaches. While head-based non-finality offers analyses for some types of trisyllabic windows, grid-based non-finality does not. Recent alternative proposals for a general approach to stress windows, however, make non-finality’s limitations in this area less problematic. With respect to segmental licensing, it is not clear that a non-finality approach is even possible. Finally, §7 outlined the evidence for the edge asymmetry in extrametricality and non-finality formulations. First, the types of effects analyzable in terms of extrametricality or non-finality occur almost exclusively at right edges. Second, phonetic and rhythmic considerations motivate stresslessness in final positions, but they actually motivate stress in initial position. Third, the inclusion of initial extrametricality or non-initiality in the grammar negatively impacts the accuracy of typological predictions.

REFERENCES

Abbott, Stan. 1985. A tentative multilevel multiunit phonological analysis of the Murik language. Papers in New Guinea Linguistics 22. 339–373.
Altshuler, Daniel. 2009. Quantity-insensitive iambs in Osage. International Journal of American Linguistics 75. 365–398.
Archangeli, Diana. 1984. Extrametricality in Yawelmani. The Linguistic Review 4. 101–120.
Beckman, Mary E. 1986. Stress and non-stress accent. Dordrecht: Foris.
Blanc, Haim. 1970. The Arabic dialect of the Negev Bedouins. Proceedings of the Israeli Academy of Science and Humanities 4. 112–150.
Buckley, Eugene. 1992. Theoretical aspects of Kashaya phonology and morphology. Ph.D. dissertation, University of California, Berkeley.
Buller, Barbara, Ernest Buller & Daniel L. Everett. 1993. Stress placement, syllable structure, and minimality in Banawá. International Journal of American Linguistics 59. 280–293.
Chafe, Wallace L. 1977. Accent and related phenomena in the Five Nations Iroquois languages. In Larry M. Hyman (ed.) Studies in stress and accent, 169–181. Los Angeles: Department of Linguistics, University of Southern California.
Comrie, Bernard. 1976. Irregular stress in Polish and Macedonian. International Review of Slavic Linguistics 1. 227–240.
de Lacy, Paul. 1997. Prosodic categorisation. M.A. thesis, University of Auckland (ROA-236).
Derbyshire, Desmond C. 1985. Hixkaryana and linguistic typology. Dallas: Summer Institute of Linguistics & University of Texas at Arlington.
Downing, Laura J. 1993. Unsyllabified vowels in Aranda. Papers from the Annual Regional Meeting, Chicago Linguistic Society 29. 171–185.
Downing, Laura J. 1998. On the prosodic misalignment of onsetless syllables. Natural Language and Linguistic Theory 16. 1–52.
Everett, Daniel L. 1996. Prosodic levels and constraints in Banawá and Suruwaha. Unpublished ms., University of Pittsburgh (ROA-121).
Everett, Daniel L. 1997. Syllable integrity. Proceedings of the West Coast Conference on Formal Linguistics 16. 177–190.
Everett, Daniel L. 2003. Iambic feet in Paumari and the theory of foot structure. Linguistic Discovery 2(1). 22–44.
Foster, Michael. 1982. Alternating weak and strong syllables in Cayuga words. International Journal of American Linguistics 48. 59–72.
Furby, Christine. 1974. Garawa phonology. (Pacific Linguistics A37.) Canberra: Australian National University.
Gabrielsson, Alf. 1987. Once again: The theme from Mozart’s “Piano sonata in A major” (K. 331). A comparison of five performances. In Alf Gabrielsson (ed.) Action and perception in rhythm and music, 81–103. Stockholm: Royal Swedish Academy of Music.
Gabrielsson, Alf. 1993. The complexities of rhythm. In Thomas J. Tighe & W. Jay Dowling (eds.) Psychology and music: Understanding of melody and rhythm, 94–120. Hillsdale, NJ: Lawrence Erlbaum.
Goddard, Ives. 1979. Delaware verbal morphology: A descriptive and comparative study. New York: Garland.
Goddard, Ives. 1982. The historical phonology of Munsee. International Journal of American Linguistics 48. 16–48.
Goedemans, Rob. 1996. An optimality account of onset-sensitive stress in quantity-insensitive languages. The Linguistic Review 13. 33–47.
Gordon, Matthew. 2002. A phonetically driven account of syllable weight. Language 78. 51–80.
Hale, Kenneth & Josie White Eagle. 1980. A preliminary metrical account of Winnebago accent. International Journal of American Linguistics 46. 117–132.
Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA: MIT Press.
Hammond, Michael & Emmanuel Dupoux. 1996. Psychophonology. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, vol. 1, 274–297. Salford: ESRI.
Harris, James W. 1983. Syllable structure and stress in Spanish: A nonlinear analysis. Cambridge, MA: MIT Press.
Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT. Published 1985, New York: Garland.
Hayes, Bruce. 1982. Extrametricality and English stress. Linguistic Inquiry 13. 227–276.
Hayes, Bruce. 1985. Iambic and trochaic rhythm in stress rules. Proceedings of the Annual Meeting, Berkeley Linguistics Society 11. 429–446.
Hayes, Bruce. 1987. A revised parametric metrical theory. Papers from the Annual Meeting, North East Linguistics Society 17. 274–289.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hercus, Luise A. 1986. Victorian languages: A late survey. (Pacific Linguistics B77.) Canberra: Australian National University.
Hint, Mati. 1973. Eesti keele sõnafonoloogia I. Tallinn: Eesti NSV Teaduste Akadeemia.
Hoff, Berend J. 1968. The Carib language: Phonology, morphonology, morphology, texts and word index. The Hague: Martinus Nijhoff.
Hung, Henrietta. 1994. The rhythmic and prosodic organization of edge constituents. Ph.D. dissertation, Brandeis University (ROA-24).
Hyde, Brett. 2002. A restrictive theory of metrical stress. Phonology 19. 313–359.
Hyde, Brett. 2003. Nonfinality. Unpublished ms., Washington University in St Louis (ROA-633).
Hyde, Brett. 2006. Towards a uniform account of prominence-sensitive stress. In Eric Baković, Junko Itô & John McCarthy (eds.) Wondering at the natural fecundity of things: Essays in honor of Alan Prince, 139–183. Santa Cruz: Linguistics Research Center, University of California, Santa Cruz. http://repositories.cdlib.org/lrc/prince/8.
Hyde, Brett. 2007a. Issues in Banawá prosody: Onset sensitivity, minimal words, and syllable integrity. Linguistic Inquiry 38. 239–285.
Hyde, Brett. 2007b. Non-finality and weight-sensitivity. Phonology 24. 287–334.
Hyde, Brett. 2008. Alignment continued: Distance-sensitivity, order-sensitivity, and the Midpoint Pathology. Unpublished ms., Washington University in St Louis (ROA-998).
Hyde, Brett. 2009. The rhythmic foundations of Initial Gridmark and Nonfinality. Papers from the Annual Meeting, North East Linguistics Society 38(1). 397–410.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts, Amherst.
Kager, René. 1993. Alternatives to the iambic–trochaic law. Natural Language and Linguistic Theory 11. 381–432.
Kager, René. 1995. Review article of Hayes (1995). Phonology 12. 437–464.
Keating, Patricia, Taehong Cho, Cécile Fougeron & Chai-Shune Hsu. 2003. Domain-initial articulatory strengthening in four languages. In John Local, Richard Ogden & Rosalind Temple (eds.) Phonetic interpretation: Papers in laboratory phonology VI, 145–163. Cambridge: Cambridge University Press.
Kenstowicz, Michael. 1983. Parametric variation and accent in the Arabic dialects. Papers from the Annual Regional Meeting, Chicago Linguistic Society 19. 205–213.
Kenstowicz, Michael. 1995. Cyclic vs. non-cyclic constraint evaluation. Phonology 12. 397–436.
Kenstowicz, Michael & Kamal Abdul-Karim. 1980. Cyclic stress in Levantine Arabic. Studies in the Linguistic Sciences 10(2). 55–76.
Knudson, Lyle M. 1975. A natural phonology and morphophonemics of Chimalapa Zoque. Papers in Linguistics 8. 283–346.
LeSourd, Philip S. 1993. Accent and syllable structure in Passamaquoddy. New York: Garland.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8. 249–336.
Lieberman, Philip. 1960. Some acoustic correlates of word stress in American English. Journal of the Acoustical Society of America 32. 451–454.
Lorentz, Ove. 1996. Length and correspondence in Scandinavian. Nordlyd 24. 111–128.
Lunden, Anya. 2007. Weight, final lengthening and stress: A phonetic and phonological case study of Norwegian. Ph.D. dissertation, University of California, Santa Cruz (ROA-833).
McArthur, Henry & Lucille McArthur. 1956. Aguacatec (Mayan) phonemes within the stress group. International Journal of American Linguistics 22. 72–76.
McCarthy, John J. 1979. On stress and syllabification. Linguistic Inquiry 10. 443–465.
McCarthy, John J. 2003. OT constraints are categorical. Phonology 20. 75–138.
McCarthy, John J. & Alan Prince. 1993. Prosodic morphology I: Constraint interaction and satisfaction. Unpublished ms., University of Massachusetts, Amherst & Rutgers University.
Michelson, Karin. 1988. A comparative study of Lake-Iroquoian accent. Dordrecht: Kluwer.
Miner, Kenneth L. 1979. Dorsey’s Law in Winnebago-Chiwere and Winnebago accent. International Journal of American Linguistics 45. 25–33.
Mohanan, K. P. 1979. Word stress in Hindi, Malayalam, and Sindhi. Paper presented at MIT, as reported by Hayes (1980).
Nicklas, Thurston Dale. 1972. The elements of Choctaw. Ph.D. dissertation, University of Michigan, Ann Arbor.
Nicklas, Thurston Dale. 1975. Choctaw morphophonemics. In James M. Crawford (ed.) Studies in Southeastern Indian Languages, 237–250. Athens: University of Georgia Press.
Oller, D. K. 1973. The effect of position in utterance on speech segment duration in English. Journal of the Acoustical Society of America 54. 1235–1247.
Oswalt, Robert L. 1961. A Kashaya grammar (Southwestern Pomo). Ph.D. dissertation, University of California, Berkeley.
Oswalt, Robert L. 1988. The floating accent of Kashaya. In William Shipley (ed.) In honor of Mary Haas, 611–622. Berlin: Mouton de Gruyter.
Payne, David. 1990. Accent in Aguaruna. In Doris L. Payne (ed.) Amazonian linguistics: Studies in lowland South American languages, 161–184. Austin: University of Texas Press.
Payne, David, Judith Payne & Jorge Sanchez Santos. 1982. Morfología, fonología, y fonética del asheninca del Apurucayali. Pucallpa, Peru: Instituto Lingüístico de Verano.
Piggott, Glyne L. 1980. Aspects of Odawa morphophonemics. New York: Garland.
Piggott, Glyne L. 1983. Extrametricality and Ojibwa stress. McGill Working Papers in Linguistics 1. 80–117.
Prince, Alan. 1976. “Applying” stress. Unpublished ms., University of Massachusetts, Amherst.
Prince, Alan. 1980. A metrical theory for Estonian quantity. Linguistic Inquiry 11. 511–562.
Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19–100.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Sapir, Edward. 1930. Southern Paiute, a Shoshonean language. Proceedings of the American Academy of Arts and Sciences 65. 1–296.
Sapir, Edward. 1949. The psychological reality of phonemes. In David G. Mandelbaum (ed.) Selected writings of Edward Sapir in language, culture, and personality, 46–60. Berkeley: University of California Press.
Sapir, J. David. 1965. A grammar of Diola-Fogny. Cambridge: Cambridge University Press.
Smith, Jennifer. 2002. Phonological augmentation in prominent positions. Ph.D. dissertation, University of Massachusetts, Amherst.
Steriade, Donca. 1988. Greek accent: A case for preserving structure. Linguistic Inquiry 19. 271–314.
Wightman, Colin W., Stefanie Shattuck-Hufnagel, Mari Ostendorf & Patti J. Price. 1992. Segmental durations in the vicinity of prosodic phrase boundaries. Journal of the Acoustical Society of America 91. 1707–1717.

44 The Iambic–Trochaic Law

Brett Hyde

1 Introduction

In the development of metrical stress theory, several influential approaches (Hayes 1985, 1987, 1995; McCarthy and Prince 1986; Prince 1990) have employed the Iambic– Trochaic Law (ITL) to provide extralinguistic grounding for an account of the differences between iambic and trochaic stress systems (see also chapter 39: stress: phonotactic and phonetic evidence; chapter 40: the foot; chapter 41: the representation of word stress). The ITL, given in (1), is a statement about the naturalness of two types of rhythmic groupings in two different contexts. According to the ITL, sequences of elements that contrast in intensity most naturally divide into groups with trochaic prominence, and sequences of elements that contrast in duration most naturally divide into groups with iambic prominence. (1)

The Iambic–Trochaic Law (Hayes 1985, 1987)
a. Elements contrasting in intensity naturally form groupings with initial prominence.
b. Elements contrasting in duration naturally form groupings with final prominence.

For approaches to metrical stress based on the ITL, this difference in naturalness is responsible for the duration-related differences found in iambic and trochaic stress patterns. The ITL is based on a long tradition of experimental investigation into the perception of rhythmic grouping (Bolton 1894; Woodrow 1909). In the typical experiment, participants are asked to group a sequence of artificially created alternating sounds. The sounds alternate either in intensity, as in (2a), or in duration, as in (2b). (2)

a. ... o O o O o O o O o O o O o O o O o O o ...
b. ... – — – — – — – — – — – — – — – — – — – ...

The outcome, under certain circumstances, is that participants tend to divide intensity alternations into groups where the more intense element appears first, as in (3a), and they tend to divide duration alternations into groups where the longer element appears second, as in (3b).1 The ITL is essentially a statement of these results. (3)

a. Intensity contrasts: Left-prominent groupings
   . . . [O o][O o][O o][O o][O o][O o][O o][O o][O o] . . .
b. Duration contrasts: Right-prominent groupings
   . . . [– —][– —][– —][– —][– —][– —][– —][– —] . . .
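The grouping procedure behind (2) and (3) can be illustrated in a few lines of code. The sketch below is my own, under simplifying assumptions (character strings stand in for the stimuli; the function name is invented), and is not drawn from the experimental literature it summarizes.

```python
# A minimal sketch of the ITL groupings in (3): pair an alternating sequence so
# that the prominent element ('O' for loud, '—' for long) sits at the stated edge.

def group_pairs(seq, prominent, position='initial'):
    """Pair off seq so each pair has the prominent element at the given edge."""
    idx = 0 if seq[0] == prominent else 1        # index of the first prominent element
    start = idx if position == 'initial' else max(idx - 1, 0)
    return [seq[i:i + 2] for i in range(start, len(seq) - 1, 2)]

print(group_pairs(list('oOoOoOoO'), 'O', 'initial'))  # trochaic: [['O','o'], ...]
print(group_pairs(list('–—–—–—–—'), '—', 'final'))    # iambic:   [['–','—'], ...]
```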

Though the ITL is an extralinguistic principle, it seems to be reflected in the stress patterns of numerous languages, suggesting, at least initially, that it plays an important role in shaping them. For example, many trochaic languages are like Cahuilla (Seiler 1965, 1967, 1977; Seiler and Hioki 1979). They exclude heavy syllables from disyllabic feet, ensuring that durational contrasts never arise in a foot with trochaic prominence. In Cahuilla, heavy syllables are CVV and CVʔ. (4)

Exclusion of H from disyllabic feet in Cahuilla
(ˈLL)(ˌL)      (ˈtaxmu)(ˌʔat)                 ‘song’
(ˈLL)(ˌLL)     (ˈtaka)(ˌličem)                ‘one-eyed ones’
(ˈH)(ˌL)       (ˈpaʔ)(ˌli)                    ‘the water (obj)’
(ˈH)(ˌLL)      (ˈqaː)(ˌničem)                 ‘palo verde (pl)’
(ˈL)(ˌH)(ˌL)   (ˈsu)(ˌkaʔ)(ˌti)               ‘the deer (obj)’
(ˈL)(ˌH)(ˌLL)  (ˌnesun) (ˈka)(ˌviː)(ˌči-wen)  ‘I was surprised’

Many iambic languages are like Hixkaryana (Derbyshire 1985). They lengthen the vowel of stressed syllables, if necessary, to ensure that feet with iambic prominence always contain durational contrasts. In Hixkaryana, heavy syllables are CVV and CVC. (5)

Iambic lengthening in Hixkaryana
a. (LˈL)(ˈH)L    (khæˈnæ)(ˈnɨh)nɔ    →  (LˈH)(ˈH)L    (khæˈnæː)(ˈnɨh)nɔ    ‘I taught you’
b. (LˈL)(LˈH)L   (mɨˈhæ)(næˈnɨh)nɔ   →  (LˈH)(LˈH)L   (mɨˈhæː)(næˈnɨh)nɔ   ‘you taught him’
c. (ˈH)(LˈL)L    (ˈɔw)(tɔˈhɔ)næ      →  (ˈH)(LˈH)L    (ˈɔw)(tɔˈhɔː)næ      ‘to the village’
d. (ˈH)(LˈL)LL   (ˈtɔh)(kuˈrji)hɔnæ  →  (ˈH)(LˈH)LL   (ˈtɔh)(kuˈrjiː)hɔnæ  ‘to Tohkurye’

This chapter reviews the strengths and weaknesses of ITL approaches to metrical stress, and examines some of the most promising alternatives. We shall see that the ITL does not actually offer an adequate foundation for an account of stress systems in general, but it may provide an adequate foundation for an account of quantity-sensitive stress systems in particular (see chapter 57: quantity-sensitivity). This is not to say that it provides the best foundation. There is a clear sense in which the superficial and descriptive ITL is itself an observation in need of an explanation, much like the stress patterns found in natural language. Part of the appeal of the most promising alternatives is that they have the potential to account not only for the stress patterns of natural language, but also for the ITL itself. Before reviewing the various proposals, we should note the results of more recent investigations into the perception of rhythmic grouping. In some cases, more recent studies have confirmed the grouping preferences found in the earlier studies on which the ITL was based. In other cases, they have challenged their universality. The studies of Rice (1992), Vos (1977), and Hay and Diehl (2007), for example, found grouping preferences among English, French, and Dutch speakers similar to those found in the earlier studies of Bolton (1894) and Woodrow (1909).2 The studies of Kusumoto and Moreton (1997) and Iversen et al. (2008), however, found significant differences between speakers of English and Japanese. Iversen et al., for example, found that English speakers had a fairly strong preference (68 percent) for dividing sequences of amplitude contrasts into trochaic (loud–soft) groups, but Japanese speakers had a much stronger preference (91 percent) for trochaic grouping. English speakers showed a very strong preference (89 percent) to divide duration contrasts into iambic (short–long) groups, but Japanese speakers showed no preference. While the challenge to universality may be troubling to those particularly concerned with extralinguistic grounding, and it certainly presents an interesting problem in this connection, it does not necessarily tell us anything about the ITL’s ability to predict differences between iambic and trochaic stress patterns in language. Having noted the problem with respect to extralinguistic grounding, then, I will not address the issue further.

1 The effect emerges in the range of one half to five beats per second. (The syllable rate of “ordinary conversational speech” is typically toward the upper limits of this range; Bell 1977.) Hayes (1995) states that the right-prominent effect illustrated in (3b) requires a durational contrast where the longer elements are 1.5 to 2 times as long as the shorter elements, noting that Woodrow (1909) found that smaller durational contrasts actually result in left-prominent groupings.

2 Interpretations of the ITL

The most recent ITL accounts (McCarthy and Prince 1986; Hayes 1987, 1995; Prince 1990) reflect two distinct interpretations. The stronger of the two, given in (6), takes the ITL to be concerned with the actual presence or absence of durational contrasts within rhythmic groupings. (6)

Strong interpretation of the ITL
a. If a foot contains a durational contrast, it is iambic.
b. If a foot lacks a durational contrast, it is trochaic.

The weaker interpretation, given in (7), takes the ITL to be concerned with sensitivity to the positions of the heavy syllables that might help to create durational contrasts.

(7) Weak interpretation of the ITL (Hayes 1985)
a. If parsing is sensitive to the position of heavy syllables, it is iambic.
b. If parsing is insensitive to the position of heavy syllables, it is trochaic.

2 Rice’s study also found a preference for iambic grouping when elements contrasted in pitch, a result not found in previous studies.

Even at the point at which the ITL was introduced to metrical stress theory, it was clear that the strong interpretation in (6) was unsustainable, at least when applied to stress systems generally. Under the strong interpretation, iambic footing and the presence of durational contrasts are intimately connected: only iambs contain durational contrasts; durational contrasts arise only in iambic feet; and only iambic systems employ rules that create durational contrasts. Similarly, trochaic footing and the absence of durational contrasts are intimately connected: only trochees lack durational contrasts; durational contrasts are absent only in trochaic feet; and only trochaic systems employ rules that destroy durational contrasts. Even a cursory look at the general typology of attested stress patterns reveals that the strong interpretation misses the mark by a wide margin. As mentioned above, many iambic languages are like Hixkaryana, lengthening the vowel of stressed syllables, if necessary, to ensure that surface iambs contain durational contrasts. Many other iambic languages are like Araucanian (Echeverría and Contreras 1965), however. They tolerate surface iambs that have no durational contrasts. (8)

Even iambs in Araucanian
(LˈL)         (wuˈle)          ‘tomorrow’
(LˈL)L        (tiˈpan)to       ‘year’
(LˈL)(LˌL)    (eˈlu)(muˌju)    ‘give us’
(LˈL)(LˌL)L   (eˈlu)(aˌe)new   ‘he will give me’

A similar situation obtains with trochaic languages. As mentioned above, several are like Cahuilla in prohibiting foot-internal durational contrasts. Several others, however, are like Chimalapa Zoque (Knudson 1975). They tolerate foot-internal durational contrasts and even have rules that create them. In Chimalapa Zoque, heavy syllables are CVV and CVC. (9)

Trochaic lengthening in Chimalapa Zoque
a. (ˈLH)         (ˈkosaʔ)             →  (ˈHH)         (ˈkoːsaʔ)             ‘scold (imp)’
b. (ˌL)(ˈLL)     (ˌhu)(ˈkutɨ)         →  (ˌH)(ˈHL)     (ˌhuː)(ˈkuːtɨ)        ‘fire’
c. (ˌLL)L(ˈLL)   (ˌwɨti) hu(ˈkutɨ)    →  (ˌHL)L(ˈHL)   (ˌwɨːti) hu(ˈkuːtɨ)   ‘big fire’
d. (ˌLH)H(ˈHL)   (ˌwituʔ)paj(ˈnɨksɨ)  →  (ˌHH)H(ˈHL)   (ˌwiːtuʔ)paj(ˈnɨksɨ)  ‘he is coming and going’

There appears to be no close connection between iambs and the presence of durational contrasts, then, or between trochees and the absence of durational contrasts, at least in the general case. Given the shortcomings of the strong interpretation, Hayes (1985) introduced the ITL to metrical stress theory under the weak interpretation in (7). Under the weak interpretation, the crucial connections are between iambic footing and quantity-sensitivity and trochaic footing and quantity-insensitivity. While iambic feet and trochaic feet might both contain durational contrasts, parsing is iambic if and only if it is sensitive to the positions of heavy syllables. Parsing is trochaic if and only if it is insensitive to the positions of heavy syllables. There are three problems with the weak interpretation. The first is conceptual. The ITL is plainly a generalization about the appropriateness of durational contrasts within two different types of feet. Since its requirements concerning durational contrasts affect both types, the ITL does not countenance situations where either type is quantity-insensitive (where either type simply ignores the differences in syllable weight that help to create durational contrasts). In viewing the primary concern of the ITL to be the appropriateness of quantity-sensitivity for different types of feet, the weak interpretation seems really to be a misinterpretation. The second problem is a loss of empirical coverage. Since it only addresses quantity-sensitivity, the weak interpretation tells us nothing about the status of lengthening and shortening rules addressed by the strong interpretation. The final problem is that the weak interpretation is false. A significant number of trochaic systems are quantity-sensitive, falsifying (7a), and a significant number of iambic systems are quantity-insensitive, falsifying (7b). In (4), for example, we saw that heavy syllables consistently perturb the basic stress pattern of the trochaic Cahuilla, indicating that it is quantity-sensitive. In (10), we see that heavy syllables consistently fail to perturb the basic pattern of the iambic Paumari (Everett 2003), indicating that it is quantity-insensitive. In the basic pattern, stress appears on every odd-numbered syllable from the right. CVV syllables are heavy. (10)

Quantity-insensitive iambs in Paumari
(ˌL)(LˈL)    (ˌma)(siˈko)     ‘moon’
(LˌL)(LˈL)   (kaˌ–o)(wiˈ7i)   ‘island’
(ˌH)(HˈL)    (ˌkai)(haiˈhi)   ‘type of medicine’
(HˌL)(LˈL)   (waiˌča)(naˈwa)  ‘little ones’

Additional quantity-sensitive trochaic languages include Palestinian Arabic (Brame 1973, 1974; Kenstowicz and Abdul-Karim 1980; Kenstowicz 1983) and Fijian (Schütz 1978, 1985; Dixon 1988). Additional quantity-insensitive iambic languages include Araucanian, Osage (Altshuler 2009), Suruwaha (Everett 1996), and Weri (Boxwell and Boxwell 1966). As we shall see in §3, parts of both interpretations, (6a) of the strong interpretation and (7b) of the weak interpretation, are brought together to form the basis for two subsequent ITL accounts, those of Hayes (1987, 1995) and of McCarthy and Prince (1986). This marriage between the halves of two very different interpretations often makes the connection between the ITL and the phenomena that these approaches attempt to account for less than clear. This is part of the reason, perhaps, that some have concluded that there is actually little of the ITL left in ITL-based approaches (see van der Hulst 1999, for example). A third ITL approach, that of Prince (1990), employs only the strong interpretation, but seeks to avoid the problems discussed above by employing it only in the context of quantity-sensitive systems and only as a relative “preference” rather than an absolute “law.” Though I will point out the aspects of the more recent ITL accounts that derive from the weak interpretation, it does not give us an accurate picture of the potential for quantity-sensitivity with different types of feet, so I will not address the issue in any detail. I will address in some detail, however, the support that aspects deriving from the strong interpretation find among quantity-sensitive systems. While both iambic and trochaic languages can be quantity-sensitive, differences in the way that they resume basic stress alternations after a heavy syllable is encountered indicate that they are quantity-sensitive in different ways. Iambic systems require that heavy syllables occupy the prominent position in a disyllabic foot, but trochaic systems exclude heavy syllables from disyllabic feet entirely. Since the strong interpretation predicts this difference, it might provide the foundation for an account, not of stress systems generally, but of quantity-sensitive systems in particular.

3 Quantity-sensitivity

In quantity-sensitive languages, heavy syllables are always stressed, and this often has the effect of perturbing basic stress alternations. The feature of quantitysensitivity that is most significant to the discussion here is that there is sometimes a difference between trochaic systems and iambic systems in how they resume their basic alternations after encountering a heavy syllable. Under certain circumstances, the particular way in which a system resumes its basic alternation can indicate whether it prefers to parse heavy syllables into disyllabic feet or monosyllabic feet. Whether or not a difference in resumption actually emerges, however, depends on the combination of foot type and parsing directionality the system employs. No difference emerges when the headedness of the foot and parsing directionality match. As illustrated in (5), for example, in left-to-right iambic languages like Hixkaryana, a heavy syllable is always followed by a stressless syllable. Similarly, in right-to-left trochaic languages like Fijian, as (11) illustrates, heavy syllables are always preceded by a stressless syllable. In Fijian, heavy syllables are CVV. (11)

Fijian loanwords (Schütz 1978)3
ˌLLˈLL      ˌndikoˈnesi        ‘deaconess’
LˌLLˈLL     peˌresiˈtendi      ‘president’
ˌLLˌHˈLL    ˌmbeleˌmboːˈtomu   ‘bell-bottoms’
LˌLLˈH      paˌlasiˈtaː        ‘plaster’
ˌLLˌLLˈH    ˌminiˌsitiˈriː     ‘ministry’
LˌHLˈH      paˌraimaˈriː       ‘primary’
ˌHˌLLˈH     ˌndaiˌrekiˈtaː     ‘director’

There is no difference, then, between left-to-right iambic systems and right-to-left trochaic systems in this context – both resume their basic alternations with a stressless syllable – so there is no evidence for a difference in their treatment of heavy syllables.

3 Loanwords are typically employed to illustrate the Fijian stress pattern, since long native stems are uncommon and morphology can influence the position of stress.


The reason is simply that the foot that is constructed immediately after the heavy syllable is parsed, rather than the foot that is constructed to parse the heavy syllable itself, determines how the basic alternation resumes. Whether heavy syllables are included in disyllabic feet or parsed as monosyllabic feet, the basic alternations of both iambic and trochaic systems would resume with an unstressed syllable (the underlined syllable in the examples below). (12)

Parsing directionality matches headedness
a. Left-to-right iambic
   i.  Iamb
       ( x)( x)
       ...L H L̲ L...
   ii. Monosyllable
       (x)( x)
       ...H L̲ L...
b. Right-to-left trochaic
   i.  Trochee
       (x )(x )
       ...L L̲ H L...
   ii. Monosyllable
       (x )(x)
       ...L L̲ H...

In left-to-right iambic systems, the heavy syllable must occur at the right edge of a foot whether the foot is an iamb, as in (12a.i), or a monosyllable, as in (12a.ii). Since the next foot constructed would be iambic in either case, the alternation resumes with an unstressed syllable. In right-to-left trochaic systems, the heavy syllable would be parsed at the left edge of a foot whether the foot is a trochee, as in (12b.i), or a monosyllable, as in (12b.ii). Since the next foot constructed would be trochaic in either case, the alternation again resumes with an unstressed syllable. A difference in the resumption of basic alternations emerges only in situations where parsing directionality and the headedness of the foot do not match. In right-to-left iambic languages like Tübatulabal (Voegelin 1935), as (13) illustrates, heavy syllables are always preceded by stressless syllables. In Tübatulabal, heavy syllables are CVV(C). (13)

Resumption with a stressless syllable in Tübatulabal
ˈLLˈL          ˈčiŋiˈjal               ‘the red thistle’
LˈLLˈH         tiˈŋijaˈlaap            ‘on the red thistle’
LˈLLˈL         wiˈtaŋhaˈtal            ‘the Tejon Indians’
ˈLLˈLLˈHLˈL    ˈwitaŋˈhataˈlaabaˈtsu   ‘away from the Tejon Indians’
ˈHˈLLˈL        ˈtaaˈhawiˈla            ‘the summer’
ˈHˈLLˈH        ˈtaaˈhawiˈlaap          ‘in the summer’

In left-to-right trochaic languages like Cahuilla, however, as illustrated in (4), heavy syllables are always followed by stressed syllables. The difference between right-to-left iambic systems and left-to-right trochaic systems, then, is that the former resume their basic alternations with stressless syllables while the latter resume them with stressed syllables. The reason that a difference emerges when headedness and parsing directionality do not match is that the resumption of basic alternations depends directly on how the heavy syllable itself is footed.

(14) Mismatch between parsing directionality and headedness
a. Right-to-left iambic
   i.  Iamb
       ( x)( x)
       ...L L L H...
   ii. Monosyllable
       *( x) (x)
       ...L L H...
b. Left-to-right trochaic
   i.  Trochee
       * (x )(x )
       ...H L L L...
   ii. Monosyllable
       (x)(x )
       ...H L L...

In right-to-left iambic systems, parsing the heavy syllable into an iamb, as in (14a.i), would position an unstressed syllable between the heavy syllable and the next stress to the left. Parsing it into a monosyllabic foot, however, as in (14a.ii), would make the next stress adjacent. The fact that right-to-left iambic languages resume their basic alternation with a stressless syllable indicates that they prefer to accommodate heavy syllables with disyllabic feet. In left-to-right trochaic systems, parsing the heavy syllable into a trochee, as in (14b.i), would position a stressless syllable between the heavy syllable and the next stress to the right. Parsing it into a monosyllabic foot, as in (14b.ii), would not. The fact that left-to-right trochaic languages resume their basic alternation with a stressed syllable indicates that they prefer to accommodate heavy syllables with monosyllabic feet. Though iambic and trochaic languages display no difference, then, in the resumption of basic stress alternations when parsing directionality and headedness match, they do show a difference when parsing directionality and headedness do not match. The difference indicates that iambic systems prefer to parse heavy syllables into disyllabic feet and trochaic systems prefer to parse heavy syllables as monosyllabic feet. As we shall see next, the foot inventories of ITL accounts capture these divergent preferences as directly as possible.
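The two modes of quantity-sensitivity can also be simulated directly. The following sketch is my own illustration under simplifying assumptions (left-to-right parsing only, weights given as 'L'/'H' strings, an apostrophe marking the foot head); it is not Hayes’s formalism, but it reproduces the iambic footing in (12a) and the trochaic monosyllabic parsing of heavy syllables seen in Cahuilla (4) and (14b.ii).

```python
# A sketch of the divergent treatments of heavy syllables; my own illustration.

def foot_iambic(sylls):
    """One left-to-right pass: (L 'X) iambs, a lone H footed, a stray L unparsed."""
    out, i = [], 0
    while i < len(sylls):
        if sylls[i] == 'L' and i + 1 < len(sylls):
            out.append("(%s '%s)" % (sylls[i], sylls[i + 1]))   # (L 'L) or (L 'H)
            i += 2
        elif sylls[i] == 'H':
            out.append("('H)")
            i += 1
        else:
            out.append(sylls[i])                                # stray light syllable
            i += 1
    return ' '.join(out)

def foot_trochaic(sylls):
    """One left-to-right pass: ('L L) trochees; H is always footed alone."""
    out, i = [], 0
    while i < len(sylls):
        if sylls[i] == 'H':
            out.append("('H)")                                  # no (H L) or (L H) trochees
            i += 1
        elif i + 1 < len(sylls) and sylls[i + 1] == 'L':
            out.append("('%s %s)" % (sylls[i], sylls[i + 1]))
            i += 2
        else:
            out.append(sylls[i])
            i += 1
    return ' '.join(out)

print(foot_trochaic(['L', 'L', 'H', 'L', 'L']))  # ('L L) ('H) ('L L): stress resumes at once
print(foot_iambic(['L', 'L', 'H', 'L']))         # (L 'L) ('H) L: a stressless L follows H
```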

3.1 The asymmetric foot inventory

As mentioned above, some of the most recent ITL accounts fuse together parts of the strong interpretation and the weak interpretation. Hayes (1987, 1995), for example, relies on the combination to motivate two disparities in the inventory of parsing feet. (15)

a. Quantity-insensitive
   Syllabic trochee    (σ σ)
b. Quantity-sensitive
   i.  Moraic trochee  (L L) or (H)
   ii. Standard iamb   (L σ) or (H)

Clause (7b) of the weak interpretation, “if parsing is insensitive to the position of heavy syllables, it is trochaic,” motivates a disparity in the types of feet that can be quantity-insensitive. As (15a) indicates, Hayes’s account allows only trochaic feet to be quantity-insensitive. As discussed above, the approach is undermined by the existence of several quantity-insensitive iambic languages. While Hayes argues that the quantity-insensitivity of such systems is only apparent, as they do not actually contain heavy syllables to perturb the basic pattern, the argument is plausible only in the cases of Araucanian and Weri. It is not plausible in the cases of Osage, Paumari, and Suruwaha, each of which has long vowels, diphthongs, or both. Given that both iambic and trochaic systems can be quantity-sensitive, clause (6a) of the strong interpretation, “if a foot contains a durational contrast, it is iambic,” motivates a disparity in precisely how the two types can be quantity-sensitive. As (15b) indicates, Hayes’s account requires trochaic systems to deal with heavy syllables differently than iambic systems. Iambs allow heavy syllables in strong position in disyllabic feet, where they can create durational contrasts. They exclude them only from weak position. Trochees, however, exclude heavy syllables from disyllabic feet entirely. Trochaic systems must parse heavy syllables into monosyllabic feet, where no durational contrast is possible. The disparity in how the two types of feet can be quantity-sensitive predicts the difference, discussed above, in how right-to-left iambic languages and left-to-right trochaic languages resume basic stress alternations after encountering a heavy syllable. The fact that iambic systems parse heavy syllables into disyllabic feet in Hayes’s account correctly predicts that right-to-left iambic languages will resume their basic alternation with a stressless syllable, as in (14a.i). The fact that trochaic systems must parse heavy syllables into monosyllabic feet correctly predicts that left-to-right trochaic languages will resume their basic alternation with a stressed syllable, as in (14b.ii). McCarthy and Prince (1986) arrive at a foot inventory similar to Hayes’s, but they arrive at it through a slightly different route and in service of a different purpose. They posit one type of quantity-insensitive foot: the balanced [σ σ] template, and two types of quantity-sensitive feet: the balanced [μ μ] template and the unbalanced [σμ σμμ] template. (16)

a. Quantity-insensitive
   Balanced       [σ σ]
b. Quantity-sensitive
   i.  Balanced    [μ μ]
   ii. Unbalanced  [σμ σμμ]

The ITL contributes to McCarthy and Prince’s account in two ways. First, clause (6a) of the strong interpretation, “if a foot contains a durational contrast, it is iambic,” motivates the iambic configuration of the quantitatively unbalanced foot. To guarantee that quantitatively iambic feet are also iambic with respect to stress, they posit the Quantity/Prominence Homology principle. It ensures that the heavier syllable in feet with a quantity contrast – in effect, the heavy syllable in a [σμ σμμ] foot, given the limited possibilities in (16) – bears the stress. (17)

Quantity/Prominence Homology
For α, β ∈ F, if α > β quantitatively, then α > β stresswise.

Prominence in balanced feet is determined by the Trochaic Default principle, which ensures that [σ σ] and [μ μ] feet both stress their initial syllable.

(18) Trochaic Default
For α, β ∈ F, if α = β quantitatively, then F = [s w].

That the single quantity-insensitive foot template, [σ σ], always emerges as a trochee is a second contribution of the ITL. It derives from clause (7b) of the weak interpretation, “if parsing is insensitive to the position of heavy syllables, it is trochaic.” Although they provide some minimal discussion of the asymmetric foot inventory’s role in creating stress patterns, McCarthy and Prince’s primary concern is to derive the types of feet encountered in morphological templates. The fact that the types of feet that seem to be required for creating the appropriate stress patterns in Hayes’s account are the same types that seem to be involved in morphological templates in McCarthy and Prince’s account significantly strengthens the case for the asymmetric foot inventory and the ITL. Prince’s (1990) Harmonic Parsing account also involves crucial asymmetries but, in this case, the asymmetries emerge in the preference hierarchies of iambic and trochaic systems rather than in the foot inventory itself. For Prince, quantity-sensitive systems are those that obey the Weight-to-Stress principle and quantity-insensitive systems are those that do not. (19)

Weight-to-Stress
If heavy, then stressed.

Focusing on the former, Prince provides three principles that can be used to determine the relative well-formedness of different types of iambic and trochaic feet in quantity-sensitive systems. The first principle, Binarity, is given in (20). It requires that feet be either disyllabic or bimoraic. (20)

Binarity
Feet should be analyzable as binary.

The second and third principles, given in (21), are equivalent to the strong interpretation of the ITL in (6). |X| means “the size of X.” (21)

a. Iambic Quantity
   In a rhythmic unit [W S], |S| > |W|, preferably.
b. Trochaic Quantity
   In a rhythmic unit [S W], |S| = |W|, preferably.

Iambic Quantity expresses the preference that iambic feet contain a durational contrast, and Trochaic Quantity expresses the preference that trochaic feet not contain a durational contrast. As (22) indicates, the best-formed feet are those that respect both the relevant quantity principle and Binarity. The next best are those that respect Binarity only. The worst are those that respect neither.

(22)           satisfy: IQ/TQ, Binarity        Binarity only        neither
a. Iambic               [L H]          ≻       [L L], [H]     ≻     [L]
b. Trochaic             [L L], [H]     ≻       [H L]          ≻     [L]

The asymmetry in this case arises in how the iambic and trochaic hierarchies order balanced and unbalanced feet. Iambs prefer unbalanced [L H] to balanced [L L] and [H], but trochees prefer balanced [L L] and [H] to unbalanced [H L]. The main difference between the inventory of quantity-sensitive feet in Prince’s and Hayes’s accounts is the possibility of unbalanced [H L] trochees. As Prince notes, however, with their lesser status, there are limited situations in which unbalanced trochees might arise. First, consider the result of Harmonic Parsing in a left-to-right trochaic system. Since feet are constructed serially, and parsing the next syllable in line is the overriding concern, Harmonic Parsing would parse the H of an HL sequence as a monosyllabic foot, just like Hayes’s moraic trochees. If the following L could not combine with another L to form a disyllabic foot, giving [H][LL], it might be parsed as a monosyllable, giving [H][L], or left unparsed, giving [H]L, depending on whether or not degenerate feet are tolerated. These are the same options available under Hayes’s moraic trochees. The results are rather different in right-to-left systems. Harmonic Parsing would always parse an HL sequence into an [HL] foot, but Hayes’s moraic trochees would yield either [H][L] or [H]L, depending on whether or not degenerate feet are tolerated. The latter option results in the same stress pattern, but the former does not. I am not aware, however, of a right-to-left trochaic language that would allow us to distinguish between the two approaches.
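Prince’s preference hierarchies lend themselves to a direct encoding. The sketch below is my own rendering, not Prince’s implementation: it scores candidate feet by the quantity preferences in (21) and Binarity in (20) and recovers the orderings in (22); the mora counts and the tie-breaking tuple are assumptions of the illustration.

```python
# A rough sketch of the Harmonic Parsing scales in (22); my own scoring.

def harmony(foot, headedness):
    """Score a foot like ('L', 'H'): (quantity_ok and binary, binary)."""
    sizes = [{'L': 1, 'H': 2}[s] for s in foot]            # moras per syllable
    binary = len(foot) == 2 or sum(sizes) == 2             # disyllabic or bimoraic
    if len(foot) == 2:
        weak, strong = sizes if headedness == 'iamb' else sizes[::-1]
        quantity = strong > weak if headedness == 'iamb' else strong == weak
    else:
        quantity = headedness == 'trochee' and sum(sizes) == 2   # a lone (H) trochee
    return (quantity and binary, binary)

candidates = {'iamb':    [('L', 'H'), ('L', 'L'), ('H',), ('L',)],
              'trochee': [('L', 'L'), ('H',), ('H', 'L'), ('L',)]}
for head, feet in candidates.items():
    ranked = sorted(feet, key=lambda f: harmony(f, head), reverse=True)
    print(head, ranked)   # iamb: [LH] first; trochee: [LL]/[H] first, then [HL], then [L]
```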

3.2 A symmetric foot inventory

As discussed above, the evidence for a difference between iambic and trochaic quantity-sensitivity comes from systems where parsing directionality is opposite the headedness of the foot. After left-to-right trochaic parsing encounters a heavy syllable, binary alternation resumes with a stressed syllable. In contrast, after right-to-left iambic parsing encounters a heavy syllable, binary alternation resumes with a stressless syllable. Where ITL approaches posit an asymmetric foot inventory to account for this difference, Kager (1993) proposes a symmetric foot inventory, arguing that the difference is best explained in terms of the metrical principles of clash and lapse avoidance. Kager distinguishes between parsing feet and the surface feet that can be formed later through adjunction of unparsed syllables. The inventory of parsing feet is symmetric. The quantity-insensitive syllabic trochee corresponds to a mirror-image syllabic iamb. The quantity-sensitive moraic trochee corresponds to a mirror-image moraic iamb. Iambic quantity-sensitivity and trochaic quantity-sensitivity are identical, then, in that both exclude heavy syllables from disyllabic feet. (23)

Parsing feet
                                  trochaic   iambic
Syllabic (quantity-insensitive)   (x  )      (  x)
                                   σ σ        σ σ
Moraic (quantity-sensitive)       (x  )      (  x)
                                   μ μ        μ μ


Crucial to Kager’s account is the claim that heavy syllables contain an internal prominence contrast corresponding to a decline in sonority between the first and second mora. According to Kager, the internal contrast is the characteristic of heavy syllables responsible for their attraction of stress (Prince’s 1990 Weight-to-Stress principle). The decline in sonority ensures that stress occurs over the first mora and that the second mora is stressless, as in (24). (24)

(x )
 H

A heavy syllable’s strong–weak contour translates into different results with respect to clash at the mora level. When a stressed heavy syllable immediately follows another stressed syllable, as in (25a), the result is a clash. When the order is reversed, as in (25b), there is no clash. (25)

a. Clash        b. No clash
   x x             x  x
   L H             H  L

Assuming that clash is never tolerated at the point the basic alternation is resumed, the internal prominence contrast accounts for the different modes of resumption after a heavy syllable.4 In left-to-right trochaic systems, a trochaic foot can immediately follow the heavy syllable without creating clash, so the pattern resumes with a stressed syllable. (26)

Left-to-right trochees: No clash
(x )(x)(x )
 L L  H  L L

In right-to-left iambic systems, however, an iambic foot cannot immediately precede the heavy syllable without creating clash. This being the case, the parsing algorithm must skip a syllable before constructing an iambic foot, and the pattern resumes with an unstressed syllable.

4 More precisely, Kager assumes that the construction of a foot cannot introduce clash within the parsing window. The parsing window consists of the syllables being parsed in the current iteration plus the string of syllables encountered by the parsing algorithm in previous iterations. It does not include syllables that the algorithm has not yet encountered.

(27) Right-to-left iambs
a. Clash
   *( x)(x)( x)
   ...L L  H  L L...
b. Clash avoided
   ( x) (x)( x)
   ...L L L  H  L L...

An asymmetric inventory of parsing feet is not actually necessary, then, to account for the different ways in which alternating patterns resume after a heavy syllable. Kager shifts the asymmetry to the prominence contrast within heavy syllables, and the difference in pattern resumption falls out from the principle of clash avoidance. Although the inventory of surface feet is still asymmetric in Kager’s account, the asymmetry falls out from the principle of foot-internal lapse avoidance. The unparsed syllable in L(H) sequences can adjoin to the following foot to form an (LH) iamb, as in (28a), because it does not create a foot-internal lapse. In contrast, the unparsed syllable in (H)L sequences cannot adjoin to the preceding foot to form an (HL) trochee, as in (28b), because it would create a foot-internal lapse. (28)

Adjunction of stray syllables
a. L (ˈH)   →   (L ˈH)
b. (ˈH) L   →  *(ˈH L)
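Kager’s mora-level clash computation can also be sketched in a few lines. The following is my own illustration under the assumptions just described (the stress of a heavy syllable sits on its first mora); the function names are invented.

```python
# A minimal sketch of the resumption logic behind (25)-(27); my own illustration.

def moras(syll, stressed):
    """Grid row for one syllable: H has two moras, with stress (1) on the first."""
    if syll == 'H':
        return [1, 0] if stressed else [0, 0]
    return [1] if stressed else [0]

def clashes(sylls, stresses):
    """True if two adjacent moras both carry a gridmark."""
    grid = [m for s, st in zip(sylls, stresses) for m in moras(s, st)]
    return any(a == b == 1 for a, b in zip(grid, grid[1:]))

print(clashes(['L', 'H'], [1, 1]))   # True:  (25a) a stress directly before 'H clashes
print(clashes(['H', 'L'], [1, 1]))   # False: (25b) 'H before a stress does not clash
# Hence an iamb cannot be built directly before a stressed H, as in (27a), while
# skipping one light syllable, as in (27b), avoids the clash.
```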

Van de Vijver’s (1998) “Incidental Iamb” approach is similar to Kager’s Symmetrical Foot Inventory approach, in that it rejects a difference between iambic and trochaic quantity-sensitivity. Rather than accounting for the different behavior of iambic and trochaic systems in terms of rhythmic principles, however, van de Vijver claims that examples of the crucial iambic case, right-to-left iambs, simply do not exist. Following an earlier idea from Kager (1989), van de Vijver re-analyzes right-to-left iambic languages like Tübatulabal as right-to-left trochaic languages. He argues that diachronic processes have resulted in a lexical stress on word-final syllables in Tübatulabal and that trochaic feet are constructed from right to left away from the lexical stress, as in (29). (29)

σσσσˈσ    →   (ˈσσ)(ˈσσ)(ˈσ)
σσσσσˈσ   →   σ(ˈσσ)(ˈσσ)(ˈσ)

While such an analysis does have the virtue of producing the correct stress pattern, one undesirable feature is the necessity of representing an entirely predictable aspect of the pattern – the word-final stress – in the lexicon (chapter 1: underlying representations).


To actually rule out the possibility of right-to-left iambic systems, and the necessity of accounting for the type of quantity-sensitivity that such systems would have to exhibit, the Incidental Iamb approach posits a constraint specifically promoting trochaic feet but not a constraint specifically promoting iambic feet. Iambic systems can only arise from the combined demands of two constraints. The first, Align-L(PrWd, Ft), requires that every word begin with a foot. The second, *Edgemost, requires that peripheral syllables be stressless. (30)

a. Align-L(PrWd, Ft)
   The left edge of the prosodic word should be aligned with the left edge of a foot.
b. *Edgemost
   Edge-adjacent elements may not be prominent.

Combined, the constraints essentially make two demands. They demand that final syllables be stressless, due to *Edgemost, and they demand that every word begin with a disyllabic iamb, due to both *Edgemost and Align-L(PrWd, Ft). (Each word must begin with a foot, but it must be a foot that leaves the initial syllable stressless.) While trochaic systems often meet the demand that final syllables be stressless, they do not meet the demand that words begin with an iambic foot, creating an opening for an iambic system to emerge. The type of iambic system that emerges, of course, must meet the combined demands of *Edgemost and Align-L(PrWd, Ft). While left-to-right iambic patterns like (31a.i) do begin with an iambic foot and leave final syllables stressless, left-to-right patterns like (31a.ii) ignore the latter requirement, and right-to-left patterns like those in (31b) ignore both. (31)

a. Left-to-right iambic systems
   i.  Predicted (attested)
       (σˈσ)(σˈσ)σσ
       (σˈσ)(σˈσ)(σˈσ)σ
   ii. Not predicted (attested)
       (σˈσ)(σˈσ)(σˈσ)
       (σˈσ)(σˈσ)(σˈσ)σ
b. Right-to-left iambic systems
   i.  Not predicted (attested, but re-analyzed as trochaic)
       (σˈσ)(σˈσ)(σˈσ)
       (ˈσ)(σˈσ)(σˈσ)(σˈσ)
   ii. Not predicted (unattested)
       (σˈσ)(σˈσ)(σˈσ)
       σ(σˈσ)(σˈσ)(σˈσ)

The results are mixed. The Incidental Iamb approach predicts the (31a.i) pattern, a pattern found in Carib (Hoff 1968), Hixkaryana, and Choctaw (Nicklas 1972, 1975; Lombardi and McCarthy 1991). It does not predict the (31a.ii) pattern, however, a pattern found in Araucanian. It also does not predict (31b.i), an attested pattern re-analyzed as trochaic, as discussed above, and it does not predict (31b.ii), an unattested pattern.
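The combined demands of (30) can be checked mechanically against the schematic patterns in (31). The sketch below is my own encoding (1 marks a stressed syllable; footing is ignored), not van de Vijver’s implementation.

```python
# A minimal sketch of the two demands in (30), applied to the patterns in (31).

def satisfies_demands(stresses):
    """Align-L plus *Edgemost combined: the word begins with a disyllabic iamb
    (initial syllable unstressed, second stressed) and ends stressless."""
    begins_with_iamb = stresses[0] == 0 and stresses[1] == 1
    final_stressless = stresses[-1] == 0
    return begins_with_iamb and final_stressless

print(satisfies_demands([0, 1, 0, 1, 0, 0]))     # (31a.i)  True:  predicted
print(satisfies_demands([0, 1, 0, 1, 0, 1]))     # (31a.ii) False: attested anyway
print(satisfies_demands([1, 0, 1, 0, 1, 0, 1]))  # (31b.i)  False: re-analyzed as trochaic
```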

4 Rhythmic lengthening and rhythmic shortening

Rhythmic lengthening and rhythmic shortening are two processes where a syllable’s duration is altered because it occupies a particular position in an alternating pattern. Rhythmic lengthening appears to be based solely on the alternation of strong and weak positions, affecting only the former. In contrast, rhythmic shortening can affect both strong and weak positions and seems in many cases to be motivated, at least partially, by a preference for exhaustive parsing. Rhythmic lengthening increases the duration of stressed syllables either through vowel lengthening (chapter 20: the representation of vowel length) or gemination of an adjacent consonant (chapter 37: geminates), the former method being more common than the latter. It can be found in both iambic and trochaic languages. In (5), for example, we saw that the stressed vowels of underlyingly light syllables lengthen in the iambic Hixkaryana, making them heavy on the surface. Other iambic lengthening languages include Carib, Choctaw, and several varieties of Yupik (Woodbury 1981, 1987; Jacobson 1984, 1985; Krauss 1985a; Leer 1985; among others). In (9), we saw that the stressed vowels of underlyingly light syllables lengthen in the trochaic Chimalapa Zoque. Other trochaic lengthening languages include Chamorro (Topping and Dungca 1973; Chung 1983), Icelandic (Árnason 1980, 1985), Mohawk (Michelson 1988), and Selayarese (Mithun and Basri 1986). An interesting difference between iambic and trochaic lengthening is that lengthening occurs in iambic systems only when they are quantity-sensitive, and in trochaic systems only when they are quantity-insensitive.5 When it is seen as shaping the possibilities of stress patterns generally, then, the existence of trochaic lengthening in quantity-insensitive systems clearly undermines the ITL. If we restrict the ITL’s scope to quantity-sensitive systems, however, the distribution of lengthening gives it considerable support. The presence of lengthening in iambic languages, where durational contrasts are encouraged, is consistent with the ITL, as is the absence of lengthening in trochaic languages, where durational contrasts are prohibited. Another important generalization concerning rhythmic lengthening is the correlation between what I will refer to as regular lengthening and certain types of minimal words. Regular lengthening is the exceptionless lengthening in non-minimal forms characteristic of many lengthening languages: vowels lengthen in underlyingly light syllables whenever they receive the appropriate degree of stress.6 As (32) indicates, languages with regular lengthening allow only three types of minimal word: H, LL, and HL. (32)

Minimal words associated with regular lengthening a.

Monosyllabic L unattested H Chimalapa Zoque (trochaic) Choctaw nouns (iambic) Icelandic (trochaic) Yupik varieties (iambic)

b.

Disyllabic LL Choctaw verbs (iambic) HL Carib (iambic) Hixkaryana (iambic) Selayarese (trochaic) LH

5

unattested

The clearest cases of quantity-insensitive iambs – Osage, Paumari, and Suruwaha – do not exhibit lengthening. The less clear cases – Weri and Araucanian – also do not exhibit lengthening. 6 Lengthening is not “regular” when it is prohibited in various positions in non-minimal forms, especially in final position. Syllables with primary stress in Italian, for example, lengthen if they are penultimate but not if they are antepenultimate or final. Languages like Unami and Munsee Delaware (Goddard 1979), which make stressed syllables heavy through consonant gemination, also fall outside the generalization.

The Iambic–Trochaic Law

16

Iambic lengthening languages and trochaic lengthening languages can both insist on H or HL minimal words, but only iambic lengthening languages can insist on an LL minimal word. There appear to be no regular lengthening languages with L minimal words, and none with LH minimal words. If we exclude alternations better described as vowel reduction (see below), rhythmic shortening is a marginal phenomenon. It occurs in only a few trochaic systems, and, as Mellander (2003) points out, each of these few is quantity-sensitive. Trochaic shortening can affect either a stressed syllable or an unstressed syllable. In Fijian, for example, stressed syllables shorten, converting HL sequences into LL sequences. (33)

Trochaic shortening in Fijian ,bu(-nIu ha(-j-a nre(-ta

→ → →

’mbu-nIu ’ta-j-a ’nre-ta

‘my grandmother’ ‘chop-trans-3 sg obj’ ‘pull-trans’

In Pre-Classical Latin (Allen 1973; Mester 1994), stressless syllables shorten, converting LH sequences into LL sequences.7 (34)

Trochaic shortening in Latin ego( → male( → ami(kitiam →

’ego ’male ‘ami’kitiam

‘I’ ‘bad’ ‘friendship’

Rhythmic shortening, though marginally attested, is consistent with the ITL. Among quantity-sensitive languages, it occurs only in trochaic systems, destroying the durational contrasts that the ITL prohibits. It does not occur in iambic systems, where it would destroy durational contrasts that the ITL requires. Quantity-insensitive languages, of either type, apparently do not exhibit rhythmic shortening. Before we proceed, it should be noted at this point that there are at least two languages with shortening phenomena that are potential counterexamples to the generalizations presented in the preceding paragraph: Central Slovak (DvonL 1955; Bethin 1998; Mellander 2003) and Gidabal (Geytenbeek and Geytenbeek 1971; Rice 1992; Mellander 2003). The stress patterns of both languages are fairly complex, however, and their analyses are not at all straightforward. It is not clear whether they are examples of shortening in trochaic feet (resulting in unbalanced HL trochees), shortening in iambic feet, or both, or whether they are simply examples of shortening in non-head syllables generally. Since it is unclear exactly how such examples are relevant, I have set them aside here. I have also set aside phenomena involving vowel strengthening in stressed syllables and vowel reduction and deletion in unstressed syllables. Some of the alternatives to an ITL approach draw to a significant extent on such phenomena as evidence that the difference between iambs and trochees with respect to rhythmic lengthening and shortening is not as great as previously thought (Revithiadou and van de Vijver 1997; van de Vijver 1998; Revithiadou 2004). Strengthening, reduction, and deletion phenomena are fairly common in both iambic and trochaic systems. 7

The Latin-type shortening is often referred to as iambic shortening, because it affects the second syllable in a two-syllable sequence rather than the first.

Brett Hyde

17

While Hayes (1985) introduced strengthening, reduction, and deletion into the discussion to support the ITL, claiming that the phenomena arose primarily in iambic systems and helped to create durational contrasts, he later observed that there were significant difficulties with this line of evidence (Hayes 1995). First, while they have some impact on phonetic duration, they are not primarily duration phenomena – as opposed to sonority or parsing phenomena. They do not involve phonological duration in the same way as the canonical examples of lengthening and shortening. The second reason, Hayes admits, is that they are not specifically iambic phenomena, which essentially grants the position of Revithiadou and van de Vijver. Although the introduction of strengthening, reduction, and deletion into the debate was probably a misstep in terms of defending the ITL, the resulting discussion has helped to identify its most plausible areas of influence. Just as the case for the ITL is much stronger in quantity-sensitive systems than in quantity-insensitive systems, the case is much stronger in the contexts of genuine lengthening and shortening than in the contexts of strengthening, reduction, and deletion. This being the case, I will not address the latter contexts further.

4.1

Lengthening and shortening rules

Just as Hayes’s (1987, 1995) asymmetric foot inventory effectively captures the different types of quantity-sensitivity that arise in iambic and trochaic systems, his approach to rhythmic lengthening and shortening effectively captures their distribution in quantity-sensitive systems. In Hayes’s account, rhythmic lengthening is only possible when it would create a durational contrast in iambs, a restriction motivated by the strong interpretation of the ITL. (35)

Iambic lengthening ( q Ø→ [/[

x) q [ __

As (35) indicates, iambic lengthening only occurs in iambic feet where both the first and second syllable are light. The second syllable becomes heavy, creating a durational contrast. Rhythmic shortening is only possible when it would avoid a durational contrast in trochees. The effect of the Fijian-type shortening rule in (36a) is to convert an (H)L sequence to an (LL) sequence. It helps to minimize underparsing without resorting to an (HL) foot. Since the appropriate context can also arise in limited circumstances in iambic systems, Hayes stipulates that the rule can only apply in trochaic systems. (36)

Trochaic shortening a.

Fijian-type →

where

is metrically stray

The Iambic–Trochaic Law b.

Latin-type (x )

(x

18

)



The effect of the Latin-type shortening rule in (36b) is to convert an ill-formed (LH) trochee to a well-formed (LL) trochee. An ill-formed (LH) trochee might be created inadvertently, for example, when an extrametrical H syllable is adjoined to a degenerate (L) foot, and (36b) repairs the defect. Though the ITL account captures the distribution of rhythmic lengthening and shortening in quantity-sensitive systems, it falls short in two ways. First, it fails to allow for trochaic lengthening in quantity-insensitive systems (a phenomenon whose existence Hayes denies). Second, it does not account for the correlation between regular lengthening and the group of minimal words in (32). Based on the ITL, lengthening rules might be employed to create durational contrasts in iambic feet (or, possibly, to destroy them in trochaic feet), but there is nothing in the law entailing that lengthening languages should prefer H, HL, and LL minimal words above L and LH minimal words. If the ITL is actually the motivation for lengthening, then the correlation of regular lengthening with these particular minimal words is a mystery.

4.2

Lapse avoidance and non-finality

Kager’s (1993) approach to the asymmetries in rhythmic lengthening and shortening is based on the same principles that governed his approach to quantitysensitivity (see §3.2). In conjunction with a prohibition against foot-internal lapse, the internal prominence contrast in heavy syllables restricts the occurrence of lengthening. Kager views lengthening of stressed syllables in general as phonetically motivated, but the restriction against foot-internal lapse ensures that such lengthening is more common in iambs than in trochees. As (37) illustrates, the grammar tolerates lengthening that creates (LH) iambs, because they contain no foot-internal lapse, but it does not tolerate lengthening that creates (HL) trochees, because they do contain a foot-internal lapse. (37)

Lengthening asymmetry through lapse avoidance a.

No lapse after iambic lengthening ( x) ( x ) → L

b.

L

L

H

Lapse after trochaic lengthening (x ) * (x ) → L

L

H

L

Brett Hyde

19

The prohibition against foot-internal lapse also accounts for the shortening asymmetry, but in this case it acts as a trigger. In Kager’s view, the purpose of trochaic shortening is to eliminate foot-internal lapses like those found in (HL) trochees. Since there is no foot-internal lapse in (LH) iambs, the motivation for shortening never arises in iambic systems. (38)

Lapse avoidance predicts shortening asymmetry a.

No lapse in iambs to trigger shortening ( x ) *( x)

L b.

H

L

L

Lapse in trochees triggers shortening (x ) (x )

H

L

L

L

Although foot-internal lapse avoidance effectively addresses the lesser frequency of lengthening in trochaic systems, it does not address the actual phonological triggers for lengthening. Hyde’s (2007) non-finality approach addresses the lesser frequency of lengthening in trochaic systems, but it also provides phonological triggers for rhythmic lengthening and addresses its correlation with certain types of minimal words (chapter 43: extrametricality and non-finality). Under the non-finality approach, rhythmic lengthening is a special case of the type of weight-sensitivity where stress avoids light syllables. To avoid stressing a light syllable, the syllable is lengthened to make it heavy. Non-finality produces this type of weight-sensitivity by prohibiting stress on domain-final moras. Following Kager (1995), Hyde applies non-finality to the foot domain to promote iambic lengthening. Going a step further, he also applies non-finality to the syllable domain. This gives the approach a second mechanism for promoting iambic lengthening but it also gives it a mechanism for promoting trochaic lengthening. (39)

a. b.

Non-finality(Ft) No stress occurs over the final mora of a foot. Non-finality(q) No stress occurs over the final mora of a syllable.

Non-finality(Ft) effectively prohibits stress on light foot-final syllables. Since it bans foot-level gridmarks from foot-final moras, foot-final syllables must be at least bimoraic to support stress. Non-finality(q) effectively prohibits stress on light syllables generally. Since it bans foot-level gridmarks from syllable-final moras, syllables generally must be at least bimoraic to support stress. To produce lengthening, one of the non-finality constraints must dominate Dep-[, the faithfulness constraint that prevents mora insertion. Under such rankings,

The Iambic–Trochaic Law

20

when stress would otherwise occupy a light syllable, a mora can be added to make the syllable heavy on the surface. The two non-finality constraints do not, however, have equal ability to promote lengthening in every type of foot. Since Non-finality(Ft) prohibits stress over light foot-final syllables in particular, it can lengthen the stressed syllable of an iamb but not the stressed syllable of a trochee. In contrast, since Non-finality(q) prohibits stress over light syllables in general, it can lengthen the stressed syllables of both. Consider first the situation where the stressed syllable occurs in an iamb. When Non-finality(Ft) dominates Dep-[, a second mora is added to underlyingly light syllables to avoid stress on foot-final moras. Non-fin(Ft) Dep-[

LLLL

(40)

x x x x x x x x [ [ [ [ [ [

☞ a.

** … q q

q q

x x x [ [

x x x [ [

b.



*!* … q q

q q



The result is similar when Non-finality(q) dominates Dep-[: a second mora is added to the underlyingly light syllables to avoid stress on syllable-final moras. Non-fin( ) Dep-

LLLL

(41)

x x x x x x x x

☞ a.

** …



x x x

b.

x x x *!*





21

Brett Hyde

Now consider the situation where the stressed syllable occurs in a trochee. When Non-finality(Ft) dominates Dep-[, as in (42), there is no lengthening. Because stress does not occupy the foot-final syllables in either candidate, there is no danger that it will occupy the foot-final moras, and Non-finality(Ft) cannot distinguish between them. The lower ranked Dep-[ settles on the faithful (42b) candidate. (42) a.

Non-fin(Ft) Dep-[

LLLL x x x x x x x x [ [ [ [ [ [

*!* … q

q q

q…

x x [

x x x [ [

x [

… q

q q

q…

☞ b.

When Non-finality(q) dominates Dep-[, however, as in (43), the lengthening candidate emerges as the winner. The stressed syllables become heavy, to allow stress to avoid syllable-final moras. (43) ☞ a.

Non-fin( ) Dep-

LLLL x x x x x x x x

** …



x x

b.

x x x

x *!*





One advantage of the non-finality approach is that it has a built-in explanation for the lesser frequency of lengthening among trochaic systems. Non-finality in the syllable and non-finality in the foot both produce iambic lengthening, but only

The Iambic–Trochaic Law

22

non-finality in the syllable produces trochaic lengthening. Every ranking that produces trochaic lengthening, then, also produces iambic lengthening, but some rankings that produce iambic lengthening do not produce trochaic lengthening. Since the percentage of possible rankings that produce iambic lengthening is greater than the percentage of possible rankings that produce trochaic lengthening, we would expect lengthening to occur with greater frequency in iambic systems than it does in trochaic systems, all else being equal. A second advantage of the non-finality approach is that it helps to account for the particular group of minimal words associated with regular lengthening. As discussed above, languages that automatically lengthen appropriately stressed vowels only allow three types of minimal word: H, LL, and HL. They never allow L or LH minimal words. Using the same non-finality constraints to produce rhythmic lengthening and the minimal word restrictions predicts this situation. L minimal words are absent, because the lengthening constraints themselves both establish H minimal words. Non-finality(q) has the same effect in monosyllabic feet that it has in disyllabic feet, and Non-finality(Ft) has the same effect that it has in iambs. They both force the stressed syllable to lengthen. As (44) indicates, if either of the lengthening constraints ranks highly enough to produce lengthening in the disyllabic feet of longer forms, then it also ranks highly enough to produce lengthening in the monosyllabic feet of monosyllabic forms. (44)

a. b.

Non-finality(q) >> Dep-[ Iambic or trochaic lengthening + H minimal word Non-finality(Ft) >> Dep-[ Iambic lengthening + H minimal word

Two desirable predictions result from this situation: regular lengthening is always accompanied by a minimal word that is at least bimoraic, and iambic lengthening languages and trochaic lengthening languages can both have H minimal words. Although the lengthening constraints cannot produce disyllabic minimal words on their own, they do help to determine which type of disyllable emerges. Assuming that disyllabic minimal words have a trochaic strong–weak stress contour, we can explain the two-syllable requirement with an additional nonfinality constraint, Non-finality(w), which bans stress from the final syllable of a prosodic word.8 Once the strong–weak contour is established, lengthening constraints determine the weight of the initial syllable. Non-finality(q), which produces lengthening in both iambic feet and trochaic feet, requires that the initial syllable be heavy. Non-finality(Ft), which produces lengthening only in iambic feet, tolerates a light initial syllable. (45)

a. b.

8

Non-finality(w), Non-finality(q) >> Dep-[ Iambic or trochaic lengthening + HL minimal word Non-finality(w), Non-finality(Ft) >> Dep-[ Iambic lengthening + LL minimal word

Plausible cases of iambic minimal words appear to be extremely rare.

Brett Hyde

23

This correctly predicts that either iambic or trochaic lengthening can be accompanied by an HL minimal word, but only iambic lengthening can be accompanied by an LL minimal word. Van de Vijver (1998) and Revithiadou (2004) propose an approach to rhythmic lengthening that is similar in some respects to the non-finality approach. Although it does not rely on the non-finality formulation, it does posit two lengthening mechanisms. One lengthens stressed syllables generally, which produces both iambic lengthening and trochaic lengthening, and the other lengthens foot-final syllables in particular, which produces only iambic lengthening. (46)

Lengthening constraints (van de Vijver 1998) a. b.

Stressed Syllable Length A stressed syllable is long and an unstressed syllable is short. FootFinal Foot-final elements are lengthened.

Since there are two sources for iambic lengthening and only one for trochaic lengthening, Revithiadou’s and van de Vijver’s proposals, like the non-finality approach, provide an account of the different frequencies with which the two types of lengthening occur. The advantage of the non-finality approach is that it incorporates the lengthening mechanisms into the much more general non-finality formulation, a formulation independently motivated by its ability to account for a surprisingly broad range of phenomena at different prosodic levels. (See chapter 43: extrametricality and non-finality.)

5

Summary

The most interesting interpretation of the Iambic–Trochaic Law is a strong interpretation that focuses on the presence or absence of durational contrasts in disyllabic feet. Since the general typology of attested stress systems offers very little support for the strong interpretation, Hayes (1985) introduced the ITL to metrical theory under a weaker interpretation that focused on quantity-sensitivity. This also turned out to be inadequate, however, as it was soon recognized that both iambic languages and trochaic languages could be either quantity-sensitive or quantity-insensitive. Two subsequent accounts – McCarthy and Prince (1986) and Hayes (1987, 1995) – pursued a hybrid approach, combining aspects of the weak interpretation and the strong interpretation. Another, Prince (1990), returned to a strong interpretation of the ITL, but applied it, in effect, only to quantity-sensitive systems and as a preference rather than an absolute requirement. Since the ITL is inherently quantity-sensitive, it seems more natural to employ it as a foundation for an account of quantity-sensitive systems in particular than as the foundation for an account of stress systems generally. There is, in fact, considerable support for the ITL among quantity-sensitive systems. Iambic quantity-sensitivity differs from trochaic quantity-sensitivity, as indicated by the different ways in which alternating patterns resume after encountering a heavy syllable, and the asymmetric foot inventory of ITL accounts very effectively for this difference. Standard iambs exclude heavy syllables from weak position

The Iambic–Trochaic Law

24

in disyllabic feet; moraic trochees exclude them from disyllabic feet entirely. The result, that (disyllabic) iambic feet can contain durational contrasts but (disyllabic) trochaic feet cannot, is consistent with a strong interpretation of the ITL. Lengthening and shortening asymmetries also support a strong interpretation. Among quantity-sensitive systems, lengthening only occurs in iambic systems, and shortening only occurs in trochaic systems. When we restrict our attention to quantity-sensitive systems, then, the ITL does capture important differences between iambs and trochees with respect to the particular type of quantity-sensitivity exhibited and the employment of lengthening and shortening rules. This does not necessarily mean, of course, that the ITL is the best explanation for these differences. Rather than being an explanation, the descriptive and superficial ITL actually seems to be an observation in need of an explanation, much like the attested stress patterns themselves. Particularly important to alternative accounts is an assumed prominence asymmetry that arises within heavy syllables (Kager 1993). When a heavy syllable is stressed, the stress occupies its first mora and its second mora is stressless. This allows Kager (1993) to account for differences in quantity-sensitivity in terms of the rhythmic principle of clash avoidance and to account for lengthening and shortening asymmetries in terms of the rhythmic principle of lapse avoidance. Hyde (2007) exploits the same syllable-internal prominence asymmetry to provide a non-finality-based account of lengthening asymmetries and minimal words. An important advantage of these alternatives is that they offer the potential to account not only for many of the asymmetries found in the typologies of attested stress patterns, but also for the Iambic–Trochaic Law itself.

REFERENCES Allen, W. Sidney. 1973. Accent and rhythm. Cambridge: Cambridge. University Press. Altshuler, Daniel. 2009. Quantity-insensitive iambs in Osage. International Journal of American Linguistics 75. 365–398. Árnason, Kristján. 1980. Quantity in historical phonology: Icelandic and related cases. Cambridge: Cambridge University Press. Árnason, Kristján. 1985. Icelandic word stress and metrical phonology. Studia Linguistica 39. 93–129. Bell, Alan. 1977. Accent placement and perception of prominence in rhythmic structures. In Larry Hyman (ed.) Studies in stress and accent, 1–13. Los Angeles: Department of Linguistics, University of Southern California. Bethin, Christina Y. 1998. Slavic prosody: Language change and phonological theory. Cambridge: Cambridge University Press. Bolton, Thaddeus L. 1894. Rhythm. American Journal of Psychology 6. 145–238. Boxwell, Helen & Maurice Boxwell. 1966. Weri phonemes. In S. A. Wurm (ed.) Papers in New Guinea Linguistics No. 5, 77–93. (Pacific Linguistics A37.) Canberra: Australian National University. Brame, Michael K. 1973. On stress assignment in two Arabic dialects. In Stephen R. Anderson & Paul Kiparsky (eds.) A Festschrift for Morris Halle, 14–25. New York: Holt, Rinehart & Winston. Brame, Michael K. 1974. The cycle in phonology: Stress in Palestinian, Maltese and Spanish. Linguistic Inquiry 5. 39–60. Chung, Sandra. 1983. Transderivational relationships in Chamorro phonology. Language 59. 35–66.

25

Brett Hyde

Derbyshire, Desmond C. 1985. Hixkaryana and linguistic typology. Dallas: Summer Institute of Linguistics & University of Texas at Arlington. Dixon, R. M. W. 1988. A grammar of Boumaa Fijian. Chicago: University of Chicago Press. DvonL, Ladislav. 1955. Rytmicky zákon v spisovnej slovenLine. Bratislava: Vydavatel’stvo Slovenskej Akadémie Vied. Echeverría, Max S. & Heles Contreras. 1965. Araucanian phonemics. International Journal of American Linguistics 31. 132–135. Everett, Daniel L. 1996. Prosodic levels and constraints in Banawá and Suruwaha. Unpublished ms., University of Pittsburgh (ROA-121). Everett, Daniel L. 2003. Iambic feet in Paumari and the theory of foot structure. Linguistic Discovery 2(1). 22–44. Geytenbeek, Brian B. & Helen Geytenbeek. 1971. Gidabal grammar and dictionary. Canberra: Australian Institute of Aboriginal Studies. Goddard, Ives. 1979. Delaware verbal morphology: A descriptive and comparative study. New York: Garland. Hay, Jessica S. F. & Randy L. Diehl. 2007. Perception of rhythmic grouping: Testing the iambic/trochaic law. Perception and Psychophysics 69. 113–122. Hayes, Bruce. 1985. Iambic and trochaic rhythm in stress rules. Proceedings of the Annual Meeting, Berkeley Linguistics Society 11. 429–446. Hayes, Bruce. 1987. A revised parametric metrical theory. Papers from the Annual Meeting of the North East Linguistic Society 17. 274–289. Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press. Hoff, Berend J. 1968. The Carib language: Phonology, morphonology, morphology, texts and word index. The Hague: Martinus Nijhoff. Hulst, Harry van der. 1999. Issues in foot typology. In S. J. Hannahs & Mike Davenport (eds.) Issues in phonological structure: Papers from an international workshop, 95–127. Amsterdam & Philadelphia: John Benjamins. Hyde, Brett. 2007. Non-finality and weight-sensitivity. Phonology 24. 287–334. Iversen, John R., Aniruddh D. Patel & Kengo Ohgushi. 2008. Perception of rhythmic grouping depends on auditory experience. Journal of the Acoustical Society of America 124. 2263–2271. Jacobson, Steven A. 1984. The stress conspiracy and stress-repelling bases in the Central Yup’ik and Siberian Yupik Eskimo languages. International Journal of American Linguistics 50. 312–324. Jacobson, Steven A. 1985. Siberian Yupik and Central Yupik prosody. In Krauss (1985b), 25–45. Kager, René. 1989. A metrical theory of stress and destressing in English and Dutch. Dordrecht: Foris. Kager, René. 1993. Alternatives to the iambic–trochaic law. Natural Language and Linguistic Theory 11. 381–432. Kager, René. 1995. Review article of Hayes (1995). Phonology 12. 437–464. Kenstowicz, Michael. 1983. Parametric variation and accent in the Arabic dialects. Papers from the Annual Regional Meeting, Chicago Linguistic Society 19. 205–213. Kenstowicz, Michael & Kamal Abdul-Karim. 1980. Cyclic stress in Levantine Arabic. Studies in the Linguistic Sciences 10(2). 55–76. Knudson, Lyle M. 1975. A natural phonology and morphophonemics of Chimalapa Zoque. Papers in Linguistics 8. 283–346. Krauss, Michael. 1985a. Supplementary notes on Central Siberian Yupik prosody. In Krauss (1985b), 47–50. Krauss, Michael (ed.) 1985b. Yupik Eskimo prosodic systems: Descriptive and comparative studies. Fairbanks: Alaska Native Language Center. Kusumoto, Kiyomi & Elliott Moreton. 1997. Native language determines parsing of nonlinguistic rhythmic stimuli. Journal of the Acoustical Society of America 102. 3204.

The Iambic–Trochaic Law

26

Leer, Jeff. 1985. Toward a metrical interpretation of Yupik prosody. In Krauss (1985b), 159–172. Lombardi, Linda & John J. McCarthy. 1991. Prosodic circumscription in Choctaw morphology. Phonology 8. 37–71. McCarthy, John J. & Alan Prince. 1986. Prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Brandeis University. Mellander, Evan W. 2003. (HL)-creating processes in a theory of foot structure. The Linguistic Review 20. 243–280. Mester, Armin. 1994. The quantitative trochee in Latin. Natural Language and Linguistic Theory 12. 1–61. Michelson, Karin. 1988. A comparative study of Lake-Iroquoian accent. Dordrecht: Kluwer. Mithun, Marianne & Hasan Basri. 1986. The phonology of Selayarese. Oceanic Linguistics 25. 210–254. Nicklas, Thurston Dale. 1972. The elements of Choctaw. Ph.D. dissertation, University of Michigan, Ann Arbor. Nicklas, Thurston Dale. 1975. Choctaw morphophonemics. In James M. Crawford (ed.) Studies in Southeastern Indian languages, 237–250. Athens: University of Georgia Press. Prince, Alan. 1990. Quantitative consequences of rhythmic organization. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26(2). 355–398. Revithiadou, Anthi. 2004. The Iambic/Trochaic Law revisited: Lengthening and shortening in trochaic systems. Leiden Papers in Linguistics 1. 37–62. Revithiadou, Anthi & Ruben van de Vijver. 1997. Durational contrasts and the iambic/ trochaic law. Proceedings of Western Conference on Linguistics 9. 229–242. Rice, Curt. 1992. Binarity and ternarity in metrical theory: Parametric extensions. Ph.D. dissertation, University of Texas, Austin. Schütz, Albert J. 1978. English loanwords in Fijian. In Albert J. Schütz (ed.) Fijian language studies: Borrowing and pidginization, 1–50. Suva: Fiji Museum. Schütz, Albert J. 1985. The Fijian language. Honolulu: University of Hawaii Press. Seiler, Hansjakob. 1965. Accent and morphophonemics in Cahuilla and in Uto-Aztecan. International Journal of American Linguistics 31. 50–59. Seiler, Hansjakob. 1967. Structure and reconstruction in some Uto-Aztecan languages. International Journal of American Linguistics 33. 135–147. Seiler, Hansjakob. 1977. Cahuilla grammar. Banning, CA: Malki Museum Press. Seiler, Hansjakob & Kojiro Hioki. 1979. Cahuilla dictionary. Banning, CA: Malki Museum Press. Topping, Donald M. & Bernadita C. Dungca. 1973. Chamorro reference grammar. Honolulu: University Press of Hawaii. Vijver, Ruben van de. 1998. The iambic issue: Iambs as a result of constraint interaction. Ph.D. dissertation, University of Leiden. Voegelin, Charles F. 1935. Tübatulabal grammar. University of California Publications in American Archaeology and Ethnology 34. 55–189. Vos, P. 1977. Temporal duration factors in the perception of auditory rhythmic patterns. Scientific Aesthetics 1. 183–199. Woodbury, Anthony. 1981. Study of the Chevak dialect of Central Alaskan Yupik. Ph.D. dissertation, University of California, Berkeley. Woodbury, Anthony. 1987. Meaningful phonological processes: A consideration of Central Alaskan Yupik Eskimo prosody. Language 63. 685–740. Woodrow, Herbert. 1909. A quantitative study of rhythm. Archives of Psychology (New York) 14. 1–66.

45

The Representation of Tone Larry M. Hyman

1

Introduction

No issue has had a greater impact on phonological representations than the study of tone. Although receiving only passing attention in both pre- and early generative phonology, tone quickly moved away from its marginal status to occupy center stage in the development of non-linear phonology. While both level and contour tones had been traditionally transcribed with either accents or numerals, as in (1) and (2), the assumption in early generative phonology, e.g. Wang (1967), was that tones consisted of features that could be added at the bottom of a segmental feature matrix, as in (3). (1) High (H) Low (L) HL (falling) LH (rising)

(2)

Falam

Obokuitai

ná ‘breast’ nà ‘house’ nâ ‘taro’ pa ‘fish’ (Loving 1966: 25)

páa ‘mushroom’ kèe ‘leg’ sâa ‘animal’ z"u ‘bear’ (personal notes)

kuik1 ‘rock’ kuik2 ‘insect (sp.)’ kuik12 ‘lizard (sp.)’

Jingpho H Mid (M) L HL (Qingxia

(3)

Awa

a.

(Jenison and Jenison 1991: 85)

Ayutla Mixtec mO55 ‘word’ mu33 ‘delicious’ mu31 ‘to see’ nu51 ‘mother’ and Diehl 2003: 401)

H tone /á/ G+syll J H−cons K H+back K H+low K I+highL

H–H œi1Ju?1 H–L œi1ni?3 M–L œi2ni?3 L–L ti3ku?3 (Pankratz and

‘pineapple’ ‘hat’ ‘head’ ‘louse’ Pike 1967: 291)

b. HL falling tone /â/ G+syll J H−cons K H+back K H+low K I+fallingL

The Blackwell Companion to Phonology. Edited by Marc van Oostendorp, Colin J. Ewen, Elizabeth Hume, and Keren Rice. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd. DOI: 10.1002/9781444335262.wbctp0045

The Representation of Tone

2

A major representational problem was how to account for the properties of contour tones. Although falling and rising tones could be expressed less formally by combining accents (â, a) or numerals (31, 13, etc.), features such as [±falling] and [±rising] fail to capture what are known as “edge effects”: a high to low falling tone acts like a H tone with respect to what precedes, but as a L tone with respect to what follows. Similarly, a low to high rising tone acts like a L tone with respect to what precedes, but as a H tone with respect to what follows. Representations such as in (4), which were occasionally entertained, would simply be incoherent in a framework in which a segment consists of a single vertical matrix of features: (4)

a.

Falling tone /â/ G+syll H−cons H+back H+low I[+high][+low]

b. Rising tone /a/ J K K K L

G+syll J H−cons K H+back K H+low K I[+low][+high]L

In order to solve this and other representational problems, Goldsmith (1976a, 1976b) proposed a theory of “autosegmental” phonology in which segments and tones appear on separate “tiers,” as in (5). (5)

Autosegmental representations of H, L, HL, LH a. level tones a

a

H

L

b. contour tones a H

a L

L

H

By so doing, Goldsmith was able to capture the traditional intuitions implicit in the accent and numeral notations and make predictions about what should vs. should not be found in tone systems. Armed with the autosegmental framework, enormous strides were made in the analysis of tone as well as in other applications of the framework, e.g. segmental harmonies (Clements 1977, 1981; Hoberman 1988), feature geometry (Clements 1985; Clements and Hume 1995) and prosodic morphology (McCarthy 1981; McCarthy and Prince 1986). The main question to be addressed in this chapter is the extent to which the key insight of autosegmental phonology, expressed in (6), is still valid: (6)

Tones are semi-autonomous from their tone-bearing units a. b.

tones are on a separate tier, but they are linked to their tone-bearing units by association lines.

The chapter is organized as follows: in §2 we consider some of the predictions of autosegmental tonology in order to see how they have fared since the 1970s. In §3 we consider the issue of underspecification, while §4 addresses the issues of tone features, tonal geometry, and tone-bearing units. §5 evaluates potential limitations of autosegmental representations, while §6 concludes the chapter with a brief consideration of tone in constraint-based phonology. We will see that there is much

Larry M. Hyman

3

reason to hold on to the autosegmental insight even as phonological frameworks have evolved.

2

Autosegmental tonology

One can distinguish two consequences of autosegmental phonology (see also chapter 14: autosegments) as applied to tone: those that follow directly from the architecture vs. those that involve additional principles or conventions. We take up the first of these here and postpone discussion of the second until §5. There are at least three direct consequences of the two-tier autosegmental architecture proposed by Goldsmith, as in (7): (7)

2.1

a. b. c.

non-isomorphism between the tiers zero representation on one vs. the other tier stability effects

Non-isomorphism

Using tbu to represent the tone-bearing unit to which tones link (see §4), the first of these is schematized in (8). (8)

a.

tbu H

b. tbu tbu

tbu L

L

H

H

tbu tbu L

We have already seen that more than one tone can link to a single tbu, resulting in falling and rising contour tones, as in (8a). Complex contours are also attested, as exemplified in (9). (9)

a.

falling-rising:

b.

rising-falling:

be243 ba243 Nzadi mwaìn dz≤`

Iau

‘tree fern’ ‘sticking to’ ‘child’ ‘eye’

(Bateman 1990) (personal notes)

In addition, Lomongo is said to have LHLH on one syllable derived from elision (Hulstaert 1961: 164): /èmí là w( basàngì/ → [èm â w a!sàngì] ‘it’s you and I who are related’, where marks the two places where elision occurs. The second type of non-isomorphism in (8b) shows that the same tone can link to more than one tbu. What this means is that there is a potential contrast between the representations in (10a) vs. those in (10b). (10)

a. tbu

tbu

b. tbu tbu

H

H

H

tbu

tbu

tbu tbu

L

L

L

The Representation of Tone

4

The representations in (10a) were originally thought to be prohibited by the obligatory contour principle (OCP), which disallows successive identical features on the same tier (Leben 1973; Goldsmith 1976a). However, the contrasts in (10) are clearly needed. Typically, the one-to-one representations in (10a) occur when the tones belong to different morphemes (ultimately, words), while those in (10b) are expected when the H–H or L–L sequence occurs tautomorphemically. This makes sense if one considers that morphemes are spelled out separately, such that they may be concatenated with other morphemes carrying identical tones. Examples follow. As reported by Paulian (1975), Kukuya noun stems are limited to five shapes (CV, CVV, CVCV, CVVCV, CVCVCV) and four tonal “melodies”: /H, L, HL, LH/. (A fifth melody, /LHL/, occurs only on polymorphemic verb stems.) When a /H/ noun stem consists of more than one tbu, the H is multiply linked, producing representations such as in (11). (11)

a. (mà-) bágá

‘show knives’

b. (lì-) bálágá

H

‘fence’

H

Now consider the two words [má-bá] ‘they are oil palms’ and [wátá] ‘bell’. The first consists of a /H/ copular tone assigned to the toneless prefix /ma-/ followed by a /H/ stem, while the latter lacks a prefix and instead has a single /H/ linked to both tbus of the stem. The two words are pronounced identically as H–H in medial position, as in (12a), but not in prepausal position in (12b), where there is a lowering to mid tone: (12)

a. Medial

b. Prepausal

má-bá

wátá

má-ba

wata

H H

H

H M

M

If we assume that the prepausal lowering rule refers only to the tonal tier, targeting the last H “autosegment,” as in (13), the right results are obtained (// = pause boundary): (13)

H → M / __ //

One H is lowered in [má-ba], while both Hs are lowered in [wata] (Hyman 1987). While the two representations in (12a) are expected to correlate with heteromorphemic vs. tautomorphemic H tones, Odden (1982) shows that Shambala requires both representations within noun stems. A noun such as [njóká] ‘snake’, pronounced H–H, has a single /H/ linked to both syllables of the stem. The question, then, is how to analyze stems such as [ngó↓tó] ‘sheep’, pronounced H–↓H (H followed by downstepped H tone; tonal downstep is indicated by ↓). Since successive heteromorphemic /H/ tones produce downsteps, i.e. /H/ + /H/ → H–↓H, the logical move is to represent [ngó↓tó] with two successive /H/s on the monomorphemic stem. (Odden analyzes Shambala with a privative /H/ tone, which

Larry M. Hyman

5

contrasts with /Ø/ rather than /L/.) Although a rare occurrence, the tautomorphemic version of the OCP is thereby violated. Similar contrasts and OCP problems are found with respect to /L/ tones in Dioula d’Odienné (Braconnier 1982). In this language there is a rule of the form in (14), where pause (//) may be interpreted with a %L boundary tone: (14)

L → H / {//, L} __ H

As seen in (15), the rule is not iterative, as only one L autosegment can be raised: (15)

a.

/ì mà dÜ tá/



b.

/ì mà tùrù tá/



ì *ì ì *ì

mà má mà má

dÅ tá dÅ tá túrú tá túrú tá

‘you didn’t take any child’ ‘you didn’t take any oil’

However, the following examples show that not all monomorphemic L tone nouns have the same properties before H: (16)

a.

before pause sèbè tùrù kàràkà sùmàrà

b. before H sèbé túrú kàràká sùmárá

‘paper’ ‘oil’ ‘bed’ ‘soumbala (a spice)’

One solution is to give the above nouns the representations in (17a), where the OCP is violated similarly to Odden’s (1982) tautomorphemic Hs in Shambala. (17)

a. sebe LL b. sebe L

turu L turu

karaka L L karaka

L

L

sumara L

L

sumara L

Another solution in (17b) might be to posit a distinction between /L/ and toneless tbus, which later become L by default. While the OCP is violable in the ways just seen, it is important to note that the autosegmental representations provide a straightforward way of encoding the contrastive tonal properties of Kukuya, Kishambaa, and Dioula d’Odienné. Other frameworks would have to resort to ad hoc junctures or diacritics.

2.2

Zero representation

The second consequence that directly flows from the autosegmental architecture is the possibility that a tone can exist without a tbu, and vice versa. When a morpheme consists solely of a tone, it is referred to as a tonal morpheme (see also chapter 82: featural affixes). An oft-cited example is the associative (genitive)

The Representation of Tone

6

marker in Igbo (see Williamson 1986 and references cited therein), which is most transparently observed when preceded and followed by a L–L noun. As seen in (18), the H tonal morpheme is assigned to the preceding tbu in Central Igbo and to the following tbu in the Aboh dialect: (18)

a. b.

Central Igbo àgbà + ´ + èIwè → àgbá èIwè ‘jaw of monkey’ Aboh Igbo Qgbà + ´ + èIwè → Qgbà éIwè ‘jaw of monkey’

Floating tones can also be part of a lexical morpheme. In Aghem, the nouns [kRfú] ‘rat’ and [kRwó] ‘hand’ are both pronounced with H–H tones in isolation (Hyman 1979). However, as seen in (19), they have different effects on the following word (the [kR-] prefix drops out when the noun is modified): (19)

a.



kîa

H

L



kSa

‘your (sg) rat’

‘your (sg) hand’

HL L

b.



kRn

H

H





kRn

‘this rat’

‘this hand’

HL H

In (19a), the H of the stem /-fú/ spreads onto the /L/ of /kSa/ ‘your sg (class 7)’ creating a HL contour. As seen, the floating L of /-wó`/ ‘hand’ blocks the spreading. When followed by the /H/ tone demonstrative /kRn/ ‘this (class 7)’ in (19b), ‘this rat’ is realized H–H, while ‘this hand’ has a downstep conditioned by the same lexical floating L of /-wó`/, which was originally due to a historically lost syllable (cf. Proto-Bantu *-bókò ‘hand’). Corresponding to floating tones, which lack a tbu, are tbus that lack tones. Such toneless morphemes may receive their tonal specification by context or by default tone assignment (see §3). An oft-cited example of the former comes from Mende (Leben 1973, 1978): base noun

(20) a. b. c. d. e.

/H/ /L/ /HL/ /LH/ /LHL/

kF bèlè mbû mba njàhâ

‘war’ ‘trousers’ ‘owl’ ‘rice’ ‘woman’

+hu ‘in’

+ma ‘on’

kF-hú bèlè-hù mbú-hù mbà-hú njàhá-hù

kF-má bèlè-mà mbú-mà mbà-má nyàhá-mà

As seen, the tone of the two locative postpositions is the same as the last underlying tone of the nouns to which they attach. The /HL/, /LH/, and /LHL/ “melodies” are linked one-to-one to the noun + postposition constituent.

2.3

Stability effects

The third consequence of autosegmental representations, stability, is related to the second: when a tbu is deleted, its tone may still remain, and vice versa. An example of this comes from Tangale, which Kenstowicz and Kidda (1987: 230)

7

Larry M. Hyman

interpret as having an underlying /H/ vs. /Ø/ contrast. As seen in the following examples, the final vowel of a word is deleted “when it is followed by another word in a close syntactic configuration”: (21)

a. /tuuÚe/ + /lawo/ → tuuÚ H

H

b. /jaara/ + /lawo/ H

lawo → tùuÚ

→ jaar

láwò

‘child’s horse’

làwò

‘child’s arm’

H lawo → jáar

H

H

In (21a), we observe that the deletion process targets only the tbu: When the final /e/ of /tuuÚé/ is deleted, its H tone thus floats and is relinked to the first tbu of the second noun. In (21b), on the other hand, when the final /á/ of /jáará/ is deleted, its link to the H tone is also lost. Since a floating H tone does not result from vowel deletion, /lawo/ ‘child’ is realized L–L by default. This is exactly what is expected from the doubly linked H enforced by the OCP in the autosegmental representation in the input in (21b). If /jáará/ had been represented with two H autosegments, the second would have floated, and could potentially have been relinked to the following noun. While one might preclude this possibility by introducing a floating-H deletion process, the autosegmental representation makes exactly the right prediction without stipulation.

3

Tonal underspecification

Another representational issue in the study of tone concerns underspecification: the possibility that some tbus do not have a tone at all (see chapter 7: feature specification and underspecification). The example in (21a) above demonstrates how a tone can shift onto another tbu that was claimed to be underlyingly toneless. Such shifts can be quite long-distance, as seen in following examples from Giryama, which contrasts /H/ vs. /Ø/ (Philippson 1998: 321): (22)

a. ku-tsol-a b. ku-on-a

ki-revu ‘to choose a beard’ ki-révu ‘to see a beard’

/-tsol-/ ‘choose’ /-ón-/ ‘see’

H In (22a) both the infinitive /ku-tsol-a/ ‘to choose’ and the object noun /ki-revu/ ‘beard’ are underlyingly toneless (and pronounced all L by default). In (22b), on the other hand, the infinitive /ku-ón-a/ ‘to see’ consists of an underlying H-tone root, which however shifts to the penultimate syllable of the following word. Such long-distance processes, which make little sense in a segmental interpretation of tone, are best described as displacing a tone, here H, across toneless tbus. If one instead proposed that these tbus have L tones that are permeable, one would have to establish a system of, say, markedness conventions to predict when tones can

The Representation of Tone

8

cross each other. The representation of L as /Ø/, on the other hand, allows for such long-distance processes without requiring further complications. Just as in the case of single vs. multiply linked tones, underspecification potentially allows for more distinctions than fully specified representations. Limiting ourselves first to languages that have only two tone heights, (23) summarizes the possible analyses: (23)

a. b. c. d.

/H/ /H/ /L/ /H/

vs. /L/ vs. /Ø/ vs. /Ø/ vs. /L/

vs. /Ø/

As seen, the tonal contrast may be binary, with /H/ contrasting with /L/; privative, with either /H/ or /L/ contrasting with /Ø/; or both, in the case of where a ternary system of /H, L, Ø/ is required. Thus, the central question concerning any such system is to determine which of the above representations best accounts for the properties of the surface [H] vs. [L] contrast. A /H, L/ system is required when both features are phonologically active (see also chapter 4: markedness for more discussion of markedness). This is seen in Kuki-Thaadow, which has both H- and L-tone spreading (Hyman 2010): (24)

a. /kà + zóoI + lìen + thúm/ L

H

L

[kà zòoI líen th"m] ‘my three big monkeys’

H

b. /kà + kèel + góoI + gùup/ [kà kèel gòoI gûup] ‘my six thin goats’ L

L

H

L

Not only do both /H/ and /L/ spread onto a following /L/ and /H/ syllable, respectively, but the result in final position is a rising (LH) or falling (HL) contour tone. Both spreading and contour tones would be difficult to represent if one of the tones were underspecified. The same would be true of a language which has both floating H and L. While /H, L/ systems are those in which the phonology refers to both tone values, in a privative tone system only one of the two tones is phonologically active. Many Bantu languages have a /H, Ø/ system, as was seen in the Giryama examples in (22). A few have a /L, Ø/ system, such as Ruund, for which Nash (1992–94) gives the following arguments: (25)

a. b. c. d.

Hs are by far more numerous than Ls, hence “unmarked.” Floating L exists, while floating H does not. Morphological rules assign L tones, not Hs. Phonological rules manipulate L tones, not Hs.

Athabaskan languages are also known for having H- vs. L-marked tone systems (see the various studies in Hargus and Rice 2005). The closely related South American dialects, Bora (Weber and Thiesen 2000) and Miraña (Seifart 2005), have

Larry M. Hyman

9

been reported to have /L, Ø/ systems based on properties similar to (25), but with the additional OCP argument that two word-level /L/ tones cannot occur in sequence. A similar argument is made for Munduruku (Picanço 2002), but with the need to distinguish two kinds of L tone: a /L/, which both triggers and undergoes a change to H after L, vs. a /Ø/ tone, which does not trigger, but does undergo, the change to H. Other /H, L, Ø/ systems in which toneless /Ø/ receives its tone from context or by default include Margi (Pulleyblank 1986) and Kinande (Mutaka 1994). A variation on this is Ganda, which Hyman et al. (1987) analyze with underlying /H, Ø/ and surface [H, L], but with a three-way H vs. L vs. Ø contrast at an intermediate level of representation. While Ganda introduces the L at the word level (where the dissimilatory process known as Meeussen’s Rule converts /H–Hn/ to H–Ln, e.g. /a-bá-láb-á/ → [a-bá-làb-à] ‘they who see’), other /H, Ø/ systems introduce L tones at the phonological phrase level or as part of the intonational system. Still others may introduce L targets in the phonetic interpretation or perhaps not at all (Myers 1998). In Tangale, an utterance-final /Ø/ tbu is pronounced extra-low. However, when the final syllable of a /Ø–H/ sequence undergoes lowering before pause, the final syllable does not become extra-low. While Kenstowicz and Kidda (1987) account for this by a rule delinking a singleton H before pause, an analysis more in line with what happens in other tone systems, e.g. in Grassfields Bantu (Hyman and Tadadjeu 1976), would involve L-tone spreading into a prepausal H, as in (26a). (26)

a. tùuNè˚ ‘horse’

b. laIóró

L H //

‘donkey’

L H

As in Grassfields Bantu, the symbol L° means that a L tone doesn’t fall (or lower) before pause (//). Since Kenstowicz and Kidda propose an underlying /H, Ø/ system, default Ls would first have to be introduced to trigger the delinking of the prepausal H. As seen in (26b), the L-spreading rule only targets an immediately following H tbu that occurs directly before pause. Corresponding to the four different analyses of two-height tone systems in (23) are those characterizing systems with three tone heights in (27). (27)

a. b. c. d. e.

/H/ /H/ /H/ /Ø/ /H/

vs. vs. vs. vs. vs.

/M/ /Ø/ /M/ /M/ /M/

vs. vs. vs. vs. vs.

/L/ /L/ /Ø/ /L/ /L/

vs. /Ø/

As in the case of /H, L/, if the three tone heights are phonologically active, the system will most likely require underlying /H, M, L/. Examples are MDnD (Kamanda-Kola 2003; Olson 2005), where all three heights occur as floating tones and combine to form all six tonal contours (HM, HL, MH, ML, LM, LH), and Gwari (Hyman and Magaji 1970), which has H-, M-, and L-tone spreading. It is generally assumed that M is the unmarked tone of a three-height tone system (Maddieson 1978; Pulleyblank 1988). Both Akinlabi (1985) and Pulleyblank

The Representation of Tone

10

(1988) propose that Yoruba has the system in (27b), where M tone is not only unmarked, but underspecified. Among the arguments is the fact that Yoruba allows only HL and LH contours, but none involving M tone (*MH, *LM, *HM, *ML). If M = /Ø/, it follows that Ø cannot form a contour with either H or L. Another argument derives from alternations in which H or L overrides M tone. While all of these properties naturally follow from the underspecification of M, the latter can be set up as /M/, as long as other mechanisms are put in place to capture the recessiveness of M tones. Pulleyblank (2004), for instance, proposes an account within Optimality Theory in which, essentially, M tones need not be preserved in outputs to the same degree as H or L. An argument for a /H, Ø, L/ system comes from Peñoles Mixtec (Daly and Hyman 2007), which has an OCP constraint disallowing successive Ls, or Ls which are separated by any number of /Ø/ tbus. The result is the L-tone deletion rule in (28a). (28)

a. b.

L → Ø / L __ (N = nasalization) qqN dìi-ni-kwe-œi kada-kwe-œi qqN qqN ŒiuN l l ↓ Ø ‘only one of them will do each of the jobs’

As seen, this rule is responsible for the loss of the second L on /ŒìuN/ ‘work’ in (28b), which is separated from the first L by twelve toneless tbus. That the two L tones can ‘see’ each other over long distances is strong evidence for the underspecification account. Returning to the systems in (27), (27c) and (27d) seem to be rare or nonoccurring. Paster (2003) proposes /H, M, Ø/ for Leggbó, with /Ø/ alternating between H and L in the verb morphology (cf. §4). If correct, the relatively few short-vowel LH, HL, and LM contours would have to be exceptionally marked: [ègg"] ‘catfish (sp.)’, [gbppjôn] ‘afternoon’, [lèssò¯l] ‘last year’. While I am unaware of any three-height tone system being analyzed as /Ø, M, L/, the system in (27e) seems at first appropriate for Yatzachi Zapotec (Pike 1948). Although this language has only the three surface tones H, M, L (and HM and MH contours on monosyllabic words), there are two kinds of L tones: those that alternate with M vs. those that do not. Pike identifies these, respectively, as class A vs. class B. For example, a class B L will become M when followed by a M or H within or across words, while a class A L tone will not be affected. This is seen in the minimal pair in (29). (29)

a. b.

La Lb

[bìa] ‘cactus’ [bìa] ‘animal’

bìa gdlc ‘old cactus’ bca gdlc ‘old animal’

A natural interpretation would be to recognize La as /L/ and Lb as /Ø/. Where the /Ø/ tone is not realized M, i.e. when not occurring before a /M/ or /H/, it will be realized with a default L tone, hence merging with class A /L/. However, it is hard to evaluate this proposal without considering what the featural representation is of all of the tones in the language (see Hyman 2009). It is likely that Yatzachi Zapotec has two kinds of L tone because it originally contrasted four tone heights. The Lb /Ø/ could therefore have been a tone level between M and

11

Larry M. Hyman

L, which Pulleyblank (1986), interestingly, considers the underspecified tone of a four-height tone system. In all of the above discussion, we have been considering only systems that have the same number of underlying and surface tone levels (perhaps with downstep). It is also possible for languages to have fewer underlying tone levels than on the surface. For example, Kom has underlying /H/ and /L/, which spread and float, and form LH and HL contours. However, as discussed in §5 below, there is a rule H → M / L __ which results in a single H tbu being realized M on the surface. Although Welmers (1962) analyzes Kpelle with three tone heights, the surface Ms are clearly the realization of the /LH/ melody, whether realized on one tbu or more. This raises the possibility of a /H, M/ or /L, M/ system, where the two highest or two lowest tones are underlying and the third tone derived. Smith (1968) and Stahlke (1971) posit /H, M/ for Ewe dialects with H, M, and L tone, while Clements (1977) does the same for a dialect which has four tone heights: H, M, L, and ↑H (raised H). While the second-to-lowest height has been hypothesized to be the unmarked, if not underspecified, tone in a four-height tone system, it is not clear what to propose for systems with five contrastive tone heights, the maximum number that has been attested (Maddieson 1978; Edmondson and Gregersen 1992). The value of underspecification seems greatest in two-height tone systems, the ultimate case being culminative tone systems such as Somali, which allow at most one H tone per word. The status of underspecification becomes less clear in systems that have multiple tone levels and tonal contours. As will be seen in the next section, multilevel systems raise the important question of whether tones should be analyzed in terms of (binary) features.

4

Tone features, tonal geometry, and tone-bearing units

In the preceding sections we have followed the common practice of symbolizing tones as H, M, and L. It is generally assumed, however, that tones should be analyzed in terms of features and feature geometry (chapter 27: the organization of features). Take, for example, the question of tonal contours. We have already seen several pieces of evidence that these should be decomposed into level tones (or tone features), but there have been recurrent claims that at least some contour tones should be analyzed as units, i.e. they are “true” contour tones rather than being sequences of tones which happen to be linked to the same tbu (Biber 1981; Newman 1986). Although most or all of the putative cases are reinterpretable, nowhere is the intuition “contour = unit” stronger than in the study of Chinesetype tone systems. Yip (1989) tackles this issue and proposes two different tonal geometries for true contour tones vs. tone clusters: (30)

a. tonal contours tbu

H

b. tone clusters

tbu

L

L

tbu

H

H

tbu

L

L

H

The Representation of Tone

12

In (30) the tones link to a tonal node, which in turn links to the tbu. As seen, a tonal contour is one where the H and L tones link to a single tonal node, while a cluster is one where each tone has its own tonal node. This, then, nicely captures the intuition that there is something different about a Chinese 51, 35, or 214 as opposed to a LH rising or HL falling tone found in other tone systems. Thus, in Chinese it is not uncommon for a sandhi rule to replace one tonal contour by another, possibly unrelated, contour. Such is unheard of in African tone systems, where the tone clusters rarely, if ever, behave as units. Since Yip (1980) it has generally been assumed that two tone features are needed: one to make a basic tonal distinction between high and low, and the other to make a further split into higher vs. lower registers of high and low. Adopting Yip’s upper and Pulleyblank’s (1986) raised for these functions, this produces the distinctions in (31), where Ṃ represents a lower mid tone:

(31)        upper   raised
      H       +       +
      M       +       −
      Ṃ       −       +
      L       −       −

As seen, the natural classes are {H, M} vs. {Ṃ, L}, which differ in the feature upper, and {H, Ṃ} vs. {M, L}, which differ in the feature raised. The motivation for a grouping of non-contiguous levels, {4, 2} vs. {3, 1}, is seen from cases of “tonal bifurcations” in East and Southeast Asia (Matisoff 1973): if [±upper] represents the original tonal opposition, often attributable to a laryngeal distinction in syllable finals, [±raised], typically attributable to a laryngeal distinction in syllable initials, can potentially modify the original contrast and provide the four-way opposition in (31). Yip (1980), Pulleyblank (1986), and others have implemented this, or a slightly revised, interpretation of the two binary contrasts, including the assignment of the second feature to a register node in an elaboration of the tonal geometry in (30). One issue that immediately arises is how to represent the fifth tone level attested in, for example, Kam (Shidong) (Edmondson and Gregerson 1992):

(32)  Ía11 ‘thorn’   Ía22 ‘eggplant’   Ía33 ‘father’   Ía44 ‘step over’   Ía55 ‘cut down’

With this many contrasts, numbers seem to be more useful than letters to represent the tone levels (Chao 1930). Returning to the distinctions that upper and raised do make in (31), how would one express {M, Ṃ} as a natural class? Although they are contiguous on the pitch scale, they do not share a feature. Work on this topic has been inconclusive. Rules such as M → Ṃ / L __ typically lower a contiguous sequence of Ms after L and are therefore equally interpretable as M → ↓M (Hyman 1986). While the phenomenon of tonal downstep (↓) is best known for establishing H vs. ↓H contrasts in underlying two tone-height systems, M vs. ↓M and L vs. ↓L contrasts are also attested. The outstanding question here is how to represent downstepped tones. One possibility in (33a) is that the downstep in a H–↓H sequence is represented only structurally, e.g. as a floating L tone

wedged between H tones, as Clements and Ford (1979) originally proposed for Kikuyu:

(33)  a.  tbu       tbu          b.  tbu   tbu
           |         |                |     |
           H    L    H                H     H

However, it was mentioned with respect to Shambala in §2.1 that the downstep might instead be left to the phonetic interpretation of two or more H features on successive tbus, as in (33b). Such an analysis is not possible when a contrastively downstepped tone follows a non-identical tone. A H–↓M sequence would thus require an intervening floating L, as has also been proposed for H–↓L, L–↓L, and L–↓H sequences which contrast with H–L, L–L, and L–H in Bamileke-Dschang (Hyman and Tadadjeu 1976). The question still remains whether M and M ever function as a natural class. The feature specifications in (31) predict that they should not. Actually, they predict that in a three-height tone system, M tone might have either of two specifications: [+upper, −raised] or [−upper, +raised]. In fact, in a three-level tone system, there could be two phonetically equivalent M tones that realize both of these feature combinations. In a number of three-height systems where there is a lexical contrast of /H/, /M/, and /L/ on nouns, verbs instead fall into two tone classes, whose realizations vary according to inflectional features and clause type. In (34) I distinguish two such systems: Type I

(34) a. b.

Higher tone class Lower tone class

H M

M L

Type II M M

H L

In Type I languages, e.g. Day (Nougayrol 1979), the contrasting tones have a higher and lower variant in the two environments: H ~ M vs. M ~ L. The problem here is twofold. First, one has to decide what the underlying (input) tones are: /H, M/, /M, L/, or maybe even /H, L/? It all depends on whether one views the alternations as raising or lowering (or both). The second problem is determining the feature representations. One possibility would be to start with one tone as [+upper] and the other as [−upper]. The assigned grammatical feature would then be [+raised] in (34a) and [−raised] in (34b). In this analysis M would be [+upper, −raised] in (34a) vs. [−upper, +raised] in (34b). In the Type II languages, e.g. Leggbó (Hyman et al. 2002), one verb tone is /M/, which does not change, while the other tone varies between H and L. Since H and L do not share either feature value of upper and raised (and certainly not to the exclusion of M), we can either introduce another feature that they could share, such as [+extreme] (Maddieson 1971), or seek another solution. Paster (2003) posited /M/ vs. /Ø/ for Leggbó, with the morphology assigning a H or L prefix, which takes the place of /Ø/ but cannot override /M/. A third possibility is simply to represent the contrast as /H/ vs. /M/, with H → L being a morphologically conditioned rule. In the above it has been assumed without discussion that tone heights should be characterized by features that are binary rather than multivalued (Stahlke 1977)

or binary and hierarchized (Clements 1983). Anderson (1978) provides a comprehensive appraisal of the different feature proposals up to that date. If assimilations are assumed to spread individual features such as [upper] and [raised], these latter must occur on independent tiers. This then raises the question of where these tiers link up, e.g. to a laryngeal node or directly to the tbu. There have been numerous proposals in the literature (see the surveys and discussion in Yip 1995, 2002; Bao 1999; Snider 1999; and Lee 2008). The advantage of the first proposal is that tones frequently interact with other laryngeal properties. Halle and Stevens’ (1971) system in (35) – an early attempt to capture the relation between tone and obstruent voicing – fails, however, to characterize more than three tone heights:

(35)              tones          voiceless obstruents   sonorants   voiced obstruents
             H    M    L         p t k f s              m n l w j   b d g v z
    stiff    +    −    −              +                     −             −
    slack    −    −    +              −                     −             +

Clearly, one wants to account for the relation between tone and (non-modal) phonations, or the interference of obstruent voicing with tonal assimilations, but not at the expense of losing the generalization that tones are distributed by tbus. While tones have an autosegmental independence, they ultimately must be realized on something, e.g. a vowel or syllabic consonant. A language may consistently assign tones by mora, such that a CVV syllable receives two tones, or it may assign tones by syllable. Sometimes it is difficult to distinguish between the two. The complexities and corresponding representational possibilities are many. Both Clements et al. (2009) and Hyman (2009) have expressed doubts that tones should be analyzed in terms of features at all.
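The feature arithmetic just reviewed can be made concrete in a minimal sketch (Python; the symbols and function names below are purely illustrative assumptions, not part of any published proposal). It encodes the four levels of (31) as [upper]/[raised] pairs, derives each natural class as the set of levels agreeing on one feature value, and confirms that M and Ṃ agree on neither feature and so form no class:

    # A minimal sketch of the two-feature tone system in (31).
    # Levels 4..1 = H, M, Ṃ, L; features are (upper, raised).
    FEATURES = {
        "H": (True, True),    # level 4
        "M": (True, False),   # level 3
        "Ṃ": (False, True),   # level 2 (lower mid)
        "L": (False, False),  # level 1
    }

    def natural_class(feature: int, value: bool) -> set:
        """All tone levels sharing `value` for the feature (0 = upper, 1 = raised)."""
        return {t for t, f in FEATURES.items() if f[feature] == value}

    print(natural_class(0, True))   # {'H', 'M'}  = [+upper]
    print(natural_class(1, True))   # {'H', 'Ṃ'}  = [+raised], the non-contiguous {4, 2}
    # M and Ṃ agree on neither feature, so no single feature value groups them:
    print([i for i in range(2) if FEATURES["M"][i] == FEATURES["Ṃ"][i]])   # []

Replacing the encoding with Halle and Stevens’ stiff/slack values in (35) would yield at most three well-formed tone heights, since the combination [+stiff, +slack] is standardly taken to be unavailable, which is exactly the limitation noted above.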

5 Possible limits of autosegmental representation

Two general conclusions can be drawn from the preceding sections. First, there is still much merit in the autosegmental insights on tone. Second, there is much more work to do. In this section we first address real or apparent problems for autosegmental tonology. The arguments for autosegmental tonology were enumerated in §2. Some of the evidence concerned the non-isomorphism between the tones and their tbus: more than one tone can link to a tbu, in which case we get a tonal contour or cluster; conversely, one tone can link to more than one tbu. Contrasting representations such as those in (12a) and (17a) were said to be needed. The question here is whether they ever get in the way: are there cases where it is disadvantageous to represent a tautomorphemic H–H or L–L as a single tone linked to two tbus? One such awkward case would seem to arise in Kom, which has an underlying contrast between /H/ and /L/, but a surface contrast between H, M, and L. All M tones are derived by a rule that lowers a single H tbu to M when preceded by a L or initial phrase boundary (which can be represented by a %L boundary tone). This produces outputs such as the following:

(36)  a.  /fe-:am/   H–L    →   fb-:âm    ‘mat’       [M–HL]
      b.  /fe-Jwin/  H–LH   →   fb-JwíT   ‘bird’      [M–HM]
      c.  /fe-bu?/   H–HL   →   fb-bú?    ‘gorilla’   [M–H]
      d.  /fe-tam/   H–H    →   fb-tám    ‘fruit’     [M–H]

As in these examples, most nouns in Kom have a /H/-tone prefix followed by a monosyllabic stem that can have any of the four tone patterns exemplified in (36). As schematized in (37), the H of the prefix spreads onto a following L or LH stem:

(37)  a.    fb-:âm              b.    fb-Jwí¶
      %L    H   L               %L    H   L   H

As seen in the transcriptions, the output is M–HL in (37a) and M–HM in (37b). The M tones in question are conditioned by the rule that lowers a H to M after a (linked, floating, or boundary) L. In both forms the prefix is thus lowered to M; in addition, in (37b), the (delinked) stem L lowers the following H to produce the HM contour. Since the prefix lowers to M without affecting the stem, the lowering rule cannot be written as a single-tier rule, as in (38), or we would get the wrong outputs *M–ML and *M–M:

(38)  H → M / L __   (cf. the Kukuya rule in (13))
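The failure of (38) can be made concrete with a small illustrative sketch (Python; the representation and helper names are assumptions made purely for exposition). With /fe-:am/ ‘mat’ after the spreading in (37a), applying the rule on the tone tier lowers the doubly linked H everywhere, giving the starred *M–ML, whereas lowering only the first tbu linked to a post-L H yields the attested M–HL:

    # Kom /fe-:am/ 'mat' after H-spreading (37a): a %L boundary tone, one H
    # linked to both the prefix and the first stem mora, and the stem L.
    tones = ["%L", "H", "L"]                      # tone tier
    links = [("fe", 1), (":a", 1), ("am", 2)]     # (tbu, index of its tone)

    def lower_single_tier(tones):
        """Rule (38) applied on the tone tier itself: H -> M after L."""
        out = list(tones)
        for i in range(1, len(out)):
            if out[i] == "H" and out[i - 1].endswith("L"):
                out[i] = "M"
        return out

    def lower_first_linked_tbu(tones, links):
        """Alternative: lower a post-L H only on the first tbu it is linked to."""
        surface, seen = [], set()
        for tbu, idx in links:
            tone = tones[idx]
            if tone == "H" and idx not in seen and tones[idx - 1].endswith("L"):
                tone = "M"
            seen.add(idx)
            surface.append((tbu, tone))
        return surface

    low = lower_single_tier(tones)
    print([(tbu, low[idx]) for tbu, idx in links])   # fe-M :a-M am-L = *M-ML (wrong)
    print(lower_first_linked_tbu(tones, links))      # fe-M :a-H am-L = M-HL (correct)

The second function corresponds in spirit to the first of the four responses discussed immediately below.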

It is clear that the doubly linked H representation is not useful, but it is not fatally contradictory to the autosegmental approach. At least four responses come to mind. First, assuming that the tbu is the mora and that stems are bimoraic, the M can be derived by delinking a H from the first tbu (mora) that immediately follows a L. Second, one can complicate the tonal geometry to include a tonal root node, as in (39), to which the preceding L can link as a register feature, perhaps [−raised] (cf. the surveys of similar proposals in Bao 1999 and Snider 1999):

(39)       o   (tonal root node)
          / \
         L   o   (tonal node)
             |
             H

Third, assuming that floating tones persist into the output, one might argue that the M outputs are derived by phonetic implementation, where the M may be interpolated as part of the aligning of output tones with their segmental supports. A fourth possibility is to spread the L to form a LH tbu, which is phonetically interpreted as M.


In other cases the autosegmental representations may have to be seriously “fixed up.” In the Mijikenda languages Chikauma and Chirihe, the H tone of a prefix shifts to penultimate position (Cassimjee and Kisseberth 1992: 29).

(40)  a.  /ni-a-sukum-a/  →  n-a-sukum-a    ‘I am pushing’
      b.  /ú-a-sukum-a/   →  w-a-sukúm-a    ‘s/he is pushing’
              H                      H

All of the morphemes in (40a) are underlyingly toneless, including the 1st person singular subject prefix /ni-/. In (40b), the class 1 (human singular) subject prefix /ú-/ has an underlying tone, which, however, is realized on the penult. Cassimjee and Kisseberth argue that this should be accounted for by spreading and delinking of all but the last link of the H, as shown. Evidence for this analysis comes from the interaction of so-called depressor consonants (voiced obstruents) with H-tone spreading. Thus consider the forms in (41), where the verb stem /-galuk-/ ‘change’ begins with a depressor consonant:

(41)  a.  /ni-a-galuk-a/  →  n-a-galuk-a     ‘I am changing’
      b.  /ú-a-galuk-a/   →  w-á-galúk-a     ‘s/he is changing’

Again the input morphemes are all toneless in (41a). In (41b) we see that the one underlying H is realized both on the penult, where expected, but also on the marker -á-. In the final set of data, we see the toneless form in (42a) contrasting with three H tones in (42b).

(42)  a.  /ni-a-zi-fugul-a/  →  n-a-zi-fugul-a     ‘I am untying them’
      b.  /ú-a-zi-fugul-a/   →  w-á-zi-fú↓gúl-a    ‘s/he is untying them’

Cassimjee and Kisseberth’s analysis is schematized in (43), where the only underlying H tone is on the subject prefix /ú-/:

(43)  a. u-a-zi-fugul-a  →  b. w-a-zi-fugul-a  →  c. w-a-zi-fugul-a  →  d. w-á-zi-fú↓gúl-a
         H                     H  L     L             H L H L H             H L H L H

As indicated in (43a), the H of /ú-/ spreads to the penult. In (43b) a L-tone feature is assigned to each depressor consonant, in this case to the /z/ of the object prefix /zi-/ and to the stem-internal /g/ of /-fugul-/ ‘untie’. In (43c) these Ls are folded in with the multiply linked H tone. In a process which Cassimjee and Kisseberth term “fission,” these Ls divide up the one H into separate H tone domains. After this, delinking occurs in (43d), where the downstepped ↓H is conditioned by the depressor L, which is wedged between two H tbus. As in the Kom case, something needs to be added to fix the multiply linked structures derived from tone spreading. Despite the complications shown by Kom and Chikauma/Chirihe, there is no reason to abandon the autosegmental insights; rather, we can extend and build on them as languages require.


On the other hand, there were some proposals of early autosegmental tonology that had to be abandoned. Most of these, listed in (44), concerned conventions that turned out to be too strong:

(44)  a.  Every tbu must have a tone, hence automatic spreading.
      b.  Every tone must have a tbu, hence automatic contouring.
      c.  Tonal melodies should automatically map left to right.

These conventions were felt to be needed to account for the dashed associations in (45).

(45)  a.  tbu   tbu       b.  tbu          c.  tbu   tbu   tbu
           |     :             |   :            :     :     :
           H                   H    L           L     H

      (‘:’ = dashed association supplied by convention; in (c) the H links to the last two tbus)

In (45a) a toneless tbu follows a H tone tbu. As we saw in the case of the toneless locative postpositions in Mende in (20), it was proposed that the last tone on the left would automatically spread to any toneless tbus to its right. Pulleyblank (1986) argued, however, against “automatic spreading” by showing that another option was for the toneless tbu(s) to receive a default tone, e.g. L. In (45b), the opposite is the case: there are more tones than tbus. Early autosegmental tonology assumed that the leftover tones would automatically link to the last tbu, as shown. However, numerous studies argued that certain tones should be left floating, e.g. a floating L to condition downstep, while Hyman and Ngunga (1994) present data from Ciyao to argue that even a floating H does not undergo “automatic contouring.” Finally, in (45c), we see the “automatic” left-to-right mapping of a LH melody to three tbus. The proposal in early autosegmental tonology was that free tones link one-to-one from left to right. As seen, this produces a L–H–H sequence. However, an alternative of “edge-in” association is proposed by Yip (1988), by which the first tone would map to the first tbu and the last tone to the last tbu, and then intervening available tbus would receive their tones from the left, thereby producing a L–L–H sequence. Edge-in association works well for Kukuya (Paulian 1975; Hyman 1987), where /LH/ maps as L–L–H and /HL/ as H–L–L on trimoraic stems (with the internal tbu being L). On the other hand, Zoll (2003) argues that apparent directional effects may result from whether a language prefers multiple Hs or multiple Ls. Kukuya definitely prefers L sequences, violating Zoll’s Lapse constraint, over multiple Hs, which would violate her Clash constraint. As a dramatic confirmation of Zoll’s tone-specific approach, consider the realization of tonal schemas in Fore in (46), where H, L = floating tones (Scott 1990):

(46)  schema     1σ      2σ       3σ         4σ
      /L/        L       L–L      L–L–L      L–L–L–L
      /H/        H       H–H      H–H–H      H–H–H–H
      /LH/       LH      L–H      L–L–H      L–L–L–H
      /HL/       HL      H–L      H–L–L      H–L–L–L
      /LHL/              L–HL     L–H–L      L–H–L–L
      /HLH/              H–LH     H–L–H      H–L–L–H
      /LHLH/                      L–H–LH     L–H–L–H
      /HLHL/                      H–L–HL     H–L–H–L


As seen, there are no sequences of H tones if a L is present in the input. Scott (1990) summarizes the system as follows:

    The simplest system which may be hypothesized for Fore is one in which only changes between high and low tone are recognised as being contrastive. (1990: 141)

    There are no contrastive contours. (1990: 147)

    . . . tones appear to spread by increasing the domain of the L tones in preference to the spreading of H tones . . . From this it appears that H tones are to be considered as peaks of prominence or pitch targets. (1990: 147)

In other words, underlying /H/ and /L/ in Fore are constrained as follows. First, sequences of L are to be preferred to sequences of H and, second, there is no default L tone: if a word has a /H/ melody, the H will link to all of the tbus. In the next and final section we consider the question of whether non-derivational approaches to phonology need change our view of tonal representations.

6 Constraint-based tonology

The above characterization of Fore is of course a way of describing the tonal distributions from a constraint-based perspective. Within Optimality Theory (Prince and Smolensky 1993; McCarthy 2002), the constraints might be ranked as follows:

(47)  Max(Tone) >> Dep(Tone) >> *H >> *L
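Before the constraints are unpacked below, a minimal sketch (Python; the candidate generator and the tie-breaking behavior are my own illustrative assumptions, not part of any published analysis) shows how a ranking of this shape can reproduce a column of (46). Candidates satisfy Max and Dep by realizing every input tone and adding none, with leftover tones parked as a final contour, as in the short words of (46); the evaluation then minimizes H-bearing tbus before L-bearing ones:

    # An illustrative evaluator for the ranking in (47), applied to Fore melodies.
    from itertools import combinations

    def candidates(melody, n):
        """Order-preserving linkings of `melody` to n tbus that satisfy
        Max(Tone) and Dep(Tone): every tone realized, none inserted.
        Leftover tones are parked as a contour on the final tbu."""
        if len(melody) >= n:
            yield melody[:n - 1] + ["".join(melody[n - 1:])]
            return
        for cuts in combinations(range(1, n), len(melody) - 1):
            spans = zip((0,) + cuts, cuts + (n,))
            yield [melody[i] for i, (a, b) in enumerate(spans) for _ in range(b - a)]

    def optimal(melody, n):
        """*H >> *L: fewest H-bearing tbus first, then fewest L-bearing tbus."""
        return min(candidates(melody, n),
                   key=lambda c: (sum("H" in t for t in c), sum("L" in t for t in c)))

    for m in ("L", "H", "LH", "HL", "LHL", "HLH", "LHLH", "HLHL"):
        print(m.ljust(4), "-".join(optimal(list(m), 3)))   # reproduces the 3σ column

Run over three tbus, this prints exactly the 3σ column of (46); ties (e.g. /LHL/ over four tbus) are resolved here by order of generation, which is one place where a fuller analysis would have to say more.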

Max(Tone) says that any input tone has a corresponding tone in the output, while Dep(Tone) says that any output tone has a corresponding tone in the input. In other words, tones are neither deleted nor inserted. The constraint ranking *H >> *L is designed to account for the limited number of H tones in (46): each tbu linked to a H tone is counted as a violation. The ranked constraints in (47) are but one way to conceptualize the Fore tonal distributions in such a framework.

This therefore raises the question of what Optimality Theory (OT) has to offer tone and vice versa. High among the contributions of tone to our understanding of phonological systems is the family of OCP constraints that find a natural home in OT (Myers 1997). Building on his own and others’ previous work, Myers shows that the OCP functions as a conspiracy in a number of languages. In Chibemba, which contrasts /H/ and /Ø/, there is a process of bounded H-tone spreading illustrated in (48).

(48)  a.  tu-la-kak-a       ‘we tie up’          tu-la-súm-á       ‘we bite’
          tu-la-mu-kak-a    ‘we tie him up’      tu-la-mu-súm-á    ‘we bite him’
      b.  bá-lá-kak-a       ‘they tie up’        bá-la-súm-á       ‘they bite’
          bá-lá-mu-kak-a    ‘they tie him up’    bá-lá-mu-súm-á    ‘they bite him’
      c.  tu-la-bá-kák-a    ‘we tie them up’     tu-la-bá-súm-á    ‘we bite them’
          bá-la-bá-kák-a    ‘they tie them up’   bá-la-bá-súm-á    ‘they bite them’

In the above examples, all of the morphemes except /-súm-/ ‘bite’ and the plural class 2 prefixes /bá-/ ‘they’ and /-bá-/ ‘them’ are underlyingly toneless. As seen in the examples in the right column, the /H/ of /-súm-/ spreads onto the final

inflectional vowel /-a/. In (48b) the H of the subject prefix /bá-/ spreads onto the toneless present tense marker /-la-/, except in [bá-la-súm-á] ‘they bite’. This is because the spreading of H would have derived two H autosegments on successive tbus, thereby producing an OCP violation. In (48c), the /H/ of the object prefix /-bá-/ spreads onto the toneless verb root /-kak-/, as expected. Once again, the H of the subject prefix /bá-/ fails to spread, because *[-lá-bá-] would have constituted an OCP violation. However, note in the examples on the right in (48c) that, when concatenated, the object prefix /-bá-/ and the verb root /-súm-/ constitute an OCP violation, which is allowed to surface. In other words, the OCP blocks H-tone spreading when the result would be a violation, but does not condition any change on input violations. Such a situation, known as The Emergence of the Unmarked (McCarthy and Prince 1994; chapter 58: the emergence of the unmarked), is predicted by and can be elegantly modeled within OT: (49)

Max(H) >> OCP(H) >> Spread(H)

Max(H) guarantees that input Hs will be preserved in the output. OCP(H) says that there should not be Hs on successive tbus. The constraint Spread(H) is designed to say that every input H should spread onto the following tbu. The above ranking establishes that the OCP will be violated rather than deleting offending input Hs, while spreading will not occur if the result is an OCP violation.

As we have seen, tone systems allow for a multiplicity of interpretations and solutions, producing an indeterminacy that can be referred to as “the too many analyses problem.” The problem is perhaps more pronounced in OT, where it is not clear how to interpret certain potentially complex tonal processes. For example, in a rule-based framework, bounded tone shift, where each input tone is realized on the following tbu, is straightforwardly described in terms of bounded spreading and delinking. While each of the two processes is independently motivated, it is not clear what is optimized in a system where the processes have been “telescoped” into a synchronic input–output relation of local tone shift. With few exceptions, the constraints that have been proposed for tone have not been universally adopted (see Akinlabi and Mutaka 2001 and Yip 2002 for partial inventories of tonal constraints). While OT tonology is perhaps in an unsettled state, a lot depends on the tonal representations that one assumes. At least one approach, Optimal Domains Theory (Cassimjee and Kisseberth 1998), groups tones together into tonal domains rather than exploiting the autosegmental many-to-one linkings between tones and tbus (see also McCarthy’s 2004 notion of “headed spans”).

One area that seems crucial, but has not received the attention it deserves, is floating tones. Do floating tones violate Max(Tone) if they are allowed to survive as such in surface representations? To answer this one must first demonstrate that they are needed to survive in surface representations. We saw in (33a) that a floating L wedged between two linked Hs is one of the proposed representations of downstep. Since there would be no problem converting this structure into one where the floating L feature is assigned as a register feature on the following tone (and tonal sequence), the example is not conclusive. Let us then return to the tonal contrasts on Kom nouns in (36). In (50) their input and output tones are shown as they occur between the /L/ tone preposition /nè/ ‘with’ and the /H/-tone postposition /-fé/ (Hyman 2005: 317):

(50)  a.  ne fe-:am -fe    L  H–L   H   →  nè fè-:aå -fè°    ‘with a mat’      [L–L–ML–L°]
      b.  ne fe-Jwin -fe   L  H–LH  H   →  nè fè-Jwcn -fé    ‘with a bird’     [L–L–M–H]
      c.  ne fe-bu? -fe    L  H–HL  H   →  nè fè-bú? -fb     ‘with a gorilla’  [L–L–H–M]
      d.  ne fe-tam -fe    L  H–H   H   →  nè fè-tám -fé     ‘with a fruit’    [L–L–H–H]

As seen, Kom has both H- and L-tone spreading. In (50a), L-tone spreading causes the following H to delink (producing a level L° tone on the postposition), while H-tone spreading does not cause the following L to delink. With a single H tbu lowering to M after L (cf. (37)), the noun stem /-:àm/ ‘mat’ surfaces with a ML contour tone. Similarly, the stem /-Jwì5/ ‘bird’ surfaces as M, each of the two linked H tones being lowered to M by a preceding L. The important examples are (50c) and (50d). As seen, when the L of /nè/ spreads and delinks the H of the noun prefix /fé-/, the latter’s H tone floats. As a result, the floating H shields the roots /-bú?`/ ‘gorilla’ and /-tám/ ‘fruit’ from lowering to M. (The floating L of /-bú?`/ is responsible for the lowering of the postposition /-fé/ to M in (50c).) Since the H to M lowering process is a late one for which we have even entertained the possibility that it applies in phonetic implementation, it should be clear that the floating Hs must persist into the surface representations. The alternative, that L-tone spreading does not delink a following H, but rather creates LH rising tones that are simplified in phonetic implementation, is perhaps suspect, if not ad hoc. If, on the other hand, the floating tones are allowed to occur in surface representations, the solution is straightforward. Such a conclusion would also seem to have consequences for other analyses. Particularly within OT, the question arises whether a surviving, delinked floating tone should be able to satisfy Max(Tone). Data from Kuki-Thaadow suggest maybe not. In this largely monosyllabic Tibeto-Burman language, words are underlyingly /H/, /L/, or /HL/ (Hyman 2010). As seen in (51a), however, a rising tone results from L-tone spreading:

(51)  a.  /hùon + zóoI/          →  hùon zǒoI          ‘garden monkey’
      b.  /hùon + zóoI + gùup/   →  hùon zòoI gûup     ‘six garden monkeys’


In (51b) we see that Kuki-Thaadow also has H-tone spreading, and the H of /zóoI/ is delinked. This is because rising (LH) and falling (HL) tones are permitted only phrase-finally. Thus, the L of /hùon/ ‘garden’ is realized on /zóoI/ ‘monkey’, and the H of /zóoI/ is realized on /gùup/ ‘six’. Now consider the forms in (52).

(52)  a.  /hùon + zóoI + thúm/   →  hùon zóoI thúm    ‘three garden monkeys’
      b.  /hùon + zóoI + gîet/   →  hùon zóoI gîet    ‘eight garden monkeys’

As seen, L-tone spreading does not occur when the targeted H tone is followed in turn by a H or HL tone. The question is: Why not? To answer this we have but to consider what the output would have been if L-tone spreading had applied in (52). The H of /zóoI/ would necessarily have had to delink, since a LH rising tone is well formed only in phrase-final position. As a result, the underlying /H/ would, at best, have to float. The question, then, is why this would not be well formed. It cannot be that an input link must remain as such in the output, since the H does delink in (51b). Instead, what seems to allow L-tone spreading to apply in (51a) and (51b) is that the input /H/ of /zóoI/ is preserved in the output: It is realized within the LH contour on its own syllable in (51a) and as part of the HL contour on the following syllable in (51b). In other words, Max(H) is satisfied. We must suppose, therefore, that if L-tone spreading applied in (52) and delinked the /H/ of /zóoI/, the resulting floating H would not satisfy Max(H). By contrast, H-tone spreading applies whether or not the targeted L remains linked on the surface (see Hyman 2010). We therefore can establish the ranking in (53). (53)

Max(H) >> Spread(H,L) >> Max(L)

In Kuki-Thaadow all input H tones make it to the surface, vs. closely related Hakha Lai, where the opposite ranking Max(L) >> Max(H) results in all input L tones being realized on the surface (Hyman and VanBik 2004). It should be noted that the non-application of L-tone spreading in (52) results more straightforwardly from the ranked constraints in (53), rather than from a requirement on recoverability: had L-tone spreading applied in (52a), the output L–L–H sequence would have unambiguously pointed to underlying /L–H–H/, since /L–L–H/ would have been realized as L–L–LH. (The same is not true of (52b), since L-tone spreading does not apply to a following HL tone.) If (53) is correct, OT will have made a unique contribution in providing a constraint, Max(H), which is responsible for the nonapplication of L-tone spreading in (52). By contrast, a rule-based approach would have to stipulate that L-tone spreading occurs to a L–H sequence when the H is followed either by pause or by a L tone, and would not provide any motivation for why the rule does not apply when the following tone is H or HL. Whether this is an indication that OT is on the right track – and can hence offer more improvements over the pre-OT conceptions of tonology – remains to be seen. In fairness,

it must be said that the blocking of an otherwise general L-tone spreading process in (52) is unique to Kuki-Thaadow. As pointed out by Hyman (1973b: 157), we expect L–H–H to be a better target for L-tone spreading than L–H–L. Except for Kuki-Thaadow, this is true whether the H–H sequence is from two /H/ tones or from one /H/ that is doubly linked.

Where does this leave the representation of tone? To summarize the foregoing sections, although the well-formedness and mapping conventions in (44) have been superseded in subsequent work, most of the essential representational insights of autosegmental tonology are still intact. The above discussion has only touched on a small part of the vast world of tone and of the growing constraint-based literature treating tone. Whatever the outcome of ongoing OT interpretations of tone, it is likely that questions of representation will remain central.

REFERENCES

Akinlabi, Akinbiyi. 1985. Tonal underspecification and Yorùbá tone. Ph.D. dissertation, University of Ibadan.
Akinlabi, Akinbiyi & Ngessimo M. Mutaka. 2001. Tone in the infinitive in Kinande: An OT analysis. In Ngessimo M. Mutaka & Sammy B. Chumbow (eds.) Research mate in African linguistics: Focus on Cameroon, 333–356. Cologne: Rüdiger Köppe Verlag.
Anderson, Stephen R. 1978. Tone features. In Fromkin (1978), 133–175.
Bao, Zhiming. 1999. The structure of tone. New York & Oxford: Oxford University Press.
Bateman, Janet. 1990. Iau segmental and tonal phonology. Miscellaneous Studies of Indonesian and Other Languages in Indonesia 10. 29–42.
Biber, Douglas. 1981. The lexical representation of contour tones. International Journal of American Linguistics 47. 271–282.
Bogers, Koen, Harry van der Hulst & Maarten Mous (eds.) 1986. The phonological representation of suprasegmentals. Dordrecht: Foris.
Braconnier, Cassian. 1982. Le système du dioula d’Odienné, vol. 1. Abidjan: University of Abidjan.
Cassimjee, Farida & Charles W. Kisseberth. 1992. The tonology of depressor consonants: Evidence from Mijikenda and Nguni. Proceedings of the Annual Meeting, Berkeley Linguistics Society 18(2). 26–40.
Cassimjee, Farida & Charles W. Kisseberth. 1998. Optimal Domains Theory and Bantu tonology: A case study from Isixhosa and Shingazidja. In Hyman & Kisseberth (1998), 33–132.
Chao, Yuen-Ren. 1930. A system of tone-letters. Le maître phonétique 45. 24–27.
Clements, G. N. 1977. The autosegmental treatment of vowel harmony. In Wolfgang U. Dressler & Oskar E. Pfeiffer (eds.) Phonologica 1976, 111–119. Innsbruck: Innsbrucker Beiträge zur Sprachwissenschaft.
Clements, G. N. 1981. Akan vowel harmony: A non-linear analysis. Harvard Studies in Phonology 2. 108–177.
Clements, G. N. 1983. The hierarchical representation of tone features. In Ivan R. Dihoff (ed.) Current approaches to African linguistics, vol. 1, 145–176. Dordrecht: Foris.
Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225–252.
Clements, G. N. & Kevin C. Ford. 1979. Kikuyu tone shift and its synchronic consequences. Linguistic Inquiry 10. 179–210.
Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In Goldsmith (1995), 245–306.
Clements, G. N., Alexis Michaud & Cédric Patin. 2009. Do we need tone features? Paper presented at the Symposium on Tones and Features, University of Chicago.
Daly, John P. & Larry M. Hyman. 2007. On the representation of tone in Peñoles Mixtec. International Journal of American Linguistics 73. 165–208.
Edmondson, Jerold A. & Kenneth J. Gregerson. 1992. On five-level tone systems. In Shin Ja J. Hwang & William R. Merrifield (eds.) Language in context: Essays for Robert E. Longacre, 555–576. Arlington: Summer Institute of Linguistics & University of Texas at Arlington.
Fromkin, Victoria A. (ed.) 1978. Tone: A linguistic survey. New York: Academic Press.
Goldsmith, John A. 1976a. Autosegmental phonology. Ph.D. dissertation, MIT.
Goldsmith, John A. 1976b. An overview of autosegmental phonology. Linguistic Analysis 2. 23–68.
Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell.
Halle, Morris & Kenneth N. Stevens. 1971. A note on laryngeal features. MIT Research Laboratory of Electronics Quarterly Progress Report 101. 198–213.
Hargus, Sharon & Keren Rice (eds.) 2005. Athabaskan prosody. Amsterdam & Philadelphia: John Benjamins.
Hoberman, Robert D. 1988. Emphasis harmony in modern Aramaic. Language 64. 1–26.
Hulstaert, Gustaaf. 1961. Grammaire du Lɔmɔngɔ, vol. 1: Phonologie. Tervuren: Musée Royal de l’Afrique Centrale.
Hyman, Larry M. 1973a. The role of consonant types in natural tonal assimilations. In Hyman (1973b), 151–179.
Hyman, Larry M. (ed.) 1973b. Consonant types and tone. Los Angeles: University of Southern California.
Hyman, Larry M. 1979. Phonology and noun structure. In Larry M. Hyman (ed.) Aghem grammatical structure, 1–72. Los Angeles: University of Southern California.
Hyman, Larry M. 1986. The representation of multiple tone heights. In Bogers et al. (1986), 109–152.
Hyman, Larry M. 1987. Prosodic domains in Kukuya. Natural Language and Linguistic Theory 5. 311–333.
Hyman, Larry M. 2005. Initial vowel and prefix tone in Kom: Related to the Bantu Augment? In Koen Bostoen & Jacky Maniacky (eds.) Studies in African comparative linguistics with special focus on Bantu and Mande: Essays in honour of Y. Bastin and C. Grégoire, 313–341. Cologne: Rüdiger Köppe Verlag.
Hyman, Larry M. 2009. Do tones have features? Paper presented at the Symposium on Tones and Features, University of Chicago.
Hyman, Larry M. 2010. Kuki-Thaadow: An African tone system in Southeast Asia. In Franck Floricic (ed.) Essais de typologie et de linguistique générale, 31–51. Lyon: Presses de l’École Normale Supérieure.
Hyman, Larry M. & Charles W. Kisseberth (eds.) 1998. Theoretical aspects of Bantu tone. Stanford: CSLI.
Hyman, Larry M. & Daniel J. Magaji. 1970. Essentials of Gwari grammar. Ibadan: Institute of African Studies, University of Ibadan.
Hyman, Larry M. & Armindo Ngunga. 1994. On the non-universality of tonal association “conventions”: Evidence from Ciyao. Phonology 11. 25–68.
Hyman, Larry M. & Maurice Tadadjeu. 1976. Floating tones in Mbam-Nkam. In Larry M. Hyman (ed.) Studies in Bantu tonology, 57–111. Los Angeles: Department of Linguistics, University of Southern California.
Hyman, Larry M. & Kenneth VanBik. 2004. Directional rule application and output problems in Hakha Lai tone. Phonetics and Phonology, Special Issue, Language and Linguistics 5. 821–861. Taipei: Academia Sinica.
Hyman, Larry M., Francis Katamba & Livingstone Walusimbi. 1987. Luganda and the strict layer hypothesis. Phonology Yearbook 4. 87–108.
Hyman, Larry M., Heiko Narrog, Mary Paster & Imelda Udoh. 2002. Leggbó verb inflection: A semantic and phonological particle analysis. Proceedings of the Annual Meeting, Berkeley Linguistics Society 28, 399–410.
Jenison, Scott D. & Priscilla B. Jenison. 1991. Obokuitai phonology. Workpapers in Indonesian Languages and Culture 9. 69–90.
Kamanda-Kola, Roger. 2003. Phonologie et morphosyntaxe du mɔnɔ: Langue oubanguienne du Congo R. D. Munich: Lincom Europa.
Kenstowicz, Michael & Mairo Kidda. 1987. The obligatory contour principle and Tangale phonology. In David Odden (ed.) Current approaches to African linguistics, vol. 4, 223–238. Dordrecht: Foris.
Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT.
Leben, William R. 1978. The representation of tone. In Fromkin (1978), 177–219.
Lee, Seunghun Julio. 2008. Consonant–tone interactions in Optimality Theory. Ph.D. dissertation, Rutgers University.
Loving, Richard E. 1966. Awa phonemes, tonemes, and tonally differentiated allomorphs. Papers in New Guinea Linguistics 5. 23–32.
Maddieson, Ian. 1971. The inventory of features. In Ian Maddieson (ed.) Tone in generative phonology, 3–18. Ibadan: Institute of African Studies, University of Ibadan.
Maddieson, Ian. 1978. Universals of tone. In Joseph H. Greenberg, Charles A. Ferguson & Edith A. Moravcsik (eds.) Universals of human language, vol. 2: Phonology, 335–365. Stanford: Stanford University Press.
Matisoff, James A. 1973. Tonogenesis in Southeast Asia. In Hyman (1973b), 71–95.
McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12. 373–418.
McCarthy, John J. 2002. A thematic guide to Optimality Theory. Cambridge: Cambridge University Press.
McCarthy, John J. 2004. Headed spans and autosegmental spreading. Unpublished ms., University of Massachusetts, Amherst (ROA-685).
McCarthy, John J. & Alan Prince. 1986. Prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Brandeis University.
McCarthy, John J. & Alan Prince. 1994. The emergence of the unmarked: Optimality in prosodic morphology. Papers from the Annual Meeting of the North East Linguistic Society 24. 333–379.
Mutaka, Ngessimo. 1994. The lexical tonology of Kinande. Munich: Lincom Europa.
Myers, Scott. 1997. OCP effects in Optimality Theory. Natural Language and Linguistic Theory 15. 847–892.
Myers, Scott. 1998. Surface underspecification of tone in Chichewa. Phonology 15. 367–391.
Nash, Jay A. 1992–94. Underlying low tones in Ruwund. Studies in African Linguistics 23. 223–278.
Newman, Paul. 1986. Contour tones as phonemic primes in Grebo. In Bogers et al. (1986), 175–193.
Nougayrol, Pierre. 1979. Le day de bouna (Tschad), vol. 1: Eléments de description linguistique. Paris: SELAF.
Odden, David. 1982. Tonal phenomena in KiShambaa. Studies in African Linguistics 13. 177–208.
Olson, Kenneth S. 2005. The phonology of Mono. Dallas: SIL International & University of Texas at Arlington.
Pankratz, Leo & Eunice V. Pike. 1967. Phonology and morphophonemics of Ayutla Mixtec. International Journal of American Linguistics 33. 287–299.
Paster, Mary. 2003. Tone specification in Leggbo. In John M. Mugane (ed.) Linguistic typology and representation of African languages, 139–150. Trenton, NJ: Africa World Press.
Paulian, Christiane. 1975. Le kukuya: Langue teke du Congo. Paris: Société d’Études Linguistiques et Anthropologiques de France.
Philippson, Gérard. 1998. Tone reduction vs. metrical attraction in the evolution of Eastern Bantu tone systems. In Hyman & Kisseberth (1998), 315–329.
Picanço, Gessiane. 2002. Tonal polarity as phonologically conditioned allomorphy in Mundurukú. Proceedings of the Annual Meeting, Berkeley Linguistics Society 28. 237–248.
Pike, Eunice V. 1948. Problems in Zapotec tone analysis. International Journal of American Linguistics 14. 161–170.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Pulleyblank, Douglas. 1986. Tone in Lexical Phonology. Dordrecht: Reidel.
Pulleyblank, Douglas. 1988. Vocalic underspecification in Yoruba. Linguistic Inquiry 19. 233–270.
Pulleyblank, Douglas. 2004. A note on tonal markedness in Yoruba. Phonology 21. 409–425.
Qingxia, Dai & Lon Diehl. 2003. Jingpho. In Graham Thurgood & Randy J. LaPolla (eds.) The Sino-Tibetan languages, 401–408. London & New York: Routledge.
Scott, Graham. 1990. A reanalysis of Fore accent. La Trobe University Working Papers in Linguistics 3. 139–150.
Seifart, Frank. 2005. The structure and use of shape-based noun classes in Miraña (North West Amazon). Ph.D. dissertation, Radboud University Nijmegen.
Smith, Neil V. 1968. Tone in Ewe. MIT Research Laboratory of Electronics Quarterly Progress Report 88. 290–304.
Snider, Keith L. 1999. The geometry and features of tone. Dallas: Summer Institute of Linguistics & University of Texas at Arlington.
Stahlke, Herbert F. W. 1971. The noun prefix in Ewe. Studies in African Linguistics. 141–159.
Stahlke, Herbert F. W. 1977. Some problems with binary features for tone. International Journal of American Linguistics 43. 1–10.
Wang, William S.-Y. 1967. Phonological features for tone. International Journal of American Linguistics 33. 93–105.
Weber, David & Wesley Thiesen. 2000. A synopsis of Bora tone. Work Papers of the Summer Institute of Linguistics 45. Available (May 2010) at http://www.und.edu/dept/linguistics/wp/2001.htm.
Welmers, William E. 1962. The phonology of Kpelle. Journal of African Languages 1. 69–93.
Williamson, Kay. 1986. The Igbo associative and specific constructions. In Bogers et al. (1986), 195–208.
Yip, Moira. 1980. The tonal phonology of Chinese. Ph.D. dissertation, MIT.
Yip, Moira. 1988. Template morphology and the direction of association. Natural Language and Linguistic Theory 6. 551–577.
Yip, Moira. 1989. Contour tones. Phonology 6. 149–174.
Yip, Moira. 1995. Tone in East Asian languages. In Goldsmith (1995), 476–494.
Yip, Moira. 2002. Tone. Cambridge: Cambridge University Press.
Zoll, Cheryl. 2003. Optimal tone mapping. Linguistic Inquiry 34. 225–268.

46 Positional Effects in Consonant Clusters

Jongho Jun

1 Introduction

It is commonly observed across languages that phonological processes may apply only in certain (“non-prominent”) positions. In contrast, elements in other (“prominent”) positions typically resist or trigger these processes. Such prominent vs. non-prominent positional distinctions are further applicable to more general patterns of licensing and neutralization of phonological contrasts; namely, featural/segmental contrasts are likely to be licensed in prominent positions, whereas these contrasts are likely to be neutralized in non-prominent positions. Pairs of prominent and non-prominent positions include word-initial vs. non-initial, stressed vs. unstressed, root vs. affix, and prevocalic vs. preconsonantal positions. Among the positional effects involving these pairs, this chapter is mainly concerned with those involving the two members of intervocalic consonant clusters. See Beckman (1998) and Barnes (2006) for recent extensive investigations of other positional effects; see also chapter 104: root–affix asymmetries and chapter 102: category-specific effects. In intervocalic C1C2 clusters, the preconsonantal C1 is more likely to undergo phonological processes such as voicing and place assimilation, in contrast with the prevocalic C2, which is rarely subject to such processes. I will refer to this asymmetric positional effect as the C2 dominance effect. This chapter discusses empirical data patterns which display positional effects. Its focus will be on how to explain the C2 dominance effect. I will begin with a discussion of typical data patterns of the C2 dominance effect and proceed to less common, somewhat exceptional, patterns which are nonetheless crucial in comparing the previous approaches. Specifically, I will concentrate on the comparison between prosody-based approaches (Itô 1986, 1989; Cho 1990; Goldsmith 1990; Rubach 1990; Lombardi 1995, 1999, 2001b; Beckman 1998; Kabak and Idsardi 2007) and cue-based approaches (Steriade 1993, 1995, 1999, 2001, 2009; Flemming 1995; Jun 1995, 2004; Padgett 1995; Boersma 1998; Hume 1999; Côté 2000; Wilson 2001; Blevins 2003; Seo 2003). It will be shown that current evidence is mixed. Much of the commonly observed data, to be discussed in the following two sections, can be equally well accommodated in the two approaches. However, there exist less common patterns which can be understood under only one of the two

approaches. Evidence exclusively supporting the cue-based approach will be discussed in §4, and evidence for the prosody-based approach in §5.

2 C2 dominance effect

Assimilation occurs in consonant clusters when one of two neighboring consonants takes on some property of the other. I will call the former (i.e. the undergoer of assimilation) the target, and the latter (i.e. the source of the assimilating property) the trigger. With respect to the assimilation in C1C2 clusters, it is cross-linguistically true that C1 and C2 are the target and the trigger, respectively, and thus the direction of assimilation is regressive. To illustrate this C2 dominance effect, I will first consider patterns of voicing assimilation and then patterns of place assimilation. Finally, patterns of consonant deletion will be discussed. As can be seen in (1), in Catalan, Polish and Russian, voiced and voiceless obstruents are separate phonemes, and they may occur unhindered in prevocalic position. But in clusters composed of obstruents, the first constituent of the cluster must agree in voicing with the following constituent. As shown in (1.i), underlyingly voiced obstruents in C1 become voiceless before a voiceless obstruent in C2 whereas underlyingly voiceless obstruents in C1 become voiced before a voiced obstruent in C2, as in (1.ii). Thus, voicing assimilation occurs in clusters, targeting C1. This C2 dominance effect in voicing assimilation can be seen in other languages, including Dutch, Yiddish, Sanskrit, Romanian, Serbo-Croatian, Ukrainian, Hungarian, Egyptian Arabic, and Lithuanian. Steriade (1999) and Beckman (1998) provide in-depth discussion of voicing assimilation, i.e. a type of laryngeal neutralization, in these languages, emphasizing that it is normally regressive, and thus its C2 dominance effect is quite robust.

(1)  Regressive voicing assimilation
     a. Catalan (from Beckman 1998, citing Hualde 1992)
        i.  __ voiceless
            /b/  ’OoßH    ‘wolf (fem)’       ’OoppH’tit     ‘small wolf’
            /g/  H’m/:H   ‘friend (fem)’     H’m/kpH’t/t    ‘little friend’
        ii. __ voiced
            /t/  ’gatH    ‘cat (fem)’        ’gaddu’len     ‘bad cat’
            /k/  ’pDkH    ‘little (fem)’     ’pDg’du        ‘a little hard’
     b. Polish (from Rubach 1996, Beckman 1998)
        i.  __ voiceless
            /b/  za[b]a   ‘frog’             za[pk]a        ‘small frog’
            /d/  wo[d]a   ‘water’            wo[tk]a        ‘vodka’
        ii. __ voiced
            /Œ/  li[Œ]yo  ‘count’            li[dzb]a       ‘numeral’
            /k/  szla[k]-u ‘route (gen sg)’  szla[gb]ojowy  ‘war route’
     c. Russian (from Kenstowicz 1994, Kiparsky 1985, Padgett 2002)
        i.  __ voiceless
            /b/  korob-a    ‘bastbax (gen sg)’  kirop-ka     ‘bastbax (dim)’
            /d/  pod-nesti  ‘to bring (to)’     pot-pisat j  ‘to sign’
        ii. __ voiced
            /s’/ pros’-it’  ‘to ask’            proz’-ba     ‘request’
            /t/  ot-jexat j ‘to ride off’       od-brosit j  ‘to throw aside’

Consonant place assimilation occurs in clusters when one of two adjacent consonants, i.e. the target, takes on the place of articulation of the other, i.e. the trigger. Consonants differ in the likelihood of being targeted in place assimilation depending on the manner and place of articulation. Nasals (as opposed to stops and continuants) and coronals (as opposed to labials and velars) are the most likely targets of place assimilation (Mohanan 1993; Jun 1995, 2004). For instance, as shown in (2.i), in Diola Fogny only nasals can be targeted in place assimilation, and in Yakut only coronals can be targeted. However, assimilation applies only when such potential target consonants occupy C1, not C2, position. Notice in (2a.ii) that nasals such as /m/ in C2 only trigger, not undergo, assimilation. In (2b.ii), coronals in C2 resist place assimilation.

(2)  Regressive place assimilation
     a. Diola Fogny (from Sapir 1965)
        i.  /ni+gam+gam/    [nigaIgam]     ‘I judge’
        ii. /na+mi:n+mi(n/  [nami(mmi(n]   ‘he cut (with a knife)’
     b. Yakut (from Krueger 1962)
        i.  Coronals in C1 are the target
            /at+ka/     [akka]      ‘to a horse’
            /yn+kyr/    [yIkyr]     ‘sloping, aslant’
        ii. Coronals in C2 are not the target
            /tobuk+ta/  [tobukta]   ‘knee (part)’
            /silim+te/  [silimne]   ‘glue (part)’

As shown above, place assimilation is typically regressive, just as voicing assimilation is; and thus it targets C1, and its trigger is C2. This C2 dominance effect in place assimilation is very robust across languages, as shown in typological studies by Webb (1982), Mohanan (1993), and Jun (1995, 2004), and there are only a small number of exceptions. (See Jun 1995, 2004 and McCarthy 2008 for discussion of such exceptional progressive assimilation patterns.) Optional patterns of place assimilation are not different from obligatory and categorical patterns in the preponderance of regressive assimilation over progressive assimilation. In Korean, the occurrence of place assimilation is subject to speech style and rate, and thus optional (Kim-Renaud 1974; Jun 1996; Son et al. 2007; Kochetov and Pouplier 2008; Son 2008). This optional assimilation is regressive. Specifically, only coronal and labial stops and nasals in C1 are the target, and non-coronals in C2 are the trigger, as can be seen in (3a.i). Notice in (3a.ii) that coronals in C2 do not undergo the assimilation, and non-coronals in C1 cannot trigger the assimilation (see chapter 12: coronals). English casual speech assimilation is very much like Korean assimilation in the direction of assimilation and segmental characteristics of the target. The only difference is that only coronals, not labials, can be the target in English assimilation. Notice that coronals undergo assimilation only when they are in C1, not C2, as can be seen in (3b).

(3)  Casual speech place assimilation
     a. Korean1
        i.  /mit+ko/     [mitk’o] ~ [mikk’o]                 ‘believe (conn)’
            /ip+ko/      [ipk’o] ~ [ikk’o]                   ‘wear (inf)’
            /cinan pam/  [cinanbam] ~ [cinambam]             ‘last night’
        ii. /ik+ta/      [ikt’a], *[itt’a], *[ikk’a]         ‘ripe (inf)’
            /ip+ta/      [ipt’a], *[itt’a], *[ipp’a]         ‘wear (inf)’
            /paI+pota/   [paIbota], *[pambota], *[paIgota]   ‘(more) than room’
     b. English (based on Bailey 1970)
        i.  right poor   righ[p] poor
            good-bye     goo[b]bye
        ii. keep track   *kee[t] track, *keep [p]rack
            back track   *ba[t] track, *back [k]rack

This C2 dominance effect in casual speech assimilation can also be seen in other languages, such as German (Kohler 1990, 1991a, 1991b, 1992), Malay, Thai (Lodge 1986, 1992), Toba Batak (Hayes 1986), Spanish (Harris 1969), and Ponapean (Rehg and Sohl 1981); see also chapter 79: reduction on assimilation as a casual speech phenomenon.

The C2 dominance effect is not limited to assimilation in consonant clusters, but extends to consonant deletion in clusters. Consonant deletion occurs in clusters when one of two adjacent consonants, i.e. the target, deletes. It has been observed and emphasized in the literature (Côté 2000; Wilson 2001; Jun 2002; McCarthy 2008) that C1, as opposed to C2, is always the target in such deletions. For instance, as shown in (4), stops in C1, not C2, delete in Diola Fogny, West Greenlandic, and Basque.

(4)  Consonant deletion: C1 is the target
     a. Diola Fogny (Sapir 1965)
        /let+ku+jaw/     [le kujaw]      ‘they won’t go’
        /kuteb sinaIas/  [kute sinaIas]  ‘they carried the food’
        /eket bo/        [eke bo]        ‘death there’
     b. West Greenlandic (Rischel 1974; Fortescue 1980)
        /qanik+lerpoq/   [qani lerpoq]   ‘begins to approach’
        /ukijuq+tuqaq/   [ukiju tuqaq]   ‘old year’
        /anguti+kulak/   [angu kulak]    ‘he goat’
     c. Basque (Hualde 1987)
        /bat+paratu/     [ba-paratu]     ‘put one’
        /bat+kurri/      [ba-kurri]      ‘run one’
        /guk+piztu/      [gu-piztu]      ‘we light’
        /guk+kendu/      [gu-kendu]      ‘we take away’

1 In Korean, which has a three-way laryngeal contrast among obstruents, i.e. lenis, aspirated, and tense, lenis obstruents become tense after an obstruent (Post-obstruent tensing), and voiced between sonorants (Inter-sonorant voicing). See Kim-Renaud (1986) and Ahn (1998) for details of these automatic processes.


In addition to those shown above, consonant deletion with a C1 target can be found in Akan (Lombardi 2001b, citing Schachter and Fromkin 1968), Axininca person prefixes (Lombardi 2001b, citing Payne 1981), Carib (Gildea 1995), and Tunica (Wilson 2001, citing Haas 1946). Consequently, as summarized below, the cross-linguistic generalization which is common to the three phonological processes (voicing assimilation, place assimilation, and consonant deletion) is that C1 in intervocalic C1C2 clusters is the target, whereas C2 is the trigger.

(5)  The C2 dominance effect in voicing assimilation, place assimilation, and consonant deletion
     In intervocalic C1C2 clusters, C1 is a typical target, and C2 is a typical trigger.

Let us consider how to capture this C2 dominance effect. It is not difficult to derive patterns with the C2 dominance effect within the framework of previous theories such as classical generative theory, autosegmental phonology, and underspecification theory. For instance, regressive assimilation can easily be characterized by a rule of the type shown in (6a). However, progressive assimilation can also be formulated with equal complexity, as shown in (6b), and its absence, or at best rarity, in the typology would be a surprise. Representational theories such as autosegmental phonology, feature geometry, and underspecification theory would be no better in this respect than classical generative theory, as there is no plausible reason to differentiate in the complexity of the representation between the two members of a consonant cluster. (See Jun 1995 for the relevant discussion.)

(6)  Rules for consonant place assimilation
     a.  C → [αplace] / __ [C, αplace]    (regressive assimilation)
     b.  C → [αplace] / [C, αplace] __    (progressive assimilation)
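The point can be made concrete with a toy sketch (Python; the segment encoding is an assumption made purely for illustration). Rule (6a) is a one-line rewrite, and its progressive mirror image (6b) would be exactly as short, which is precisely why rule formalism alone does not predict the regressive bias:

    # A toy rendering of rule (6a): C1 copies the place of a following C.
    # Segments are (type, place) pairs; places are plain labels.

    def regressive_place(segments):
        """C -> [alpha place] / __ [C, alpha place]."""
        out = list(segments)
        for i in range(len(out) - 1):
            if out[i][0] == "C" and out[i + 1][0] == "C":   # a C1C2 cluster
                out[i] = ("C", out[i + 1][1])               # C1 takes C2's place
        return out

    # Yakut-style /at+ka/ -> [akka]: the coronal C1 becomes velar before /k/.
    word = [("V", None), ("C", "coronal"), ("C", "velar"), ("V", None)]
    print(regressive_place(word))
    # [('V', None), ('C', 'velar'), ('C', 'velar'), ('V', None)]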

Compared to the rule-based theories with a focus on the correct formulation of the language-specific phonological processes, Optimality Theory (McCarthy and Prince 1995; Prince and Smolensky 2004) is more concerned with universal patterns, thus being in a better position to explain positional effects such as the C2 dominance effect and understand the motivation behind processes showing the effect. Along with the development of Optimality Theory, there have been two major lines of approach to the analysis of the positional effects, Licensing-by-cue and Licensing-by-prosody (in Steriade’s 1999 terminology). The prosody-based approach explains positional asymmetries by reference to prosodic structure. It attributes the C2 dominance effect to the coda–onset asymmetry since C1 and C2 are usually syllabified as a coda and an onset, respectively. The C1 in the coda is likely to be targeted in the processes since the coda is phonologically non-prominent and marked. In contrast, the C2 in the onset resists these processes since the onset is phonologically prominent and unmarked. For an analysis of the data of the C2 dominance effect, either greater faithfulness to the onset and/or dispreference for the coda or (marked) properties in the coda have been called on. Specifically, within the framework of Optimality Theory, positional faithfulness constraints for the onset and/or positional markedness constraints for the coda have been adopted in the literature. (See Casali 1996, 1997 and Beckman 1998 for positional

faithfulness analyses, and Zoll 1998 for the comparison between positional faithfulness and positional markedness.) In contrast, the cue-based approach (Flemming 1995; Steriade 1995, 1999, 2001, 2009; Boersma 1998; Côté 2000; Wilson 2001; Blevins 2003; Jun 2004) explains the C2 dominance effect by relying on the perceptual factors involved. The C1 has low perceptibility since it is preconsonantal and thus may lack important perception cues, such as release bursts and C-to-V formant transitions, to laryngeal/place features and segmenthood under overlap with C2 (Lamontagne 1993; Wright 1996). In contrast, the C2 is perceptually prominent since it is prevocalic, being able to maintain such perception cues. (See Wright 2004 for a detailed discussion of perception cues.) From the assumption that change in perceptually prominent positions would cause drastic input–output difference, and thus be greatly dispreferred, whereas the comparable change in non-prominent positions would cause less difference, and thus be less dispreferred, it is derived that non-prominent C1 is more likely to be modified, i.e. targeted in phonological processes, than prominent C2. Thus, the cue-based approach attributes the C2 dominance effect to higher perceptibility of C2 over C1. The two approaches under consideration differ in whether the constraints (and rules) adopted to explain positional effects should be expressed as prosody-based or string-based (more precisely, cue-based) statements. However, the empirical data presented thus far will not distinguish the two approaches, since the preconsonantal C1 which is expected to be the target of the phonological processes in the cue-based approach is normally syllabified as a coda, which the prosody-based approach also expects to be the more likely target. In §4 and §5, I will present the data patterns for which the two approaches make distinct predictions.
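One standard way to cash out the prosody-based idea, in the spirit of the positional faithfulness analyses cited above (e.g. Beckman 1998; Lombardi 1999), can be sketched as follows (Python; the specific constraint set and its implementation are illustrative assumptions, not this chapter’s own proposal). Faithfulness to the prevocalic/onset C2 outranks a cluster agreement requirement, which outranks general faithfulness, so the optimal candidate always adjusts C1:

    # An OT-style toy evaluation for voicing in a VC1C2V cluster, under the
    # assumed ranking IDENT-ONSET(voice) >> AGREE(voice) >> IDENT(voice).
    VALUES = ("voiced", "voiceless")

    def violations(inp, cand):
        """Violation profile of a candidate (c1, c2) for input (c1, c2)."""
        (c1_in, c2_in), (c1, c2) = inp, cand
        return (
            int(c2 != c2_in),                     # IDENT-ONSET: don't change C2
            int(c1 != c2),                        # AGREE: cluster agrees in voicing
            int(c1 != c1_in) + int(c2 != c2_in),  # IDENT: general faithfulness
        )

    def winner(inp):
        cands = [(a, b) for a in VALUES for b in VALUES]
        return min(cands, key=lambda c: violations(inp, c))  # ranking = lexicographic

    print(winner(("voiced", "voiceless")))   # ('voiceless', 'voiceless'): C1 devoices
    print(winner(("voiceless", "voiced")))   # ('voiced', 'voiced'): C1 voices

Replacing the onset-keyed constraint with one keyed to the position with better perceptual cues would implement the cue-based alternative over the same candidate set, which is why the data of this section do not by themselves decide between the two approaches.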

3 Non-assimilatory neutralization

Assimilation can be considered a case of contrast neutralization. As shown in the previous section, assimilation is primarily regressive (i.e. C1 is the target), and thus potential contrasts of the assimilating feature are neutralized in C1 position. For instance, in regressive voicing assimilation, consonants in C1 with distinct voice feature values in their underlying forms have identical phonetic realizations with respect to voicing: they are voiced before a voiced segment and voiceless before a voiceless one. Non-assimilatory neutralization of voicing, as well as of other laryngeal features and place of articulation features, targets C1 and word-final position. In languages which have voicing assimilation in consonant clusters, the word-final position is the only available target of non-assimilatory neutralization, and in fact the languages with voicing assimilation mentioned in the previous section show final devoicing, as in (7) (see also chapter 69: final devoicing and final laryngeal neutralization).

(7) Final devoicing in languages with voicing assimilation

a. Catalan (Beckman 1998, citing Hualde 1992)
            __V                       __#
   i.  /t/  [ˈgatə] 'cat (fem)'       [ˈgat] 'cat (masc)'
       /k/  [ˈpɔkə] 'little (fem)'    [ˈpɔk] 'little (masc)'
   ii. /b/  [ˈʎoβə] 'wolf (fem)'      [ˈʎop] 'wolf (masc)'
       /g/  [əˈmiɣə] 'friend (fem)'   [əˈmik] 'friend (masc)'

b. Dutch (Kager 1999)
            __V                       __#
   i.  /t/  [petən] 'caps'            [pet] 'cap'
            [betən] '(we) dab'        [bet] '(I) dab'
   ii. /d/  [bedən] 'beds'            [bet] 'bed'
            [hudən] 'hats'            [hut] 'hat'

c. Polish (Rubach 1996)
            __V                       __#
       /d/  sa[d]-y 'orchards'        sa[t] 'orchard (nom sg)'
       /z/  ko[z]-a 'goat'            kó[s] 'goat (gen pl)'
       /v/  pra[v]-o 'law'            pra[f] 'law (gen pl)'

d. Russian (Hayes 1984; Kiparsky 1985; Padgett 2002)
            __V                       __#
       /b/  klub-a 'club (gen sg)'    klup 'club'
       /d/  sa[d]-a 'garden (gen sg)' sa[t] 'garden (nom sg)'
       /g/  knig-a 'book (nom sg)'    knik 'book (gen pl)'

Notice that voiced obstruents may appear before a vowel (more precisely, before a sonorant), but only neutralized, normally voiceless, obstruents are allowed to occur word-finally in these languages.2 In many other languages, such as Korean, Maidu, Greek, German, Thai, and Sanskrit, the non-assimilatory neutralization of laryngeal features such as voicing, aspiration, and glottalization occurs not only at the end of the word, but also in preconsonantal C1 position. As mentioned above, Korean has a three-way laryngeal contrast between lenis, aspirated, and tense (i.e. glottalized) obstruents. As shown in (8a), the three-way contrast can be maintained before a vowel. However, Korean obstruents are neutralized to their homorganic lenis stop counterparts in word-final and preconsonantal C1 position (see chapter 111: laryngeal contrast in korean). As discussed in Lombardi (1995), Maidu also has a three-way contrast, between voiceless, implosive, and glottalized obstruents. Just as in Korean, the three-way contrast in Maidu is maintained only syllable-initially, i.e. prevocalically, whereas laryngeally marked consonants, i.e. implosives and glottalized consonants, occur neither in C1 nor at the end of the word. Some examples of alternations displaying neutralization of glottalized stops are shown in (8b). As discussed in Steriade (1999), Ancient Greek has voiceless unaspirated, voiced, and voiceless aspirated stops. The laryngeal distinction among stops can be made before vowels (and sonorant consonants), whereas it is neutralized before obstruents. Some examples of the relevant alternations are shown in (8c). Notice that stops in Greek are not allowed to occur word-finally, and thus there is no way to observe active word-final neutralization.

2 See Steriade (1999) for a discussion of the phonetic realization of the neutralized obstruents and Blevins (2006) for an in-depth discussion of final devoicing and the related survey.

(8) Laryngeal neutralization: target = word-final and preconsonantal C1

a. Korean
                     underlying form     __V (-i nom)   __# (isolation form)   __C (-to 'too')
   i.   lenis        /kuk/ 'soup'        kugi           kuk                    kukt'o
   ii.  aspirated    /puəkʰ/ 'kitchen'   puəkʰi         puək                   puəkt'o
   iii. glottalized  /pak'/ 'outside'    pak'i          pak                    pakt'o

b. Maidu (Lombardi 1995)
   i.  /pit'/ 'defecate, feces'
       __V: pit'i k'atanoky 'manure-rolling beetle'   __#: pit   __C: pitk'ololo 'intestines'
   ii. /jep'/ 'male'
       __V: jep'im symi 'buck deer'                   __C: jepsy 'men'

c. Greek (Steriade 1999)
                     __V                           __C
   i.   voiceless    tʰorak-os 'thorax (gen sg)'   tʰorak-si 'thorax (dat pl)'
   ii.  voiced       laryng-os 'larynx (gen sg)'   larynk-si 'larynx (dat pl)'
   iii. aspirated    trikʰ-os 'hair (gen sg)'      tʰrik-si 'hair (dat pl)'

In summary, the preconsonantal C1 position is the cross-linguistically common target position of both assimilatory and non-assimilatory laryngeal neutralizations, and the word-final position is additionally the common target position of the non-assimilatory neutralization.

Let us now consider non-assimilatory place neutralization. Just like laryngeal neutralization, place neutralization targets C1 and the word-final position. In languages like Spanish, Ancient Greek, and Japanese, certain place distinctions are removed through regressive assimilation in clusters, and in a non-assimilatory way at the end of the word. As mentioned in Steriade (2001), in Ancient Greek [n] and [m] in C1 assimilate in place to the following consonant, and only [n] is allowed to occur at the end of the word. Similar neutralizations can be seen in Spanish and Japanese. In Spanish, there are three nasal phonemes, bilabial, alveolar, and palatal, which are contrastive only in prevocalic position: ca[m]a, ca[n]a, and ca[ɲ]a (Harris 1984). In intervocalic C1C2 clusters, place distinctions of nasals in C1 are neutralized since they always agree in place with the following consonant in C2, as shown in (9a.i). Moreover, only a single nasal may occur at the end of the word, although dialects differ in the exact place of articulation of the default nasal, i.e. alveolar in standard varieties and velar in non-standard varieties, as shown in (9a.ii, iii). In Japanese, in intervocalic C1C2 clusters, consonants in C1 must agree in place with those in C2, as shown in (9b.i, ii). Thus, within a word, only homorganic nasal and geminate clusters can occur. In addition, the word-final position can be occupied only by a single nasal, which is called the mora nasal and is usually transcribed as [N] or [ŋ]. This nasal is produced with no fixed oral constriction (Vance 1987: 35), and thus it is sometimes argued to be a placeless segment (for instance, McCarthy 2008: 278). (See chapter 22: consonantal place of articulation.)

(9) Place assimilation and final place neutralization

a. Spanish (from Harris 1984)
   i.   Homorganic NC clusters: ca[mp]o, ma[ns]o, ma[ŋk]o, á[ɱf]ora, ma[n̪t]o
   ii.  Final [n] in standard varieties: e[n] Chile, ta[n] frío, u[n] elefante
   iii. Final [ŋ] in non-standard varieties: e[ŋ] Chile, ta[ŋ] frío, u[ŋ] elefante

b. Japanese (Vance 1987; Yip 1991; Itô et al. 1995; Kager 1999)
   i.   Geminates: kap.pa 'a legendary being', kit.te 'stamp', gak.koo 'school'
   ii.  Homorganic nasal + obstruent: tom.bo 'dragonfly', non.do 'tranquil', kaŋ.gae 'thought'; /jom + te/ → jonde 'reading', /ʃin + te/ → ʃinde 'dying'
   iii. Final mora nasal: hoN 'book', zeN 'goodness'

There are also languages in which place neutralization occurs only in a non-assimilatory fashion. As discussed in Lombardi (2001b, citing Rice 1989), non-sonorant consonants in Slave (Athabaskan) are realized as [h] syllable-finally, as shown in (10a). Sonorants are like obstruents in having no place distinctions, although the exact final neutralization patterns are not the same: syllable-final nasals delete, nasalizing the preceding vowel, and /j/ is the only possible coda among non-nasal sonorants. Another example of non-assimilatory place neutralization is from the Kelantan dialect of Malay (Teoh 1988). Final stops /k t p/ are realized as [ʔ], and final fricatives like /s/ as [h], as shown in (10b).

(10) Final place neutralization (from Lombardi 2001b)

a. Slave
   i.   /ts'ad/   ts'ah   (cf. -ts'ade)   'hat'
   ii.  /xaz/     xah     (cf. -ɣaze)     'scar'
   iii. /seeɣ/    seeh    (cf. -zeeɣe)    'saliva'
        /tl'ux/   tl'uh   (cf. -tl'uxe)   'rope'

b. Kelantan Malay
   i.   /ikat/    ikaʔ    'tie'
   ii.  /səsak/   səsaʔ   'crowded'
   iii. /dakap/   dakaʔ   'embrace'
   iv.  /tapis/   tapih   'to filter'

In these languages, contrasts of some features other than place can be maintained finally, for instance obstruents vs. sonorants in Slave and stops vs. fricatives in Kelantan Malay. But there are also many languages, like Burmese, in which all consonants are neutralized to [ʔ]. (See Lombardi 2001b and references therein for more details.)

In summary, as stated below, the preconsonantal C1 is not only the typical target of assimilation, but also the typical target of non-assimilatory neutralization. The word-final position is an additional typical target of the neutralization.

(11) The C2 dominance effect in (assimilatory and non-assimilatory) laryngeal and place neutralization
     Preconsonantal C1 and word-final positions are common target positions.

It is usually the case that preconsonantal C1 and word-final positions form a natural class, i.e. the coda. Thus it is obvious that the prosody-based approach can provide a unified account of the positional neutralizations of C1 and word-final positions, attributing both cases to the coda–onset asymmetry. The cue-based approach also provides a somewhat unified account of the two common target positions, based on the fact that the word-final position lacks C-to-V transition cues, just as preconsonantal C1 does, and thus has lower perceptibility than the prevocalic C2 position. Consequently, the relatively common positional neutralization patterns presented thus far do not significantly distinguish the two approaches. In the remainder of this chapter, I consider the patterns which crucially distinguish them.

4 Evidence for the cue-based approach

4.1 Neutralization sites ≠ syllable positions

According to the prosody-based approach, neutralization contexts should be described in prosodic terms: for instance, "codas are the target of laryngeal neutralization." But, as discussed by Steriade (1999), there are cases in which there is no consistent connection between neutralization sites and syllable structure.

First, there are languages in which neutralization targets only C1, not word-final, positions. Languages in which laryngeal neutralization occurs only in C1, not at the end of the word, include Yiddish, Romanian, Serbo-Croatian (Lombardi 2001b: 269), French, Hungarian, and Kolami (Steriade 1999). In addition, place neutralization of nasals occurs only in C1, not at the end of the word, in Diola Fogny (Sapir 1965) and the Souletin dialect of Basque (Hualde 1993). Under the prosody-based approach, it is not clear why word-medial and final codas behave differently, and it is even less clear why medial codas should be more likely to be targeted by neutralization than word-final codas (see chapter 36: final consonants for more discussion). In contrast, in the cue-based approach, the asymmetry between preconsonantal C1 and final positions may be derived from a difference in their relative perceptibility: C1 may be considered less perceptible than word-final position, because stops in C1, which overlap with consonants in C2, are more likely to be unreleased, and thus to lack the release burst and closure duration cues, than those in word-final position.3

3 Blevins (2006: 143) discusses data from Dhaasanac, Chadic Arabic, and Maltese in which devoicing occurs exclusively at the end of the word, not in C1 position. This word-final, but not syllable-final, devoicing can be a problem not only for the prosody-based approach but also for the cue-based approach.


Second, in many languages, including Lithuanian, German, Russian, Greek, Sanskrit, Polish, Hungarian, and Kolami, laryngeal neutralization targets C1 only before obstruents, not before sonorants, regardless of its syllabic affiliation. Notice that C1 in intervocalic C1C2 clusters may be syllabified as a coda when a sonorant consonant occupies C2, in which case its voicing contrast is expected to be neutralized under the prosody-based approach. Likewise, an obstruent is usually syllabified as an onset when it occurs as the first constituent of a word-initial cluster composed of obstruents, in which case its voicing contrast is expected to be licensed under the prosody-based approach. Neither expectation is borne out in the languages just mentioned. For instance, in Lithuanian, where consonant clusters are heterosyllabic regardless of composition (e.g. /'auk.le/), the voicing of obstruents may be contrastive in the coda when they occur before sonorants, as in (12c), while the voicing of an obstruent is neutralized in the onset when it precedes another obstruent, as in (12b). Consequently, it is difficult to provide an adequate description of the neutralization contexts in syllabic terms.

(12) Lithuanian obstruents in clusters (Steriade 1999)

                            voiceless             voiced
   a. licensed onsets       sma'gus 'cheerful'    ʒmo'gus 'man'
   b. neutralized onsets    spalva 'color'        'lizdas 'nest'
   c. licensed codas        ak.muõ 'stone'        aug.muo 'growth'
   d. neutralized codas     dau[k] 'much'         —

In contrast, to explain the difference in the likelihood of neutralization between pre-obstruent and pre-sonorant positions, the cue-based approach may still rely on the perceptibility difference between the two positions. Specifically, the pre-obstruent position lacks the main contextual cues (VOT and other release-related cues), and is thus less perceptible than the pre-sonorant position, where the main cues can be maintained.

Finally, there are languages in which neutralization patterns are fixed despite variable syllabification. As discussed by Steriade (1999), in both Sanskrit and Ancient Greek syllable divisions in obstruent–sonorant clusters were variable, depending on "the dialect, the period, the literary style and the juncture separating the consonants." In contrast, there was no variation in the pattern of laryngeal neutralization: even in styles or dialects where VC1.C2V divisions were the norm for all clusters, laryngeal neutralization did not take place before heterosyllabic sonorants. This indicates that laryngeal features in these languages are neutralized irrespective of the syllabic affiliation of the clusters, and thus that the neutralization patterns cannot be adequately described in syllable terms.

The above patterns indicate that syllable positions like codas are neither a sufficient nor a necessary condition for the occurrence of neutralization. Codas are not a sufficient condition in the patterns in which only word-medial, as opposed to final, codas and pre-obstruent, as opposed to pre-sonorant, codas are neutralized. Codas are not a necessary condition in the patterns in which an obstruent onset in word-initial clusters is neutralized. And codas are of no use at all in describing the Sanskrit and Greek patterns, with their variable syllabification but fixed neutralization. Consequently, all these patterns can be taken as evidence against the prosody-based approach and in favor of the cue-based approach.

4.2 Apical neutralization

As mentioned above, consonant place assimilation is predominantly regressive across languages. This strong cross-linguistic tendency toward regressive assimilation has been considered a subset case of the C2 dominance effect in place neutralization, as stated in (11) and repeated below as (13).

(13) The C2 dominance effect in (assimilatory and non-assimilatory) laryngeal and place neutralization
     Preconsonantal C1 and word-final positions are common target positions.

The place neutralization typology displaying this C2 dominance effect provides an important basis for the prosody-based approach's analysis of the coda–onset asymmetry. However, Steriade (2001) notes that this C2 dominance effect in place neutralization holds only when contrasts between labials, alveolars, velars, and palato-alveolars (referred to by Steriade as major C-Place contrasts) are neutralized. In the case of the neutralization of contrasts between apico-alveolars and retroflexes (referred to by Steriade as apical contrasts), the completely opposite tendency is observed: postconsonantal C2 and word-initial positions are typically targeted. Let us consider first assimilatory neutralization of apical contrasts and then non-assimilatory neutralization.

First, apical assimilation is typically progressive, as can be seen in (14). Notice that in both Sanskrit and Urali, postconsonantal C2 alveolars in the underlying form are realized as retroflexes after post-vocalic C1 retroflexes. Thus, apical assimilation targets C2, not C1, which is the opposite of major C-Place assimilation.

(14) (Patterns of word-internal) apical assimilation (Steriade 2001)

a. Sanskrit
   /av-iʂ-dʱi/   [aviɖɖʱi]    'favor'
   /ʂaɳ-naːm/    [ʂaɳɳaːm]    'of six'
   /jyotiʂ-su/   [jyotiʂʂu]   'in planets'

b. Urali
   /eɳ-nuuru/    [eɳɳuuru]    '8-hundred = 800'
   /keɖ-t-a-/    [keʈʈa-]     'spoil (intrans)'

This C1 dominance effect in apical assimilation seems as robust as the C2 dominance effect in major C-Place assimilation, although the total number of cases of apical assimilation is relatively small. Based on a typological survey of apical assimilation, Steriade (2001) reaches the conclusion that apical assimilation in clusters is predominantly progressive, and that there are no exceptions when the clusters belong to the same word and the two constituents of the clusters are identical in stricture (e.g. both stops).

Non-assimilatory neutralization of the apical contrast also targets the postconsonantal C2, possibly along with word-initial position. For instance, in Murinbata, alveolars and retroflexes contrast in both C1 and word-final position. In contrast, the apical contrast is neutralized in postconsonantal C2: apicals in C2 are always realized as alveolars after non-apicals, and as homorganic with an apical C1. Thus, the non-assimilatory neutralization of the apical contrast targets C2 in Murinbata. Miriwung is just like Murinbata, in that the apical contrast is maintained in C1 but neutralized in C2. Apical neutralization additionally occurs at the beginning of the word, where only alveolars, not retroflexes, are allowed to occur. Consequently, the typical targets of apical neutralization may be summarized as below:

(15) The C1 dominance effect in (assimilatory and non-assimilatory) apical neutralization
     Postconsonantal C2 and word-initial positions are common target positions.

This is therefore the complete opposite of the C2 dominance effect in the neutralization patterns of laryngeal features and major C-Places of articulation. Given that C2 is usually an onset, the prosody-based approach cannot explain the C1 dominance effect in apical neutralization in the same way as the C2 dominance effect summarized in (13). In the cue-based approach, by contrast, the C1 dominance effect may be derived naturally from the perceptual fact that cues to the apical distinction lie primarily in the V-to-C, not the C-to-V, transitions, so that C1 is more prominent than C2 in the perception of the apical contrast. As discussed by Steriade (2001), citing Ladefoged and Maddieson (1986), Dave (1976), Stevens and Blumstein (1975), and Bhat (1973), the formant transitions into retroflexes in C1 show distinctively low F3 and F4 values compared to those of denti-alveolars, whereas the transitions out of retroflexes in C2 are not distinct from those of denti-alveolars. This acoustic asymmetry originates in the characteristic articulation of retroflexes, in which the tongue tip moves forward during the closure and releases from the same constriction location as apico-alveolars. Consequently, both the C2 dominance effect in major C-Place assimilation and the C1 dominance effect in apical assimilation may be derived from the main argument of the cue-based approach, i.e. that neutralization targets positions which lack prominent perceptual cues to the contrasts in question. The apical neutralization typology may thus constitute very strong evidence for the cue-based approach, by showing a case of contrast-specific neutralization. (See Zhang's 2004 discussion of contour tone typology for an additional case of contrast-specific licensing/neutralization.)
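The contrast-specific logic can be stated compactly. The sketch below is my own toy formalization of the cue-based claim, not a model from the literature; the cue inventories are crude stand-ins for the phonetic detail discussed above. A contrast is licensed in a position only if the position supplies the transitions that carry its main cues, so the same machinery yields the C2 dominance effect in (13) for laryngeal/major place contrasts and the C1 dominance effect in (15) for apical contrasts.

```python
# Toy model of Licensing-by-cue: neutralization is predicted wherever a
# position fails to supply the transitions carrying a contrast's main cues.
# Cue assignments paraphrase the text: laryngeal and major C-Place contrasts
# depend on C-to-V cues (release burst, VOT), apical contrasts on V-to-C
# formant transitions (low F3/F4 into a retroflex).

CUES_NEEDED = {
    'laryngeal':   {'C-to-V'},
    'major place': {'C-to-V'},
    'apical':      {'V-to-C'},
}

CUES_AVAILABLE = {
    'prevocalic C2':     {'C-to-V'},  # followed by V, preceded by C
    'preconsonantal C1': {'V-to-C'},  # preceded by V, often unreleased
    'word-initial':      {'C-to-V'},  # no preceding vowel
    'word-final':        {'V-to-C'},  # no following vowel
}

def licensed(contrast, position):
    return CUES_NEEDED[contrast] <= CUES_AVAILABLE[position]

for contrast in CUES_NEEDED:
    targets = [p for p in CUES_AVAILABLE if not licensed(contrast, p)]
    print(f'{contrast}: neutralization expected in {targets}')
# laryngeal / major place -> preconsonantal C1, word-final   (= (13))
# apical                  -> prevocalic C2, word-initial     (= (15))
```

A finer-grained version would distinguish pre-sonorant from pre-obstruent C1 (the former retaining release-related cues), along the lines of the Lithuanian-type facts in §4.1.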

5 Evidence for the prosody-based approach

5.1 Obstruent–sonorant clusters in Catalan

The cue-based approach provides a string-based, not prosody-based, account of positional neutralizations. If two sequences are segmentally identical, and thus not significantly different in the perceptibility involved, the cue-based approach expects the two to behave alike with respect to neutralization even when they have different prosodic structures. Suppose that in obstruent–sonorant C1C2 clusters, the C1 obstruent may be syllabified either as an onset or as a coda, depending on the environment. The cue-based approach expects the C1 obstruent to behave invariably with respect to positional neutralization, regardless of whether it is an onset or a coda. If, as predicted by the prosody-based approach, the C1 is licensed when syllabified as an onset, but neutralized when syllabified as a coda, this raises a serious problem for the cue-based approach. Wheeler (2005) shows that such a pattern exists in Catalan.

As shown in (16a, b), in Catalan, stops other than dentals are contrastive in voicing before liquids as well as glides, while sibilants are contrastive only before glides, when the obstruent–sonorant sequences are within a word. However, sibilants are always voiced before liquids, as in (17a), and dentals are always voiced before laterals, as in (17b). There are no word-internal sibilant–liquid or dental–lateral sequences with distinct voicing values. In summary, in Catalan, although the voicing contrast of the C1 obstruents in obstruent–sonorant sequences is in general licensed, sibilants and dentals are neutralized in voicing before liquids and laterals, respectively. In the cue-based approach, it is difficult to explain why some obstruent–sonorant sequences behave differently in voicing neutralization from other obstruent–sonorant sequences. In contrast, Wheeler (2005) argues that all the obstruents with contrastive voicing in (16) occur in the onset, whereas all those with neutralized voicing in (17) occur in the coda.4 Thus, the prosody-based approach can easily describe the voicing neutralization of obstruents in obstruent–sonorant sequences in Catalan.

(16) Contrastive obstruent voicing in Catalan obstruent–sonorant clusters

a. non-sibilant / __ liquid, glide
              voiceless                       voiced
   p vs. b    prou [ˈpɾɔw] 'enough'           brou [ˈbɾɔw] ~ [ˈβɾɔw] 'broth'
              sempre [ˈsem.pɾə] 'always'      sembra [ˈsem.bɾə] 'sows'
   t vs. d    truita [ˈtɾuj.tə] 'trout'       druida [ˈdɾuj.ðə] ~ [ˈðɾuj.ðə] 'druid'
   k vs. g    creu [ˈkɾew] 'cross'            greu [ˈgɾew] ~ [ˈɣɾew] 'serious'
              classe [ˈklasə] 'class'         glaça [ˈglasə] ~ [ˈɣlasə] 'freezes'
              qualla [ˈkwaʎə] 'congeals'      guatlla [ˈgwaʎ.ʎə] ~ [ˈɣwaʎ.ʎə] 'quail'

b. sibilant / __ glide
              voiceless                       voiced
              gràcia [ˈgɾa.sjə] 'humor'       afàsia [əˈfa.zjə] 'aphasia'

(17) Neutralized obstruent voicing in Catalan obstruent–sonorant clusters

a. sibilant / __ liquid
   voiced: legislar [z.l] 'to legislate', Israel [z.r] ~ [r] 'Israel'
   voiceless: none

b. dental / __ lateral
   voiced: atleta [d.l] 'athlete'
   voiceless: none

4 Although Wheeler does not provide details of syllabification in Catalan, Blevins (2003: 399), who also argues that voicing neutralization in Catalan occurs syllable-finally, suggests that Catalan syllabification is quite predictable, stating that "Catalan syllabification judgments were entirely consistent across speakers."

Further, comparable active neutralization, which is even more difficult to explain within the cue-based approach, can be observed in obstruent–sonorant sequences occurring across word boundaries. In Catalan, word-final obstruents assimilate in voicing to following consonants, including liquids. Word-final stops, which are voiceless before an initial vowel of the following word (18.ii), become voiced before an initial sonorant (18.i). Thus the C1 obstruents here are neutralized with respect to voicing. This final obstruent neutralization would not be expected within the cue-based approach, since the same obstruent–sonorant sequences within a word are not subject to voicing neutralization, as shown in (16a). So this is a case in which identical sequences behave differently with respect to neutralization, depending on where they occur. According to Wheeler (2005), the only difference between the obstruent–sonorant sequences of (16a) and (18.i) lies in syllabic affiliation: the C1 obstruents in (16a) are onsets, whereas those in (18.i) are codas. Thus, only the prosody-based, not the cue-based, approach can explain the voicing patterns of obstruents in Catalan.

(18) Neutralized obstruent voicing in Catalan obstruent–sonorant clusters

a. non-sibilant / __ #liquid
   i.  poc lògic /ˈpɔk#ˈlɔʒik/ [ˈpɔg.ˈlɔʒik] 'not very logical'
   ii. poc amable [ˈpɔ.kəˈmab.blə] 'not very friendly'

b. non-sibilant / __ #glide
   i.  poc whisky /ˈpɔk#ˈwiski/ [ˈpɔg.ˈwiski] 'not much whisky'
   ii. poc usual [ˈpɔ.kuˈzwal] 'not very usual'

5.2 Obstruent–sonorant clusters in Eastern Andalusian Spanish

Eastern Andalusian Spanish provides an additional case in which sequences with similar perceptual cues behave differently in contrast distribution and neutralizing processes, thus posing a problem for the cue-based approach. The discussion in this section is mostly based on Gerfen (2001). As shown in (19), in Standard Peninsular Spanish, /s/ is allowed in preconsonantal and word-final positions. In Eastern Andalusian Spanish, by contrast, /s/ is not allowed to occur in those positions. As shown in (20a), word-final /s/ deletes, aspirating the preceding vowel. Preconsonantal C1 /s/ also deletes, but the deletion is accompanied by gemination of the following C2 consonant.

(19) /s/ in Standard Peninsular Spanish (Gerfen 2001)

a. word-final coda:   [ga.fas]   'eyeglasses'
b. pre-C coda:        [gas.ko]   'helmet'

(20) /s/ in Eastern Andalusian Spanish (Gerfen 2001)

a. word-final coda:   /ganas/   [ga.nah]   'desire'

b. pre-C coda
          SPS           EAS
   i.    [bos.ke]      [bohk.ke]     'forest'
   ii.   [es.la.βo]    [ehl.la.βo]   'Slavic'

The prosody-based approach can easily attribute the distributional restrictions on /s/ in Eastern Andalusian Spanish to a syllable coda restriction. In contrast, the Licensing-by-cue approach can adopt a string-based statement that /s/ is allowed to occur only before a vowel in Eastern Andalusian Spanish. But, as argued by Gerfen (2001), this is hard to justify, since sibilants such as /s/ have very salient internal cues in the frication noise, and thus C-to-V transition cues would not be important in the perception of the presence or absence of /s/, unlike in the perception of laryngeal and place contrasts. Thus, within the cue-based approach, there is no plausible phonetic motivation for such a string-based requirement, i.e. that a vowel must be present immediately after /s/. Gerfen discusses two more related patterns which are even more problematic for the cue-based approach. In Eastern Andalusian Spanish, C1 coda deletion (followed by gemination of C2) is not limited to /s/, but extends to all obstruents, as shown in (21) (see also chapter 38: the representation of sc clusters).

(21) Aspiration of all obstruent codas in Eastern Andalusian Spanish (Gerfen 2001)

        SPS           EAS
        [ap.to]       [aht.to]                    'apt'
        [piθ.ka]      [pihk.ka]                   'pinch, small amount'
        [ak.θjon]     [ahθ.θjon] ([ahs.sjon])     'action'
        [ob.tu.so]    [oht.tu.so]                 'obtuse'

Notice that the C1 obstruents in obstruent–liquid sequences, word-initially or medially, are not subject to this deletion, as shown in (22). In the prosody-based approach, this difference can be attributed to the coda–onset asymmetry: stop + liquid clusters are syllabified as onsets, thus resisting coda deletion, whereas the obstruent clusters shown in (21) are syllabified as coda–onset sequences, and thus their C1 obstruents in the coda are subject to the deletion. In contrast, to explain this difference between obstruent–obstruent and stop–liquid sequences, the cue-based approach would rely on an asymmetry in perceptibility between pre-obstruent and pre-sonorant positions: specifically, liquids in C2 may provide richer perceptual cues to a preceding obstruent than obstruents in C2 do.

(22) Stop + liquid clusters in Eastern Andalusian Spanish (Gerfen 2001)

a. Initial stop + liquid clusters
   i. [klaro] 'clear'   ii. [grado] 'grade'   iii. [plano] 'flat'   iv. [trapo] 'rag'

b. Word-internal obstruent + liquid clusters
   i. [a.klara] 's/he/it clears up'   ii. [a.grada] 's/he/it pleases'   iii. [a.plana] 's/he/it flattens'   iv. [a.trapa] 's/he/it traps'

But it is still unclear why /sl/, an obstruent–liquid sequence, does not behave like the stop–liquid sequences. As shown in (20b.ii), /s/ is subject to coda deletion before /l/. Also, as shown below, /tl/, a stop–liquid sequence, behaves like /sl/, not like /kl/ and /gl/. If rich perceptual cues to C1 before a liquid can guarantee the surface realization of stop–liquid sequences like /kl/, /gr/, /pl/, and /tr/, it is difficult to understand why /tl/ and /sl/ cannot surface as such.

(23) /tl/ clusters in Eastern Andalusian Spanish (Gerfen 2001)

   /atleta/   [ahl.le.ta]   'athlete'

To summarize, both Catalan and Eastern Andalusian Spanish show asymmetric patterns in which only a subset of obstruent–sonorant sequences is subject to the distributional restrictions and related alternations targeting an obstruent in C1. For the analysis of these patterns, the prosody-based approach can still attribute the difference among the obstruent–sonorant sequences to the coda–onset asymmetry, but an equally plausible, string-based, solution seems to be unavailable in the cue-based approach. See Kabak and Idsardi (2007) for additional support for the prosody-based, as opposed to the cue-based, approach: they investigated Korean listeners' perception of non-native sequences, and argue that only syllable-based, not string-based, phonotactic constraints can explain their experimental results.

6 Conclusion

From the literature on phonological typology, we know that it is common for phonological processes not to apply in all positions, and more generally for phonological contrasts not to be licensed in all positions. Such positional effects are characterized by reference to certain pairs of prominent and non-prominent positions, such as word-initial vs. non-initial, stressed vs. unstressed, root vs. affix, and prevocalic C2 vs. preconsonantal C1 positions. Among these, this chapter has been mainly concerned with the C2 dominance effect, in which preconsonantal C1 in intervocalic C1C2 clusters is likely to be targeted for neutralization, whereas prevocalic C2 is likely to trigger or resist such neutralization. This C2 dominance effect is quite robust in laryngeal and place neutralization and in consonant deletion. I have looked at the relevant data patterns, ranging from well known and common to less known and relatively exceptional, while comparing the cue-based and prosody-based approaches. Common data patterns can be explained equally well by both approaches. In contrast, less common or somewhat exceptional patterns may distinguish the two approaches. However, the evidence so far is mixed. Not only the neutralization patterns in which there is no connection between neutralization sites and syllable positions, but also the apical neutralization patterns, may form evidence against the prosody-based approach. In contrast, the obstruent–sonorant sequences in which the voicing of a C1 obstruent may be licensed or neutralized, arguably depending on syllable structure, may form evidence against the cue-based approach. Any adequate theory of positional effects, including the C2 dominance effect, should be able to account for all these patterns, common or exceptional.

REFERENCES

Ahn, Sang-Cheol. 1998. An introduction to Korean phonology. Seoul: Hanshin.
Aronoff, Mark & Richard T. Oehrle (eds.) 1984. Language sound structure. Cambridge, MA: MIT Press.
Bailey, Charles-James N. 1970. Toward specifying constraints on phonological metathesis. Linguistic Inquiry 1. 347–349.
Barnes, Jonathan. 2006. Strength and weakness at the interface: Positional neutralization in phonetics and phonology. Berlin & New York: Mouton de Gruyter.
Beckman, Jill N. 1998. Positional faithfulness. Ph.D. dissertation, University of Massachusetts, Amherst.
Bhat, D. N. S. 1973. Retroflexion: An areal feature. Working Papers on Language Universals 13. 27–58.
Blevins, Juliette. 2003. The independent nature of phonotactic constraints: An alternative to syllable-based approaches. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 375–403. Cambridge: Cambridge University Press.
Blevins, Juliette. 2006. A theoretical synopsis of Evolutionary Phonology. Theoretical Linguistics 32. 117–166.
Boersma, Paul. 1998. Functional phonology: Formalizing the interactions between articulatory and perceptual drives. The Hague: Holland Academic Graphics.
Casali, Roderic F. 1996. Resolving hiatus. Ph.D. dissertation, University of California, Los Angeles.
Casali, Roderic F. 1997. Vowel elision in hiatus context: Which vowel goes? Language 73. 493–533.
Cho, Young-Mee Yu. 1990. Parameters of consonantal assimilation. Ph.D. dissertation, Stanford University.
Côté, Marie-Hélène. 2000. Consonant cluster phonotactics: A perceptual approach. Ph.D. dissertation, MIT.
Dave, R. 1976. Retroflex and dental consonants in Gujarati: A palatographic and acoustic study. Annual Report of the Institute of Phonetics, University of Copenhagen (ARIPUC) 11. 27–155.
Flemming, Edward. 1995. Auditory representations in phonology. Ph.D. dissertation, University of California, Los Angeles.
Fortescue, Michael. 1980. Affix ordering in West Greenlandic derivational processes. International Journal of American Linguistics 46. 259–278.
Gerfen, Chip. 2001. A critical view of licensing by cue: Codas and obstruents in Eastern Andalusian Spanish. In Lombardi (2001a), 183–205.
Gildea, Spike. 1995. A comparative description of syllable reduction in the Cariban language family. International Journal of American Linguistics 61. 62–102.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Haas, Mary. 1946. A grammatical sketch of Tunica. In Cornelius Osgood (ed.) Linguistic structures of native America, 337–366. New York: Viking Fund.
Harris, James W. 1969. Spanish phonology. Cambridge, MA: MIT Press.
Harris, James W. 1984. Autosegmental Phonology, Lexical Phonology, and Spanish nasals. In Aronoff & Oehrle (1984), 67–82.
Hayes, Bruce. 1984. The phonetics and phonology of Russian voicing assimilation. In Aronoff & Oehrle (1984), 318–328.
Hayes, Bruce. 1986. Assimilation as spreading in Toba Batak. Linguistic Inquiry 17. 467–499.
Hayes, Bruce, Robert Kirchner & Donca Steriade (eds.) 2004. Phonetically based phonology. Cambridge: Cambridge University Press.
Hualde, José Ignacio. 1987. On Basque affricates. Proceedings of the West Coast Conference on Formal Linguistics 6. 77–89.
Hualde, José Ignacio. 1992. Catalan. London & New York: Routledge.
Hualde, José Ignacio. 1993. Topics in Souletin phonology. In José Ignacio Hualde & Jon Ortiz de Urbina (eds.) Generative studies in Basque linguistics, 289–327. Amsterdam & Philadelphia: John Benjamins.
Hume, Elizabeth. 1999. The role of perceptibility in consonant/consonant metathesis. Proceedings of the West Coast Conference on Formal Linguistics 17. 293–307.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts, Amherst.
Itô, Junko. 1989. A prosodic theory of epenthesis. Natural Language and Linguistic Theory 7. 217–259.
Itô, Junko, Armin Mester & Jaye Padgett. 1995. Licensing and underspecification in Optimality Theory. Linguistic Inquiry 26. 571–613.
Jun, Jongho. 1995. Perceptual and articulatory factors in place assimilation: An Optimality Theoretic approach. Ph.D. dissertation, University of California, Los Angeles.
Jun, Jongho. 1996. Place assimilation is not the result of gestural overlap: Evidence from Korean and English. Phonology 13. 377–407.
Jun, Jongho. 2002. Positional faithfulness, sympathy and inferred input. Unpublished ms., Yeungnam University, Daegu, Korea. http://ling.snu.ac.kr/jun.
Jun, Jongho. 2004. Place assimilation. In Hayes et al. (2004), 58–86.
Kabak, Baris & William J. Idsardi. 2007. Perceptual distortions in the adaptation of English consonant clusters: Syllable structure or consonantal contact constraints. Language and Speech 50. 23–52.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Kim-Renaud, Young-Key. 1974. Korean consonantal phonology. Ph.D. dissertation, University of Hawaii.
Kim-Renaud, Young-Key. 1986. Studies in Korean linguistics. Seoul: Hanshin.
Kiparsky, Paul. 1985. Some consequences of Lexical Phonology. Phonology Yearbook 2. 85–138.
Kochetov, Alexei & Marianne Pouplier. 2008. Phonetic variability and grammatical knowledge: An articulatory study of Korean place assimilation. Phonology 25. 399–431.
Kohler, Klaus J. 1990. Segmental reduction in connected speech in German: Phonological facts and phonetic explanations. In W. J. Hardcastle & A. Marchal (eds.) Speech production and speech modelling, 69–92. Dordrecht: Kluwer.
Kohler, Klaus J. 1991a. The phonetics/phonology issue in the study of articulatory reduction. Phonetica 48. 180–192.
Kohler, Klaus J. 1991b. The organization of speech production: Clues from the study of reduction processes. Proceedings of the 12th International Congress of Phonetic Sciences, vol. 1, 102–106. Aix-en-Provence: Université de Provence.
Kohler, Klaus J. 1992. Gestural reorganization in connected speech: A functional viewpoint on "articulatory phonology." Phonetica 49. 205–211.
Krueger, John R. 1962. Yakut manual. Bloomington: Indiana University.
Ladefoged, Peter & Ian Maddieson. 1986. Some of the sounds of the world's languages. UCLA Working Papers in Phonetics 64. 1–137.
Lamontagne, Greg. 1993. Syllabification and consonant cooccurrence conditions. Ph.D. dissertation, University of Massachusetts, Amherst.
Lodge, Ken. 1986. Allegro rules in colloquial Thai: Some thoughts on process phonology. Journal of Linguistics 22. 331–354.
Lodge, Ken. 1992. Assimilation, deletion paths and underspecification. Journal of Linguistics 28. 13–52.
Lombardi, Linda. 1995. Laryngeal neutralization and syllable wellformedness. Natural Language and Linguistic Theory 13. 39–74.
Lombardi, Linda. 1999. Positional faithfulness and voicing assimilation in Optimality Theory. Natural Language and Linguistic Theory 17. 267–302.
Lombardi, Linda (ed.) 2001a. Segmental phonology in Optimality Theory: Constraints and representations. Cambridge: Cambridge University Press.
Lombardi, Linda. 2001b. Why Place and Voice are different: Constraint-specific alternations in Optimality Theory. In Lombardi (2001a), 13–45.
McCarthy, John J. 2008. The gradual path to cluster simplification. Phonology 25. 271–319.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
Mohanan, K. P. 1993. Fields of attraction in phonology. In John A. Goldsmith (ed.) The last phonological rule: Reflections on constraints and derivations, 61–116. Chicago & London: University of Chicago Press.
Padgett, Jaye. 1995. Partial class behavior and nasal place assimilation. In Keiichiro Suzuki & Dirk Elzinga (eds.) Proceedings of the 1995 Southwestern Workshop on Optimality Theory (SWOT), 145–183. Tucson: Department of Linguistics, University of Arizona (ROA-113).
Padgett, Jaye. 2002. Russian voicing assimilation, final devoicing, and the problem of [v]. Unpublished ms., University of California, Santa Cruz (ROA-528).
Payne, David L. 1981. The phonology and morphology of Axininca Campa. Austin, TX: Summer Institute of Linguistics.
Prince, Alan & Paul Smolensky. 2004. Optimality Theory: Constraint interaction in generative grammar. Malden, MA & Oxford: Blackwell.
Rehg, Kenneth L. & Damien G. Sohl. 1981. Ponapean reference grammar. Honolulu: University of Hawai'i Press.
Rice, Keren. 1989. A grammar of Slave. Berlin & New York: Mouton de Gruyter.
Rischel, J. 1974. Topics in Greenlandic phonology: Regularities underlying the phonetic appearance of wordforms in a polysynthetic language. Copenhagen: Akademisk Forlag.
Rubach, Jerzy. 1990. Final devoicing and cyclic syllabification in German. Linguistic Inquiry 21. 79–94.
Rubach, Jerzy. 1996. Nonsyllabic analysis of voice assimilation in Polish. Linguistic Inquiry 27. 69–110.
Sapir, J. David. 1965. A grammar of Diola-Fogny. Cambridge: Cambridge University Press.
Schachter, Paul & Victoria P. Fromkin. 1968. A phonology of Akan: Akuapem, Asante, Fante. UCLA Working Papers in Phonetics 9.
Seo, Misun. 2003. A segment contact account of the patterning of sonorants in consonant clusters. Ph.D. dissertation, Ohio State University.
Son, Minjung. 2008. The nature of Korean place assimilation: Gestural overlap and gestural reduction. Ph.D. dissertation, Yale University.
Son, Minjung, Alexei Kochetov & Marianne Pouplier. 2007. The role of gestural overlap in perceptual place assimilation in Korean. In Jennifer Cole & José Ignacio Hualde (eds.) Laboratory phonology 9, 507–534. Berlin & New York: Mouton de Gruyter.
Steriade, Donca. 1993. Neutralization and the expression of contrast. Paper presented at the 24th Annual Meeting of the North East Linguistic Society, University of Massachusetts, Amherst.
Steriade, Donca. 1995. Positional neutralization. Unpublished ms., University of California, Los Angeles.
Steriade, Donca. 1999. Phonetics in phonology: The case of laryngeal neutralization. UCLA Working Papers in Linguistics 2: Papers in Phonology 3. 25–146.
Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual account. In Elizabeth Hume & Keith Johnson (eds.) The role of speech perception in phonology, 219–250. San Diego: Academic Press.
Steriade, Donca. 2009. The phonology of perceptibility effects: The P-map and its consequences for constraint organization. In Kristin Hanson & Sharon Inkelas (eds.) The nature of the word: Studies in honor of Paul Kiparsky, 151–179. Cambridge, MA: MIT Press.
Stevens, Kenneth N. & Sheila Blumstein. 1975. Quantal aspects of consonant production and perception: A study of retroflex consonants. Journal of Phonetics 3. 215–233.
Teoh, Boon Seong. 1988. Aspects of Malay phonology revisited: A non-linear approach. Ph.D. dissertation, University of Illinois.
Vance, Timothy J. 1987. An introduction to Japanese phonology. Albany: State University of New York Press.
Webb, Charlotte. 1982. A constraint on progressive consonantal assimilation. Linguistics 20. 309–321.
Wheeler, Max W. 2005. Voicing contrast: Licensed by prosody or licensed by cue? Paper presented at the 13th Manchester Phonology Meeting (ROA-769).
Wilson, Colin. 2001. Consonant cluster neutralisation and targeted constraints. Phonology 18. 147–197.
Wright, Richard. 1996. Consonant clusters and cue preservation in Tsou. Ph.D. dissertation, University of California, Los Angeles.
Wright, Richard. 2004. A review of perceptual cues and cue robustness. In Hayes et al. (2004), 34–57.
Yip, Moira. 1991. Coronals, consonant clusters, and the coda condition. In Carole Paradis & Jean-François Prunet (eds.) The special status of coronals: Internal and external evidence, 61–78. San Diego: Academic Press.
Zhang, Jie. 2004. The role of contrast-specific and language-specific phonetics in contour tone distribution. In Hayes et al. (2004), 157–190.
Zoll, Cheryl. 1998. Positional asymmetries and licensing. Unpublished ms., MIT (ROA-282).

47 Initial Geminates

Astrid Kraehenmann

1 Introduction

There are a number of different ways in which languages make phonemic differences between consonant segments. One, for example, is on the basis of phonological features. The groups of sounds in (1a) differ in terms of voicing (chapter 69: final devoicing and final laryngeal neutralization), the ones in (1b) in terms of continuancy (chapter 13: the stricture features), and the ones in (1c) in terms of place of articulation (labial, coronal, dorsal) (chapter 22: consonantal place of articulation), etc.

(1) a. /p t k f s x/ vs. /b d g v z ɣ/
    b. /p t k b d g/ vs. /f s x v z ɣ/
    c. /p b f v/ vs. /t d s z/ vs. /k g x ɣ/

Another way in which consonants may contrast is on the basis of inherent prosodic structure, such as sound length and/or weight.1 This distinction is also called a difference in quantity (see also chapter 37: geminates; chapter 57: quantity-sensitivity). Thus the groups of sounds in (2) differ in terms of quantity, which is generally reflected (a) phonetically as long vs. short articulatory duration, (b) phonologically as heavy vs. light, and (c) structurally in the way they are associated to syllabic and higher-level prosodic structure.

(2) /pp tt kk ff ss xx/ vs. /p t k f s x/

Cross-linguistically, there are some constraints and implicational relationships on the existence of such quantity contrasts. As will be shown in the next section, initial geminates – i.e. geminates that contrast with singletons at the beginning of (lexical) words – are a special case, because they are even rarer than medial geminates. In the subsequent sections we will discuss in turn some issues regarding the phonological representation, prosodic behavior, phonetic properties, perception, and word-edge effects of initial geminates. These issues will provide some interesting insight into why initial geminates are so special.

1 In this chapter, length/quantity is transcribed by a sequence of identical phonetic symbols.


2 Typological issues

Not many languages are known to have a lexical quantity contrast at word edges as well as word-medially. As an illustration we give examples from Thurgovian Swiss German (Indo-European)2 in (3), which are representative of the phonological quantity distinction common to the majority of Swiss German dialects.

(3) Swiss German quantity contrast

a. initial
   /ppaar/   'pair'     vs.   /paar/   'bar'
   /ttaŋkx/  'tank'     vs.   /taŋkx/  'thank'
   /kkaar/   'coach'    vs.   /kaar/   'cooked'

b. medial
   /vappə/   'crest'    vs.   /vapə/   'honeycomb'
   /mattə/   'mat'      vs.   /matə/   'maggot'
   /makkə/   'fault'    vs.   /makə/   'stomach'

c. final
   /alpp/    'alp'      vs.   /xalp/   'calf'
   /vertt/   'value'    vs.   /hert/   'hearth'
   /markk/   'marrow'   vs.   /ark/    'bad'

In order to eventually get a better feel for the typological status of initial geminates such as the ones in (3a), it is worth taking a look at the established typological facts about geminates in general. Surprisingly little comprehensive work has been done in this area to date. If we consult the largest extant phonological database – the UCLA Phonological Segment Inventory Database (UPSID), which is an extended and revised online version of Maddieson (1984), listing the phonemes of 451 languages – only 12 languages are coded as having at least one geminate consonant segment. But not all of them have a quantity contrast to speak of. There are only seven (less than 2 percent of the whole sample) if we discount those languages which – based on the inventories given – have either only a single geminate–singleton pair (!Xu (Khoisan), Iraqw (Afro-Asiatic), and Telugu (Dravidian)) or only geminates that come without a singleton counterpart (Inuit (Eskimo-Aleut) and Trumai (South American isolate)). Those seven are: the North Caucasian Archi, Avar, and Lak; Ocaina (Witotoan), Waray (Austronesian), Wichita (Caddoan), and Wolof (Niger-Congo). While the general picture that emerges seems to be that quantity contrasts are rather uncommon, it is doubtful that they are quite as uncommon as that. For example, the database also contains languages such as the Indo-European Bengali, Breton, and Norwegian, as well as Finnish (Uralic), for which there is general consensus among linguists that all have quantity contrasts. However, they are not coded as such in UPSID. Thus the typological interpretation of the UPSID data – not only with respect to quantity – must be taken with the necessary pinch of salt (see also Simpson 1999, among others).

One particular piece of information missing in UPSID but of primary interest for our purposes is where within a word geminates may occur. This information is given, albeit rather implicitly, in the survey of 63 languages with geminates by Thurgood (1993). In his sample, Thurgood identifies a number of frequency facts and prerequisite conditions for geminates and proposes some implicational tendencies. For example, he finds preferences for certain places of articulation (alveolar > labial > velar > glottal)3 and certain prosodic environments (post-tonic > pre-tonic/unstressed; after short vowel > after long vowel). But most importantly, he establishes that geminates are most favored in intervocalic position or, more specifically, if preceded by a short (and stressed) vowel and followed by another short vowel (Thurgood 1993: 129). Although not explicitly stated, these flanking vowels preferably also belong to the same word. This means that if a language has any quantity contrast at all, it will be word-medial. Thus, the existence of initial and/or final geminates implies the existence of medial geminates (see also Muller 1999). While Thurgood (1993) lists the three Austronesian counterexamples Sa'ban, Kelantan Malay, and Pattani Malay, no further explanation is given as to how it could be that they exclusively have initial geminates – an interesting issue that we will come back to shortly. Thurgood does not mention word-final geminates separately, let alone whether or not the languages under investigation allow word-final (i.e. syllable-final) consonants in the first place. But, based on the preference factors he identifies, we can conclude that a quantity contrast is cross-linguistically most common medially, somewhat less common finally, and least common initially.

Is there also a specific consonant class that stands out as the most preferred to display a quantity contrast? Based purely on the sonority of consonants (chapter 49: sonority), Morén (1999: 110) answers this question negatively. He finds languages with quantity distinctions in stops, continuants, and sonorants alike (Hungarian (Uralic), Brahui (Northern Dravidian), Italian, Baloch, Gujarati (Indo-European)), only in continuants (Tartar (Altaic)), only in sonorants (Hausa (Afro-Asiatic)), and only in obstruents (Chechen, Lak (North Caucasian)), but also languages that allow geminate stops and sonorants to the exclusion of continuants (Kurdish (Indo-European)). He does not list any languages either allowing only stop geminates or excluding only stop geminates, both of which he considers to be accidental gaps. In comparison, Thurgood's (1993) generalizations are a bit more fine-grained for place of articulation and voicing within the major classes of obstruents and sonorants. He observes that geminate stops seem to be a prerequisite for geminate affricates to occur, and presents data suggesting that a voiceless alveolar stop or fricative /tt ss/ or an alveolar nasal or liquid /nn ll/ are good candidates for the prototypical geminate.

To conclude our look at geminates in general, it is very surprising to notice that 46 of the 63 languages (= 73 percent) in Thurgood's (1993) sample are also listed in UPSID, yet obviously with different phoneme analyses from the ones consulted by Thurgood. As regards language diversity, six language families figure very prominently, accounting for more than 50 percent of the whole set: namely, in descending frequency, Afro-Asiatic (10 languages), Indo-European (7), Austronesian (6), Altaic (3), Dravidian (3), and Uralic (3).

2 Language family classification in all cases follows Lewis (2009).
3 This means that Thurgood (1993) finds, on the one hand, that alveolar geminates are the most frequent and glottal geminates the least frequent and, on the other hand, that if a language has, for example, velar geminates, it also tends to have labial and alveolar ones. He presents his data as an overview of the phonological systems and does not use them to make any claims about preferences for the emergence of geminates.


As can be seen in (4), the first three are also very prominently represented among the languages that exhibit initial geminates. This set of languages is given in the appendix to Muller (2001), which lists the consonant inventories and some distributional facts of 29 different languages and represents the most comprehensive collection of languages with initial geminates to date.4

(4) Language families and languages with initial geminates (Muller 2001: 204–233)

   Austronesian (13):     Trukese, Dobel, Kiribati, Leti, Ngada, Pattani Malay, Ponapean, Puluwat, Roma, Sa'ban, Taba, Woleaian, Yapese
   Indo-European (4):     Breton, Cypriot Greek, Swiss German (Bernese and Thurgovian)
   Afro-Asiatic (3):      Cypriot Maronite Arabic, Moroccan Arabic, Tamazight Berber
   North Caucasian (2):   Circassian, Lak
   Arawakan:              Piro
   Austro-Asiatic:        Nyaheun
   Japanese:              Hatoma
   Niger-Congo:           Luganda
   Nilo-Saharan:          Lugbara
   Oto-Manguean:          Atepec Zapotec
   West Papuan:           Hatam

Within this set of languages, five seem to have only initial geminates (see Figure 47.1): Leti, Ngada, Pattani Malay, Yapese, and Nyaheun. While Leti is listed as having medial geminates, all of them are derived rather than lexical (see also Hume et al. 1997). For Ngada, no information on medial geminates is given. Pattani Malay and Yapese are known to lack medial geminates (cf. Abramson 1987, 1991, 1999; Jensen 1977). However, it is important to note that in Yapese only two of the 27 possible consonant phonemes show a length distinction, i.e. /gg ll/, which – as we will see – are not prototypical geminates. For Nyaheun, Muller (2001: 234) gives a structural reason for the lack of medial geminates: words in this language are mostly monosyllabic. One could even include Sa'ban as the sixth language of the "initial-only" set, because it has but very few medial geminates, while initial ones are abundant.

Figure 47.1 Geminates by word position in the 29 languages listed in Muller (2001: 204–233): only initial (5), initial & medial (20), initial, medial & final (4)

4 There is a substantial number of languages with initial geminates – for example, Maltese and various southern Italian dialects – that do not appear in Muller's (2001) list. For the purpose of this brief overview, however, we restrict ourselves to Muller's sample.

The fact that we do find languages with initial quantity contrasts to the exclusion of medial and final ones weakens Thurgood's (1993) generalization – and, indeed, goes against the most basic definition of what a geminate is (see §3) – namely that medial intervocalic geminates are the default case. One explanation may be morpheme and/or syllable structure constraints (chapter 33: syllable-internal structure; chapter 86: morpheme structure constraints), as mentioned for Nyaheun. Alternatively, for the other five languages – all of which are Austronesian – the existence of initial geminates may simply have arisen historically through morpheme concatenation or reduction processes (chapter 79: reduction).5 However, as far as the historical comparative evidence goes, Austronesian languages, especially the Western Micronesian branch, are reconstructed with initial geminates in place (cf. Jackson 1984; Bender et al. 2003), and the modern-day systems show a strong dislike for, and various avoidance strategies against, initial geminates (e.g. Kennedy 2005). Thus what we see in the Micronesian language Yapese may just be the last remains of what used to be a stable quantity distinction. Incidentally, the other Micronesian languages in Muller's (2001) sample (Trukese, Kiribati, Ponapean, Puluwat, Woleaian) still have a healthy medial quantity contrast, in addition to the initial distinction.

Regarding geminates at the end of words, Moroccan Arabic, Tamazight Berber, and the two Swiss German dialects allow all the geminates in the system to occur in this position. No information is given for Breton, Circassian, and Cypriot Maronite Arabic, while this category is listed as "not applicable" for Hatam (Muller 2001: 215). The remaining 21 languages do not seem to have any final geminates, but it is not indicated whether this is a systemic gap, as it presumably is for Hatam. On the face of it, this paucity of final geminates appears to conflict with Thurgood's (1993) finding that final geminates should be more frequent than initial ones. However, it is important to keep in mind that the criterion for inclusion in Muller's (2001) language set was the existence of initial geminates. Therefore, languages with medial and/or final geminates but without initial ones do not figure in her sample. It might still be the case that Thurgood's (1993) generalization holds true overall, but that the frequency distribution within the languages with initial geminates is skewed.6

In sum, within Muller's 29 languages with initial geminates, final geminates imply medial geminates. There are no languages with initial and final geminates that lack medial geminates (see Figure 47.1 for attested combinations). Also, the set of geminate phonemes allowed in initial position is always a proper subset of the whole, either equal to or smaller than the one allowed in medial (and final) position.

5 See Yupho (1989) for this finding in Pattani Malay.

6 Dmitrieva (2009), quoting Thurgood (1993), claims that word-initial geminates are cross-linguistically preferred over word-final ones. However, on the interpretation here, the facts presented by Thurgood seem to point in the opposite direction. It is therefore not quite clear what data Dmitrieva bases her assumption on. She essentially presents a functional explanation, namely a mismatch between production and perception of the quantity distinction in word-initial vs. word-final position. The data of her production and perception experiments involved (Russian) non-words rather than real words.

[Figure 47.2 is a bar chart; y-axis: number of languages (0–25); x-axis, left to right: nn, tt, mm, kk, ss, pp, bb, ll, dd, ff, rr]

Figure 47.2 The 10 most frequent geminate phonemes in the 29 languages listed in Muller (2001: 204–233)

Looking at the frequency of initial geminate phonemes, there are some interesting observations to be made with respect to major consonant class, manner, place of articulation, and voicing.7 But before looking at these factors, let us first consider the 10 most frequent geminates in these languages, as illustrated in Figure 47.2. The nasals /nn mm/ and the voiceless stops /tt kk pp/ are among the most commonly occurring geminates. The voiceless fricatives /ss ff/, the voiced stops /bb dd/, and the liquids /ll rr/ are next on the top-10 frequency list. Thus, within the sonorants /nn/ is the most universally present, and within the obstruents it is /tt/ for the stops and /ss/ for the fricatives. Significantly, all these sounds follow the universal preference patterns (cf. Ladefoged and Maddieson 1996; Ladefoged 2001): they are coronal, as well as voiced if sonorant and voiceless if obstruent – the less marked voicing characteristic within each respective class.

Considering the entire set of initial geminates, stops and fricatives together make up almost two-thirds, with nasals taking the lion's share of the last third (see Figure 47.3). Geminate fricatives imply geminate stops, except in Hatoma, which has /ff/ and /ss/ but does not allow any geminate stops or affricates. Overall, geminate affricates are least frequent and only occur in conjunction with both stops and fricatives.

[Figure 47.3 is a bar chart of frequencies by manner: stops 113, fricatives 74, nasals 61, liquids 25, affricates 9, glides 5]

Figure 47.3 Initial geminates by manner (data from Muller 2001: 204–233)

7 In this chapter, I refer only to type frequency of phonemes and members of natural classes. I have no information about token frequency of the sounds in the respective languages.

As for the sonorants, nasals are a prerequisite for any other type of sonorant geminate (chapter 8: sonorants). The only exception is Yapese, which has only a coronal lateral /ll/ and no nasal or other sonorant geminate, although it has labial, coronal, and dorsal nasal singletons. In the complete set of languages, there are 22 that have both obstruent and sonorant geminates. However, Circassian, Lak, and Bernese and Thurgovian Swiss German allow only obstruents, while Kiribati, Piro, and Ponapean allow only sonorants. In the latter group, all three languages have geminate nasals, most often a coronal /nn/.

Figure 47.4 illustrates the frequencies of the different places of articulation8 in the sample. As expected, the predominance of coronals over labials and dorsals mirrors the generally attested cross-linguistic preferences. Finally, regarding voicing where it is contrastive, namely within the obstruents, voiceless geminates outnumber voiced ones by a little over 100 percent (see Figure 47.5). There are seven languages without a voicing distinction, for which only "voiceless" geminates are listed: Atapec Zapotec, Trukese, Hatoma, Lak, Leti, Puluwat, and Thurgovian Swiss German. In three languages, there seem to be "voiced" geminates without any "voiceless" counterpart: Lugbara has /bb dd/ but no */pp tt/, Ngada has /bb/ but no */pp/, and Yapese has /gg/ but no */kk/. However, in all three cases, these phonemes are the only obstruent geminates in the respective systems. Dobel is a slightly different case, in that it lacks */pp/, with /bb/ being part of a bigger set that includes voiceless /tt kk ʔʔ ss/.

[Figure 47.4 is a bar chart of frequencies by place of articulation: coronal 148, labial 78, dorsal 53, glottal 4, pharyngeal 3]

Figure 47.4 Initial geminates by place of articulation (data from Muller 2001: 204–233)

[Figure 47.5 is a bar chart of voiceless vs. voiced frequencies by manner: stops 73 vs. 40, fricatives 54 vs. 20, affricates 4 vs. 1]

Figure 47.5 Initial obstruent geminates by manner and voicing (data from Muller 2001: 204–233)

8 In Muller's (2001) terminology, labials comprise bilabials and labio-dentals, coronals refer to (inter)dentals, alveolars, and palatals, and dorsals are velars.
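The "little over 100 percent" figure can be checked against the totals read off Figure 47.5 (using the bar values as reconstructed in the placeholder above; the arithmetic is mine):

\[
\frac{73 + 54 + 4}{40 + 20 + 1} \;=\; \frac{131}{61} \;\approx\; 2.15,
\qquad
\frac{131 - 61}{61} \;\approx\; 115\%.
\]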


Within the subset of languages for which voicing is distinctive, 11 display more than one pair of voiceless and voiced geminate obstruents: Breton, Circassian, Cypriot Maronite Arabic, Luganda, Nyaheun, Sa'ban, Hatam, Pattani Malay, Moroccan Arabic, Taba, and Tamazight Berber. Interestingly, there are two types of case in which there are more voiced geminates than voiceless ones. The first type involves an apparent dislike of voiceless bilabial stops. In addition to Lugbara, Ngada, and Dobel, mentioned above, Moroccan Arabic, Taba, and Tamazight Berber allow /bb/ but not */pp/. The other type is less striking and shows a dislike of voiceless palatal stops. For example, both Hatam and Pattani Malay have /ɟɟ/ but not */cc/. However, we find the reverse preference as well: although /ɟɟ/ is part of the phoneme system of Nyaheun, it is the only obstruent that is not allowed word-initially.

Voicing seems to be a more systematic restricting factor in Cypriot Greek and Bernese Swiss German. The former distinguishes voiced and voiceless fricatives word-medially, but only allows the voiceless ones initially. The latter apparently has the reverse preference: it distinguishes voiced and voiceless stops in all word positions except initial, where only voiced stops are allowed. It must be noted, though, that the phonemic analysis of the stops in the source reference (Ham 1998) is disputed (e.g. Marti 1985), and it is not firmly established that the stops differ phonologically in terms of both voicing and quantity. Finally, voicing is only marginally distinctive in Roma and Woleaian. Both have but one voicing pair – /tt dd/ and /pp bb/, respectively – alongside other voiceless obstruents: /pp kk/ and /pp kk tt ff ss/, respectively.

To summarize the main points: languages do not often make a phonological distinction between geminates and singletons, particularly not in word-initial position. Initial quantity contrasts occur concentrated in a handful of language families, i.e. in Austronesian, Indo-European, Afro-Asiatic, and North Caucasian. Initial geminates, by and large, only occur if there are also medial geminates in the language system. Asymmetries and preferences for certain place of articulation, manner, major class, and voicing characteristics follow the preferences of sounds that are not in a length opposition: e.g. coronals and voiceless obstruents are the most frequent.

The rarity of quantity contrasts, now established, raises the question of why this should be so. On purely physiological grounds, there are no substantial hindrances to articulating speech sounds in long and short versions, although with some sounds (e.g. continuants such as sonorants and fricatives) it might be easier than with others (e.g. non-continuants such as stops). In the following sections it will become clear that the relative nature of the distinction is the main reason. It poses challenges on many levels: in the way it must be stored in mental representations, in the way it is realized by speakers in different linguistic (and social) contexts, and in the way it is perceived by listeners in those contexts.

3 Underlying representation, syllabification, and metrical issues

A classical definition of geminates requires these sounds to be single sounds that are long and ambisyllabic when intervocalic (e.g. Swadesh 1937: 10; Trubetzkoy 1939: 156; see also chapter 37: geminates). This means that a geminate both closes a preceding syllable and is the onset of a following syllable; see the structures in (5). As a rule, "intervocalic" is interpreted to mean "within one and the same word."

(5) Syllabification of geminates and singletons9

a. ambisyllabic link: geminate – in [appa], the geminate /pp/ is linked to both syllables (σ), as the coda of the first and the onset of the second

b. tautosyllabic link: singleton – in [apa], the singleton /p/ is linked to one syllable only, as the onset of the second

While there is considerable agreement as to the ambisyllabic nature of geminates (but see Topintzi 2008 for a different view), theories vary in what unit they posit to link up with these syllable constituents. These assumptions have far-reaching consequences for how geminates are phonologically represented, especially so if the goal is to establish a single representation that captures not only language-specific but also cross-linguistic patterns of geminates (e.g. Selkirk 1990; Tranel 1991; Broselow 1995; Broselow et al. 1997; Ham 1998; Davis 1999a, 1999b, 2003; Muller 2001; Curtis 2003; Kraehenmann 2003; Topintzi 2008; Ringen and Vago 2010; among others). For convenience, we briefly sketch the main tenets of the two competing theories at this point.10

In moraic theory, a geminate is associated with a unit of weight, the mora – see (6a) – and should therefore participate in weight-related phenomena such as stress patterns, compensatory lengthening, and minimal word conditions. In skeletal theory (chapter 54: the skeleton), though, a geminate is doubly linked to the timing tier – see (6b) – and is prosodically long without pre-association to any weight tier. As a consequence, weightless geminates pose a challenge for the former theory, moraic geminates for the latter.

(6) Underlying representation and syllabification

a. moraic theory (e.g. Hyman 1985; Hayes 1989): underlyingly, the geminate /pp/ carries a mora (μ), while the singleton /p/ does not; in syllabified [appa] the geminate's mora belongs to the first syllable (σ) while its melodic content also links to the onset of the second, so that the structure comprises a syllable tier, a weight tier, and a melodic tier

b. skeletal theory:11 underlyingly, the geminate /pp/ is a single melodic unit linked to two slots on the skeletal/timing tier (x x), while the singleton /p/ is linked to a single slot; in syllabified [appa] the geminate's two x-slots are parsed into different syllables, as the coda of the first and the onset of the second

9 The segmental labels linking up to syllable nodes are shorthand for the root nodes representing these segments.

10 See Curtis (2003) for a concise survey of the different models of geminate representation. Chapter 37: geminates also discusses in detail the basic controversy between the weight account and the timing account, as well as the Composite Model (e.g. Hume et al. 1997; Muller 2001; Curtis 2003; Kraehenmann 2003; Baker 2008). A full exploration of this theory goes beyond the scope of this chapter. Suffice it to say, however, that it is a combination of the other two models: geminates are associated with two timing positions, and weight, if it plays a role at all, is a consequence of syllabification, mainly one of Weight-by-Position (Hayes 1989). One worry with this view is that it runs the risk of overgeneration, since it allows for all types of combinations between length and weight, possibly even unattested ones.

11 In the interest of simplicity, the skeletal theory is illustrated with x-slot notation (following Levin 1985). There is no claim being made about it being superior to other existing timing models, such as Selkirk's (1990) two-root theory (see also Curtis 2003: 310) or CV representation (e.g. McCarthy 1979; Clements and Keyser 1983). The point here is simply to illustrate the different prosodic structures in a weight account, as opposed to a length/timing account.

The major twist for initial geminates is that they are primarily seen as elements belonging to the syllable onset, the prototypical non-weight position. In the majority of languages, onsets play no role in weight-sensitive processes (Hayes 1989; Curtis 2003; Topintzi 2008; Davis, forthcoming; among many others; see also chapter 55: onsets). This holds true for singletons as well as geminates. For example, in Leti (Hume et al. 1997; Muller 2001: 117–142), CV syllables consisting of initial geminates and short vowels behave like light syllables: (a) they do not attract main stress, in contrast to syllables with long vowels, and (b) they also do not satisfy the bimoraic minimality condition on words (e.g. *[ppa] is not a possible Leti word). On the assumption that geminates are always moraic, these facts are difficult to explain. The authors therefore propose the non-moraic skeletal representation in (7a).12 In moraic theory, the bisegmental structure in (7b) has been suggested for the Leti-type geminates (e.g. Hayes 1989; Davis 1999a).

(7) Representations of non-moraic initial geminates

a. single root doubly linked: a single root node linked to two x-slots on the timing tier, with no mora

b. bisegmental: two separate, identical root nodes, i.e. a consonant cluster

c. two-root nodes: two root nodes sharing a single set of [features] (Selkirk 1990)

For Topintzi (2008: 176), weightless geminates "cannot be real geminates, since geminates are by definition underlyingly moraic." She claims that they are fake geminates, i.e. doubled consonants with two separate root nodes. Assuming it is important to differentiate between underlying and surface form, it is somewhat surprising that she does not opt for the bisegmental representation in (7b), but rather for Selkirk's (1990) two-root node model (7c), which posits real geminates with a branching structure underlyingly (chapter 1: underlying representations).

12 See also Kraehenmann (2001, 2003) for initial geminates in Swiss German not participating in a bimoraic minimality constraint. The proposed structures link a single root node to two timing positions, the first of which is associated to the prosodic word level rather than the syllable level.

The bisegmental analysis is in line with the basic tenet of moraic theory requiring onset segments not to contribute to syllable weight. This is achieved by denying the onset consonants geminate status. Yet, while this analysis can properly account for the fact that Leti geminates pattern like (other) consonant clusters, it cannot explain why the geminates show integrity behavior by patterning like single consonants in processes such as reduplication, in which the onset must not be split up (Hume et al. 1997; Muller 2001).

Turning now to the rarer instances in which onsets do seem to add weight to syllables, Trukese (also called Chuukese by some authors) is one of the favorite and most convincing examples cited (e.g. Davis 1999a, forthcoming; Muller 1999, 2001; Curtis 2003; Topintzi 2008; Ringen and Vago 2010; see chapter 37: geminates for more data). Like Leti, Trukese has a bimoraic minimum condition on nouns, but in contrast to Leti, Trukese monosyllables consisting of an onset geminate and a short vowel satisfy this condition: [tto] 'clam (sp.)' is fine, whereas the word for 'islet' is not well formed with a short vowel (it retains its underlying long vowel) (Davis 1999a: 95). Different representations have been proposed, all of which have in common that the geminates come pre-associated with a mora. Within the skeletal framework, Muller (2001: 174, 179) assumes the semisyllabic13 representation illustrated in (8a) both for Trukese and for Luganda initial geminates, the latter of which participate in tone processes restricted to moraic units. Kiparsky (2003: 165) proposes a similar structure in the moraic framework for initial geminates in VC- and C-dialects of Arabic; see (9a).

(8) Representations of moraic initial geminates

a. semisyllabic: the geminate carries a mora (μ) which is adjoined directly to the prosodic word (ω) rather than to a syllable

b. syllabic: the geminate's mora projects a syllable (σ) of its own, preceding the syllable containing the vowel (Hayes 1989: 302)

c. stray mora: the geminate's mora remains unassociated to any higher prosodic structure (Hayes 1989: 303)

d. tautosyllabic: the geminate's mora is directly associated with the syllable (σ) of the following vowel (Topintzi 2008)
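The contrast between the Leti and Trukese patterns reduces to a simple mora count. The tally below is my own illustration, assuming a bimoraic word minimum and the representations just discussed – non-moraic onset geminates for Leti, as in (7a), and moraic ones for Trukese, as in (8):

\[
\text{Leti: } *[\text{ppa}] = 0\mu_{(pp)} + 1\mu_{(a)} = 1\mu < 2\mu \quad \text{(violates minimality)}
\]
\[
\text{Trukese: } [\text{tto}] = 1\mu_{(tt)} + 1\mu_{(o)} = 2\mu \geq 2\mu \quad \text{(satisfies minimality)}
\]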

An important difference, though, is that the Arabic geminates are morphologically derived and, moreover, they are invisible to stress, i.e. weightless (chapter 124: word stress in arabic). In contrast to Topintzi's (2008) treatment of weightless initial geminates, Kiparsky grants them geminate status as a matter of course, with all the consequences: they are associated with a mora in spite of the fact that they are in a non-weight position. The adjunction of the geminate mora directly to the prosodic word level is intended to account for the fact that it does not count for stress purposes. Although designed for the Arabic geminates, according to Kiparsky the structure in (9a) is representative of weightless initial geminates in general.

13 The label "semisyllabic" is adopted from Kiparsky (2003: 154), who bases his terminology on Sievers's (1901) concept of "Nebensilbe."

(9) Representations of Arabic initial geminates (Kiparsky 2003: 165)

a. light semisyllabic: the geminate's mora (μ) is adjoined directly to the prosodic word (ω), above the syllable level

b. heavy semisyllabic: the geminate's mora is adjoined at the foot level, so that it counts for foot weight

The question thus arises of how a mora adjoined to the word level can be weight-contributing in one instance (8a) and not in another (9a). The answer seems to lie in the nature of the phenomenon that the structures need to capture. In the Arabic-type case, the weight of the syllable (and/or foot) is crucial, since stress – evaluated at the syllable/foot level – is at issue; in the Trukese-type case, the weight of the whole word is at issue when minimality conditions are imposed.14 Similarly, Muller (2001: 174) proposes adjunction to the word level, and that only in the absence of evidence for any independent syllable-based prosodic processes. But it must be pointed out that mora adjunction as in (8a) and (9) introduces different levels of weight projection, namely one in which moras contribute to syllable weight, which is the commonly held view, and others in which moras contribute to foot or word weight. And these levels need not coincide. More importantly, the principle that weight is predictable from syllable structure (e.g. Gordon 2005; see also chapter 39: stress: phonotactic and phonetic evidence) is no longer valid: the structures in (8a) and (9) by themselves might not unequivocally tell us whether or not the mora is indeed weight-contributing.

To return to the other proposed representations for moraic initial geminates, the debate is primarily about where and/or how to associate the underlying mora to make it count. For example, Kiparsky (2003: 165) puts forward the structure in (9b), where the mora is adjoined to the foot level and is intended to participate in weight-related processes. Hayes (1989: 302) suggests the projection of a separate syllable, as in (8b). This representation has not found much support in the literature on geminates, presumably because there is no evidence that initial geminates can or indeed must form their own syllable. For Trukese initial geminates and Italian initial palatals, Davis (1999a) opts for the representation in (8c) with a stray mora, first proposed in Hayes (1989: 303). However, as pointed out by Hume et al. (1997: 393), Kraehenmann (2003: 26), and Topintzi (2008: 180), the mora cannot count for weight purposes by any measure of analysis when not associated to higher-level prosodic structure.

Finally, Topintzi's (2008) proposal for Trukese and Pattani Malay – and moraic onsets in general – is the one illustrated in (8d), in which the geminate mora is directly associated with the syllable node to which the following vowel belongs. This tautosyllabic representation is striking for at least two reasons. First, it treats the geminate like a regular onset consonant by making it entirely the initial constituent of the first syllable: there is no ambisyllabic double linking, which is characteristic of geminates in all other proposals. Second, the geminate carries a mora equal in all respects to the mora of the syllable nucleus. The implications of such structures are, on the one hand, that moraic onsets are expected to occur not only word-initially but also word-medially and, on the other hand, that, to an even greater extent, moras play the double role of a unit of weight and a unit of length. In support of the first implication, Topintzi (2008: 170–172) presents some intriguing data from Marshallese stress patterns and Trique compensatory lengthening processes which suggest an analysis of medial geminates with tautosyllabification (see also chapter 55: onsets; Topintzi 2010). In the majority of cases, though, medial geminates must be analyzed ambisyllabically, as illustrated in (6), cases like Marshallese being the exception rather than the rule. Regarding the length issue, Topintzi replaces the basic tenet of traditional moraic theory that onset consonants are never moraic with another, based on findings by Hubbard (1994) and Ham (1998), namely "that moras are allocated a minimum target duration" (2008: 174) – the controversial proposal being that surface phonetic duration is in direct relation to underlying prosodic length.

To summarize this section, initial geminates pose a number of different challenges for the existing theoretical models of representation. Like medial geminates, initial geminates do not present uniform behavior across languages. In some cases it is clear that they fully participate in weight-sensitive processes (e.g. Trukese); in other cases weight is not involved at all (e.g. Leti). If, in the interest of a universal geminate representation, geminates are not underlyingly moraic, how do we explain the Trukese cases? And vice versa: if geminates are underlyingly moraic, how do we explain the Leti cases? Moreover, how can such Leti-type geminates occupy the syllable onset, the non-weight position? It is evident that strict adherence to the traditional concepts of the two theories is not very fruitful. A combination of the two approaches seems rather more promising, but the proper balance still needs to be established (see also note 10).

14 See also Curtis (2003: 298), who proposes this structure for Trukese, arguing that weight should count at the word level rather than the syllable level.

4 Phonetic properties and perception

In terms of phonetic properties, initial geminates are a model case for illustrating the interdependence between articulation, acoustics, and perception, because not only segmental features, such as place of articulation and manner, are involved; prosodic features representing duration also play a role.

Temporal characteristics have received most attention in phonetic studies of geminates to date. Although there is still some discussion as to whether geminates are to be considered long or tense consonants, it is generally accepted that the most robust cross-linguistic phonetic correlate of phonological quantity is the relative duration of articulatory constriction – in particular, for voiceless stop (and affricate) consonants, the relative duration of the closure (Abramson 1986a, 1987, 1991, 1999, 2003; Lahiri and Hankamer 1988; Kraehenmann 2001, 2003; Tserdanelis and Arvaniti 2001; Payne and Eftychiou 2006; Mikuteit and Reetz 2007 and references therein; Ridouane 2007 and references therein). Depending on the language and the specific phonological contexts within the language, geminates can be between 1.5 and 3 times the duration of singletons (cf. Clements 1986; Lahiri and Hankamer 1988; Ladefoged and Maddieson 1996).15 Closure duration – or consonant duration for non-stop consonants – is thus recognized as the primary phonetic correlate of the quantity distinction.

Due to the very nature of the quantity contrast, intervocalic medial geminates are the most studied, covering all existing types from sonorants to obstruents, voiced and voiceless. Investigations of initial geminates, in contrast, have overwhelmingly focused on voiceless stops in just a small handful of languages. One reason for this concentration on voiceless stops is certainly the fact that these sounds are the most frequent ones in initial quantity oppositions, as we saw in §2. Another – and more cogent – reason, however, is that they pose an interesting theoretical conundrum for linguists and a practical conundrum for speakers and listeners, particularly if these sounds are utterance- or phrase-initial: in contrast to nasals, continuant consonants (e.g. liquids, glides, and fricatives), and voiced sounds in general, the acoustic duration cue for voiceless stops is a period of silence, the beginning of which cannot be determined by listeners if no sound precedes it. One of the overarching questions is therefore whether there are additional cues that enable listeners to distinguish initial voiceless stop geminates from singletons in utterance-initial position, i.e. in the absence of the primary duration cue.

In a series of meticulous experimental studies on Pattani Malay, Abramson (1986a, 1986b, 1991, 1999, 2003) answers this question in the affirmative. He argues that native listeners rely on secondary, non-durational acoustic cues, namely a combination of the amplitude of the first syllable (the louder the syllable, the greater the perceived length of the consonant) and the fundamental frequency (F0) of the vowel following the initial consonant (the higher the F0, the greater the perceived length of the consonant).

In his acoustic study of geminates in Tashlhiyt Berber, Ridouane (2007) also reports non-temporal correlates, most of which, however, involve voiced rather than voiceless stops; they are classified into two different types. He calls the first type "concomitant correlates" (2007: 137), because they are largely optional. For example, for three out of the five speakers, F0 averaged over the first 10 msecs of the post-consonantal vowel of voiced stops is slightly but significantly higher for geminates than for singletons; no such correlation was found for voiceless stops. The other type he calls "secondary correlates" (Ridouane 2007: 137), arguing that they are contextually limited and/or speaker-specific but are manifestations of the tense articulation of geminates. One example is the finding that the normalized Root Mean Square (RMS) amplitude of the stop release is slightly higher for geminates than for singletons for all but one of the five speakers, who shows the reverse pattern. Another example is the fact that geminates tend to devoice for aerodynamic reasons (cf. Ohala 1983), depending in degree on factors such as speaker, place of articulation, and position within the word. Also, voiced stop singletons have a weak propensity to lenite, i.e. become fricatives, while geminates never do. As for durational correlates apart from the primary cue, Ridouane (2007: 129) finds that the duration of the stop release – also called "voice onset time" (VOT) or "after closure time" by other authors – shows no difference in voiceless stops but is longer for voiced geminates than for singletons.16 None of these additional correlates, however, has yet been tested for its significance in perception. The jury is still out on whether Tashlhiyt Berber native listeners can and do rely on these cues when the primary duration cue is not available.

While VOT plays no role for the voiceless stops of Tashlhiyt Berber, it is a temporal measure correlating with the quantity distinction of these sounds in Cypriot Greek. Parallel to Tserdanelis and Arvaniti's (2001) findings on medial geminates, Muller (2001, 2003) establishes that VOT is also significantly longer for geminates than for singletons in word-initial position. Like native listeners of Pattani Malay, native listeners of Cypriot Greek can distinguish voiceless stop geminates from singletons at the beginning of an utterance. Muller hypothesizes that the subjects of her very basic perception experiment thus utilize VOT as one of possibly many secondary cues to the phonological quantity distinction.

Finally, alongside Pattani Malay, Tashlhiyt Berber, and Cypriot Greek, Swiss German initial geminates have also received some attention in the literature, in particular those of the Thurgovian dialect (Kraehenmann 2001, 2003, 2009; Kraehenmann and Lahiri 2008). Kraehenmann (2001, 2003) confirms earlier findings on other dialects, for example by Enstrom and Spörri-Bütler (1981) and Fulop (1994), that VOT does not participate in the quantity opposition: there is no difference in the duration of the stop release for geminates and singletons, be they initial, medial, or final. Since voiceless stops are the only consonants occurring in initial length opposition, the question is again whether the phonological difference is enhanced by cues other than the primary one. While lacking any corroborating perception evidence, Fulop (1994) claims to have found potential secondary non-temporal cues, namely increased intensity, movement, and clarity of post-release sonorant formants above F2 for geminates in comparison to singletons. However, anecdotal evidence in Moulton (1979) and the results of a pilot perception study reported in Kraehenmann (2003) call into question whether listeners can make use of these acoustic differences: their listeners seemed unable to recover the quantity contrast in utterance-initial context. A crucial fact to verify, especially in this case, is whether the difference is actually produced by the speakers or whether contrast neutralization occurs. Using electropalatography (EPG), Kraehenmann and Lahiri (2008) set out to do just that in an articulatory study. They find that geminates are indeed articulated significantly longer (by about 85 msecs) than singletons, even in absolute utterance/phrase-initial position (see Figure 47.6).

15 The type of sound can also play a crucial role. For example, Khattab (2007: 156) reports a staggering geminate-to-singleton ratio of 7.5 : 1 for intervocalic (medial) rhotics after short vowels, alongside the comparably modest ratio of 1.6 : 1 for intervocalic voiceless velar stops after a long vowel, for Lebanese Arabic.

16 Exactly the same finding is reported in Mikuteit and Reetz (2007) for voiceless and voiced stops in East Bengali.

[Figure 47.6 is a bar chart of articulation durations (ms), geminate vs. singleton: ## __ V 249 vs. 164; V# __ V 134 vs. 70; V __ V 195 vs. 64]

Figure 47.6 Articulation duration in ms for initial geminates and singletons in phrase-initial and phrase-medial post-vocalic context, in comparison to medial geminates and singletons in intervocalic context (Kraehenmann and Lahiri 2008: 4450)
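The relation between absolute and proportional measures discussed in the next paragraph can be made explicit with the values in Figure 47.6 (a worked calculation of mine):

\[
\#\#\_V:\; 249 - 164 = 85\,\text{ms}, \quad 249/164 \approx 1.5
\]
\[
V\#\_V:\; 134 - 70 = 64\,\text{ms}, \quad 134/70 \approx 1.9 \approx 2
\]
\[
V\_V:\; 195 - 64 = 131\,\text{ms}, \quad 195/64 \approx 3
\]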

While, in terms of absolute measures, this duration difference is larger than the 65 msecs for the quantity distinction in a vocalic context (i.e. initial stops after a vowel-final word), in terms of proportional measure the difference is in fact smaller – namely 1.5 : 1 as opposed to 2 : 1 – because both geminates and singletons are longer on average by about 100 msecs when utterance-initial. This means that although the articulation is heightened (see the next section for a more detailed discussion of prosodically conditioned edge phenomena), the phonological contrast is made less salient in production.17 The contrast is produced most clearly in word-medial intervocalic context, where the duration of singletons is one third of that of geminates.

In a controlled follow-up perception experiment, conducted with audio data gathered in the articulatory study by Kraehenmann and Lahiri (2008), Kraehenmann (2009) reports that native listeners are not able to discriminate phrase-initial geminates from singletons, whereas they have no problem doing so in a phrasal intervocalic context. Unlike Pattani Malay and Cypriot Greek listeners, therefore, Swiss German listeners cannot rely on secondary or enhancement cues. Why not? Although this question leads beyond the scope of this chapter, two points need to be made. First, the speakers do not seem to produce any secondary cues. This is unusual, because, according to work by Stevens and colleagues on Enhancement Theory (e.g. Stevens et al. 1986; Stevens and Keyser 1989; Keyser and Stevens 2006), distinctive properties of sounds are generally supplemented with other phonetic cues in order to make the phonological contrast more salient for perception – the longer VOT of Cypriot Greek geminates being a good example. Second, it is not at all clear why speakers maintain a distinctive articulation for geminates and singletons when listeners cannot pick up on the primary cue. This clearly goes against functionalist principles of economy and ease of articulation (e.g. Hayes 1999): when a distinction cannot be perceived, why not neutralize it? These are issues that still need to be addressed satisfactorily in future work.

In sum, relative consonant/closure duration is the main physical instantiation of the quantity contrast in any word position, even utterance-initially: geminates are long and singletons are short, relative to each other. Languages differ in the way they do or do not enhance the primary correlate, but there is no universal secondary cue that reliably distinguishes geminates from singletons. Muller (2001: 201) rightly speculates that the segmental inventory of the language in question may affect the "availability" of such acoustic cues. But the complexity of syllable structures and the degree of the functional load of the contrast must also play a significant role.

17 In a similar articulatory study, Ridouane (2007: 128–129) reports the same results for Tashlhiyt Berber voiceless stops. See §5 for more details.

5 Edge effects

In the field of the phonology–phonetics interface, one area of investigation focuses on how the articulation of speech sounds is influenced by the position they occupy within prosodic structure. A number of effects have been established that involve the edges of prosodic domains such as the syllable (chapter 33: syllable-internal structure), the phonological word (chapter 51: the phonological word), the phonological phrase (chapter 50: tonal alignment), etc. What is most interesting and relevant with respect to the topic of this chapter is the fact that these domain edges – beginnings in particular – have a significant effect on sound duration. More specifically, the length of segments, or of particular properties of segments, is greater the higher up in the prosodic hierarchy their edges are (e.g. Fujimura 1990; Byrd et al. 2005). This has become known in the literature as initial articulatory strengthening or prosodic strengthening. It is important to note at this point, however, that most studies so far have been of sounds that are not in a quantity opposition.

For example, in their seminal EPG study, Fougeron and Keating (1997) tested the English dummy syllable no in trisyllabic words with initial, medial, and final stress, embedded in a carrier phrase. They found that the amount of linguopalatal contact, as well as the acoustic duration of the syllable-initial nasal, increased with each increasing prosodic level. Fougeron and Keating conclude that these "more extreme articulations" (1997: 3738) perform an important function for perception. On the one hand, they facilitate segmentation into higher prosodic domains, particularly into words. On the other hand, they enhance the acoustic cues for identifying sounds and thus assist lexical access. Although the correlation between the amount of articulator contact and acoustic duration is reported as being only weak, it is there nonetheless. Because EPG studies are still few and far between, we will also use this acoustic finding as the main basis for our remaining discussion.

In a phonetic study of real words, Cho and McQueen (2005) investigated word-initial Dutch alveolar stops /t d/ in different phrasal and word-stress contexts. As expected, the closure durations of both voiced and voiceless stops were significantly longer in stronger prosodic positions. In addition, however, the VOTs of voiceless stops were cumulatively shorter in the same prosodic positions, i.e. the opposite of the findings for English /t/. Cho and McQueen argue, in essence, that the difference is due to how the phonological encoding of the voicing contrast is phonetically enhanced in the two languages. That is, Dutch /t/, as the phonologically unmarked member of the pair (voiceless vs. voiced), has the phonetic (default) specification [−spread glottis], which is enhanced by shorter aspiration; in contrast, English /t/, as the phonologically marked member of the pair (aspirated vs. non-aspirated), has the phonetic specification [+spread glottis], which is enhanced by longer aspiration. One of their main claims is thus that prosodic strengthening acoustically amplifies the difference between sounds in phonological contrast.

[Figure 47.7 consists of two bar-chart panels for geminates vs. singletons in three contexts (## __ V, V# __ V, V __ V) and three languages (Tashlhiyt Berber, Swiss German, Cypriot Greek): (a) articulation/closure duration in ms; (b) VOT in ms]

Figure 47.7 Articulation/closure duration (see note below) and VOT in msec for initial geminates and singletons in phrase-initial (## __ V) and phrase-medial post-vocalic (V# __ V) context and medial geminates and singletons in intervocalic (V __ V) context in three languages: Tashlhiyt Berber (Ridouane 2007: 128–129), Swiss German (Kraehenmann and Lahiri 2008: 4450–4451), and Cypriot Greek (Muller 2001: 29, 32)

Note: The measures of the medial geminates and singletons in Tashlhiyt Berber and Cypriot Greek are acoustic (closure duration); all others are articulatory (linguo-palatal contact).

With respect to VOT effects, this claim seems to be supported by studies of Korean word-initial voiceless stops (Jun 1993; Cho and Keating 2001; Keating et al. 2003), which found that VOT measures were longest phrase-initially, somewhat shorter word-initially within a phrase, and shortest word-medially within a phrase.

The question we want to ask now is whether articulatory strengthening also applies to word-initial geminates and singletons and, if so, whether it shows different characteristics. There are no studies comparable to Fougeron and Keating (1997) and their later work that directly address this issue in any depth. However, some generalizations can be drawn from the studies on voiceless stops in Cypriot Greek (Muller 2001), Tashlhiyt Berber (Ridouane 2007), and Swiss German (Kraehenmann and Lahiri 2008), by comparing medial vs. initial geminates and – in the latter two cases – initial geminates in different phrasal contexts. The measures we are therefore most interested in are articulatory/acoustic closure duration and VOT.

To start with the latter, we would not expect VOT to become a differentiating characteristic for initial geminates and singletons if it does not play any role in the medial contrast. This is indeed what we find. As can be seen in Figure 47.7b, Tashlhiyt Berber VOTs are only slightly, and not significantly, shorter when syllable, word, phrase, and utterance boundaries coincide (## __ V) than word-medially (V __ V). For the Swiss German data, the non-effect is even more pronounced, with basically identical measures in all three contexts. If anything had changed, longer measures would have been expected in the vocalic context (V# __ V) for initial stops, and the longest ones at the phrase edge (## __ V), at least for geminates, if not for singletons. In contrast to these two languages, there are drastically longer measures in the Cypriot Greek data, where VOT is a secondary cue: geminate VOTs are longer by 71 msecs (265 percent), singleton VOTs by 72 msecs (348 percent). Although the 13 msecs difference in the phrase-initial context is still statistically significant, the differential has substantially decreased from 1.5 : 1 to 1.1 : 1, which goes against Cho and McQueen's (2005) claim of contrast amplification at increasing prosodic domain edges.

Unfortunately, Muller (2001) does not provide any data on contact/closure duration in utterance-initial or vocalic context, so we do not know how the primary cue is affected by prosodic strengthening. This, however, is different for Tashlhiyt Berber and Swiss German (see Figure 47.7a). In Tashlhiyt Berber the trend seems to go in the expected direction in two of the three contexts: the geminate-to-singleton proportion is 2.5 : 1 word-medially and increases to 3 : 1 for word-initial stops after a vowel-final word. But at the utterance boundary, the highest prosodic domain, it decreases again, to 2.8 : 1. In Swiss German we find exactly the opposite of what prosodic strengthening would predict, namely that the contrast magnitude decreases as the prosodic level increases. The difference between geminates and singletons is biggest in word-medial intervocalic context (syllable level, 3 : 1), somewhat smaller in word-initial intervocalic context (word level, 2 : 1), and smallest in utterance-initial context (phrase/utterance level, 1.5 : 1).

To conclude, there are some strengthening effects in languages with word-initial quantity contrasts, but only to the extent that the primary correlates lengthen substantially at the highest prosodic level, the utterance. However, this lengthening affects both geminates and singletons, resulting in contrast diminishment rather than augmentation. The segmental context, rather than the prosodic level, might therefore be the more reliable predictor of how the quantity contrast is realized, because intervocalic geminates – be they word-medial or phrase-medial word-initial – are actually not at a domain boundary but contain one, because they straddle two syllables; see (6). The only time initial geminates occur at a domain edge is utterance-initially, which, as we have seen, is a very special context in many respects.
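As a consistency check on the Cypriot Greek VOT figures cited above, the underlying means can be back-calculated from the stated percentage increases alone (my own arithmetic; x and y stand for the word-medial geminate and singleton VOTs):

\[
2.65x = x + 71 \;\Rightarrow\; x \approx 43\,\text{ms}; \qquad 3.48y = y + 72 \;\Rightarrow\; y \approx 29\,\text{ms}
\]
\[
\text{medial ratio: } 43/29 \approx 1.5{:}1; \qquad \text{phrase-initial ratio: } 114/101 \approx 1.1{:}1, \;\; 114 - 101 = 13\,\text{ms}
\]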

6 Conclusion

The field of word-initial quantity contrasts shows many faces, and a wealth of issues still awaits probing and scrutinizing investigation. A start has been made. Yet the existing disputes about phonological representation (length vs. weight), acoustic and articulatory properties, and perception are far from settled, and promise to provide discussion material and theoretical arguments for years and decades to come. The historical question has only been hinted at in this chapter, but it constitutes another fertile and worthwhile area of research to pursue.


REFERENCES

Abramson, Arthur S. 1986a. Distinctive length in initial consonants: Pattani Malay. Journal of the Acoustical Society of America 79. 527–527.
Abramson, Arthur S. 1986b. The perception of word-initial consonant length: Pattani Malay. Journal of the International Phonetic Association 16. 8–16.
Abramson, Arthur S. 1987. Word-initial consonant length in Pattani Malay. In Thomas V. Gamkrelidze (ed.) Proceedings of the 11th International Congress of Phonetic Sciences, vol. 6, 68–70. Tallinn: Academy of Sciences of the Estonian SSR.
Abramson, Arthur S. 1991. Amplitude as cue to word-initial consonant length: Pattani Malay. Proceedings of the 12th International Congress of Phonetic Sciences, vol. 3, 98–101. Aix-en-Provence: Université de Provence.
Abramson, Arthur S. 1999. Fundamental frequency as cue to word-initial consonant length: Pattani Malay. In Ohala et al. (1999), 591–594.
Abramson, Arthur S. 2003. Acoustic cues to word-initial stop length in Pattani Malay. In Solé et al. (2003), 387–390.
Baker, Brett. 2008. Word structure in Ngalakgan. Stanford: CSLI.
Bender, Byron W., Ward H. Goodenough, Frederick H. Jackson, Jeffrey C. Marck, Kenneth L. Rehg, Ho-min Sohn, Stephen Trussel & Judith W. Wang. 2003. Proto-Micronesian reconstructions I. Oceanic Linguistics 42. 1–110.
Broselow, Ellen. 1995. Skeletal positions and moras. In John A. Goldsmith (ed.) The handbook of phonological theory, 175–205. Cambridge, MA & Oxford: Blackwell.
Broselow, Ellen, Su-I Chen & Marie Huffman. 1997. Syllable weight: Convergence of phonology and phonetics. Phonology 14. 47–82.
Byrd, Dani, Sungbok Lee, Daylen Riggs & Jason Adams. 2005. Interacting effects of syllable and phrase position on consonant articulation. Journal of the Acoustical Society of America 118. 3860–3873.
Cho, Taehong & Patricia Keating. 2001. Articulatory and acoustic studies on domain-initial strengthening in Korean. Journal of Phonetics 29. 155–190.
Cho, Taehong & James M. McQueen. 2005. Prosodic influences on consonant production in Dutch: Effects of prosodic boundaries, phrasal accent and lexical stress. Journal of Phonetics 33. 121–157.
Clements, G. N. 1986. Compensatory lengthening and consonant gemination in LuGanda. In W. Leo Wetzels & Engin Sezer (eds.) Studies in compensatory lengthening, 37–77. Dordrecht: Foris.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Curtis, Emily. 2003. Geminate weight: Case studies and formal models. Ph.D. dissertation, University of Washington.
Davis, Stuart. 1999a. On the representation of initial geminates. Phonology 16. 93–104.
Davis, Stuart. 1999b. On the moraic representation of underlying geminates: Evidence from prosodic morphology. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 39–61. Cambridge: Cambridge University Press.
Davis, Stuart. 2003. The controversy over geminates and syllable weight. In Féry & van de Vijver (2003), 77–98.
Davis, Stuart. Forthcoming. Quantity. In John A. Goldsmith, Jason Riggle & Alan C. L. Yu (eds.) The handbook of phonological theory. 2nd edn. Malden, MA & Oxford: Wiley-Blackwell.
Dmitrieva, Olga. 2009. Geminate typology and perception of consonant length: Experimental evidence from Russian. Paper presented at the 83rd Annual Meeting of the Linguistic Society of America, San Francisco.
Enstrom, Daly & Sonja Spörri-Bütler. 1981. A voice onset time analysis of initial Swiss German stops. Folia Phoniatrica 33. 137–150.


Féry, Caroline & Ruben van de Vijver (eds.) 2003. The syllable in Optimality Theory. Cambridge: Cambridge University Press.
Fougeron, Cécile & Patricia Keating. 1997. Articulatory strengthening at edges of prosodic domains. Journal of the Acoustical Society of America 101. 3728–3740.
Fujimura, Osamu. 1990. Methods and goals of speech production research. Language and Speech 33. 195–258.
Fulop, Sean A. 1994. Acoustic correlates of the fortis/lenis contrast in Swiss German plosives. Calgary Working Papers in Linguistics 16. 55–63.
Gordon, Matthew. 2005. A perceptually-driven account of onset-sensitive stress. Natural Language and Linguistic Theory 23. 595–653.
Ham, William. 1998. Phonetic and phonological aspects of geminate timing. Ph.D. dissertation, Cornell University.
Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306.
Hayes, Bruce. 1999. Phonetically driven phonology: The role of Optimality Theory and inductive grounding. In Michael Darnell, Edith Moravcsik, Frederick Newmeyer, Michael Noonan & Kathleen Wheatley (eds.) Functionalism and formalism in linguistics, vol. 1: General papers, 243–285. Amsterdam & Philadelphia: John Benjamins.
Hubbard, Kathleen. 1994. Duration in moraic theory. Ph.D. dissertation, University of California, Berkeley.
Hume, Elizabeth, Jennifer Muller & Aone van Engelenhoven. 1997. Non-moraic geminates in Leti. Phonology 14. 371–402.
Hyman, Larry M. 1985. A theory of phonological weight. Dordrecht: Foris.
Jackson, Frederick H. 1984. The internal and external relationships of the Trukic languages of Micronesia. Ph.D. dissertation, University of Hawai'i.
Jensen, John T. 1977. Yapese reference grammar. Honolulu: University Press of Hawaii.
Jun, Sun-Ah. 1993. The phonetics and phonology of Korean prosody. Ph.D. dissertation, Ohio State University.
Keating, Patricia, Taehong Cho, Cécile Fougeron & Chai-Shune Hsu. 2003. Domain-initial articulatory strengthening in four languages. In John Local, Richard Ogden & Rosalind Temple (eds.) Phonetic interpretation: Papers in laboratory phonology VI, 145–163. Cambridge: Cambridge University Press.
Kennedy, Robert. 2005. Reflexes of initial gemination in Western Micronesian languages. Paper presented at the 12th Meeting of the Austronesian Formal Linguistics Association, University of California, Los Angeles.
Keyser, Samuel J. & Kenneth N. Stevens. 2006. Enhancement and overlap in the speech chain. Language 82. 33–63.
Khattab, Ghada. 2007. A phonetic study of gemination in Lebanese Arabic. In Jürgen Trouvain & William J. Barry (eds.) Proceedings of the 16th International Congress of Phonetic Sciences, 153–158. Saarbrücken: Saarland University.
Kiparsky, Paul. 2003. Syllables and moras in Arabic. In Féry & van de Vijver (2003), 147–182.
Kraehenmann, Astrid. 2001. Swiss German stops: Geminates all over the word. Phonology 18. 109–145.
Kraehenmann, Astrid. 2003. Quantity and prosodic asymmetries in Alemannic: Synchronic and diachronic perspectives. Berlin & New York: Mouton de Gruyter.
Kraehenmann, Astrid. 2009. The perception of the word-initial quantity contrast in voiceless Swiss German stops. Paper presented at the 17th Manchester Phonology Meeting.
Kraehenmann, Astrid & Aditi Lahiri. 2008. Duration differences in the articulation and acoustics of Swiss German word-initial geminate and singleton stops. Journal of the Acoustical Society of America 123. 4446–4455.
Ladefoged, Peter. 2001. Vowels and consonants: An introduction to the sounds of languages. Malden, MA & Oxford: Blackwell.


Ladefoged, Peter & Ian Maddieson. 1996. The sounds of the world's languages. Oxford & Malden, MA: Blackwell.
Lahiri, Aditi & Jorge Hankamer. 1988. The timing of geminate consonants. Journal of Phonetics 16. 327–338.
Levin, Juliette. 1985. A metrical theory of syllabicity. Ph.D. dissertation, MIT.
Lewis, M. Paul (ed.) 2009. Ethnologue: Languages of the world. 16th edn. Dallas: SIL International. Available (August 2010) at www.ethnologue.com.
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University Press.
Marti, Werner. 1985. Berndeutsch-Grammatik. Bern: Francke.
McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT.
Mikuteit, Simone & Henning Reetz. 2007. Caught in the ACT: The timing of aspiration and voicing in East Bengali. Language and Speech 50. 247–277.
Morén, Bruce. 1999. Distinctiveness, coercion and sonority: A unified theory of weight. Ph.D. dissertation, University of Maryland at College Park.
Moulton, William G. 1979. Notker's "Anlautgesetz." In Irmengard Rauch & Gerald F. Carr (eds.) Linguistic method: Essays in honor of Herbert Penzl, 241–251. The Hague: Mouton.
Muller, Jennifer. 1999. Geminate markedness: Evidence from Chuukese. Paper presented at the 6th Meeting of the Austronesian Formal Linguistics Association, University of Toronto.
Muller, Jennifer. 2001. The phonology and phonetics of word-initial geminates. Ph.D. dissertation, Ohio State University.
Muller, Jennifer. 2003. The production and perception of word initial geminates in Cypriot Greek. In Solé et al. (2003), 1867–1870.
Ohala, John J. 1983. The origin of sound patterns in vocal tract constraints. In Peter F. MacNeilage (ed.) The production of speech, 189–216. New York: Springer.
Ohala, John J., Yoko Hasegawa, Manjari Ohala, Daniel Granville & Ashlee C. Bailey (eds.) 1999. Proceedings of the 14th International Congress of Phonetic Sciences. Berkeley: University of California.
Payne, Elinor & Eftychia Eftychiou. 2006. Prosodic shaping of consonant gemination in Cypriot Greek. Phonetica 63. 175–198.
Ridouane, Rachid. 2007. Gemination in Tashlhiyt Berber: An acoustic and articulatory study. Journal of the International Phonetic Association 37. 119–142.
Ringen, Catherine & Robert M. Vago. 2010. Geminates: Heavy or long? In Charles Cairns & Eric Raimy (eds.) Handbook of the syllable. Leiden: Brill.
Selkirk, Elisabeth. 1990. A two root theory of length. University of Massachusetts Occasional Papers 14. 123–171.
Sievers, Eduard. 1901. Grundzüge der Phonetik zur Einführung in das Studium der Lautlehre der indogermanischen Sprachen. 5th edn. Leipzig: Breitkopf & Härtel.
Simpson, Adrian P. 1999. Fundamental problems in comparative phonetics and phonology: Does UPSID help to solve them? In Ohala et al. (1999), 349–352.
Solé, M. J., D. Recasens & J. Romero (eds.) 2003. Proceedings of the 15th International Congress of Phonetic Sciences. Barcelona: Causal Productions.
Stevens, Kenneth N. & Samuel J. Keyser. 1989. Primary features and their enhancement in consonants. Language 65. 81–106.
Stevens, Kenneth N., Samuel J. Keyser & Haruko Kawasaki. 1986. Toward a phonetic and phonological theory of redundant features. In Joseph S. Perkell & Dennis H. Klatt (eds.) Invariance and variability in speech processes, 426–449. Hillsdale, NJ: Lawrence Erlbaum.
Swadesh, Morris. 1937. The phonemic interpretation of long consonants. Language 13. 1–10.
Thurgood, Graham. 1993. Geminates: A cross-linguistic examination. In Joel Ashmore Nevis, Gerald McMenamin & Graham Thurgood (eds.) Papers in honor of Frederick H. Brengelman on the occasion of the twenty-fifth anniversary of the Department of Linguistics, CSU Fresno, 129–139. Fresno: Department of Linguistics, California State University, Fresno.


Topintzi, Nina. 2008. On the existence of moraic onsets. Natural Language and Linguistic Theory 26. 147–184.
Topintzi, Nina. 2010. Onsets: Suprasegmental and prosodic behaviour. Cambridge: Cambridge University Press.
Tranel, Bernard. 1991. CVC light syllables, geminates and moraic theory. Phonology 8. 291–302.
Trubetzkoy, Nikolai S. 1939. Grundzüge der Phonologie. Göttingen: Vandenhoeck & Ruprecht. Translated 1969 by Christiane A. M. Baltaxe as Principles of phonology. Berkeley & Los Angeles: University of California Press.
Tserdanelis, Georgios & Amalia Arvaniti. 2001. The acoustic characteristics of geminate consonants in Cypriot Greek. Proceedings of the 4th International Conference on Greek Linguistics, 29–36. Thessaloniki: University Studio Press.
Yupho, Nawanit. 1989. Consonant clusters and stress rules in Pattani Malay. Mon-Khmer Studies 15. 125–137.

48 Stress-timed vs. Syllable-timed Languages

Marina Nespor, Mohinish Shukla & Jacques Mehler

1 Introduction

Rhythm characterizes most natural phenomena: heartbeats have a rhythmic organization, and so do the waves of the sea, the alternation of day and night, and bird songs. Language is yet another natural phenomenon that is characterized by rhythm. What is rhythm? Is it possible to give a general enough definition of rhythm to include all the phenomena we just mentioned? The origin of the word rhythm is the Greek word ῥυθμός, derived from the verb ῥέω, which means 'to flow'. We could say that rhythm determines the flow of different phenomena. Plato (The Laws, book II: 93) gave a very general – and in our opinion the most beautiful – definition of rhythm: "rhythm is order in movement." In order to understand how rhythm is instantiated in different natural phenomena, including language, it is necessary to discover the elements responsible for it in each single case. Thus the question we address is: which elements establish order in linguistic rhythm, i.e. in the flow of speech?

2 The rhythmic hierarchy: Rhythm as alternation

Rhythm is hierarchical in nature in language, as it is in music. According to metrical grid theory, i.e. the representation of linguistic rhythm within Generative Grammar (cf., amongst others, Liberman and Prince 1977; Prince 1983; Nespor and Vogel 1989; chapter 41: the representation of word stress), the element that "establishes order" in the flow of speech is stress: universally, stressed and unstressed positions alternate at different levels of the hierarchy (see chapter 39: stress: phonotactic and phonetic evidence). Two examples of stress alternation are given in (1) and (2), on the basis of Italian and English, respectively. The first level of the grid assigns a star (*) to each syllable, and is meant to represent an abstract notion of time; on the second, third, and fourth levels, a star is assigned to every syllable bearing secondary word stress, primary word stress, and phonological phrase stress, respectively.

Marina Nespor, Mohinish Shukla & Jacques Mehler

2 (1)

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Domani mattina partiremo presto con il barcone nuovo di Federico ‘Tomorrow morning we will leave early with the new boat of Federico’

(2) [four-level metrical grid, as in (1)]
    Guinevere will arrive with Oliver tomorrow morning with a transatlantic

Indeed, these examples clearly show that in the two languages there is a similar alternation of stresses, ranging from secondary word stress to primary word stress to phonological phrase stress. The level that is problematic in the metrical grid is the basic level, i.e. the level corresponding to the syllable. This representation does not show any alternation, or any element that establishes order in movement: if we restrict our attention to this level, all syllables are represented with equal prominence. It is clear, however, that grids that are identical at all levels, as in the two following Italian and English sentences, may represent very different rhythms. In particular, the first level – which represents an abstract notion of time for syllables – does not represent important differences between languages, precisely because it is abstract: simple syllables and very complex ones receive identical representations.

(3) a. [metrical grid identical to that of (3b)]
       Domani Luca tornerà
       ‘Tomorrow Luca will return’
    b. [metrical grid identical to that of (3a)]
       Tomorrow Albert will return

There are thus empirical differences in rhythm between languages that are not represented in a metrical grid. Long before the metrical grid theory was proposed, phoneticians (e.g. Pike 1945) had proposed the existence of rhythmic classes to account for the rhythmic differences between languages like English or German, on the one hand, and languages like Spanish or Italian, on the other.

3 Linguistic rhythm as isochrony

The idea that languages have different rhythms was first advanced by Lloyd James (1940), who observed that the rhythm of Spanish recalls that of a machine gun, and that of English that of messages in Morse code. Indeed, this is the same difference as we hear in sentences like (1) and (2), pronounced by native speakers of Italian and English, respectively. Subsequently, Pike (1945), in an attempt to provide empirical support for this dichotomy, proposed that this difference between Spanish and English was due to the requirement of isochrony at different levels. That is, languages would differ according to which chunks of speech must have similar durations, i.e. must be isochronous. The requirement of isochrony would hold between syllables in Spanish, and between interstress intervals in English. This proposal accounted for the fact that the syllables of Spanish or Italian, but not those of English or Dutch, are similar in quantity. Spanish, and languages with a similar rhythm, were thus referred to as syllable-timed, and languages with a rhythm similar to that of English as stress-timed. In subsequent work along the same lines, Abercrombie (1967) proposed that this was a general pattern of temporal organization for all languages of the world. Like Spanish and Italian, French, Telugu, and Yoruba are syllable-timed. And, like English, Russian and Arabic are stress-timed.

A third rhythmic class was later added by Ladefoged (1975) to account for Japanese, whose rhythm differs both from that of English and from that of Spanish. According to Ladefoged, in Japanese, isochrony is maintained at the level of the mora, a sub-syllabic constituent that includes either an onset and a nucleus, or a coda. Japanese – and languages with a similar rhythm, e.g. Tamil – were thus characterized as mora-timed.

In terms of metrical or prosodic phonology, this proposal amounts to establishing that, in the languages of the world, the requirement of isochrony holds at one of three phonological constituents: going from the smallest to the largest of the three, the mora (μ), the syllable (σ), or the foot. The three different types of isochrony are illustrated in (4).

(4) a. stress-timing:   σ́σ  σ́σσ  σ́σ  σ́σ  σ́σσ  σ́σ
    b. syllable-timing: CV  CCVC  CV  CV  CVC
    c. mora-timing:      σ    σ    σ
                        μμ   μμ    μ
                        CVV  CVC   CV

The different types of isochrony are mutually exclusive: isochrony of both syllables and feet would be possible only in an ideal language – to the best of our knowledge not attested – in which all the syllables were of the same type and in which secondary stresses were maximally alternating. Syllabic isochrony is also incompatible with moraic isochrony: the two would be compatible only in a language in which all syllables had the same number of moras. This case is attested, to the best of our knowledge, only in Hua, a language spoken in Papua New Guinea (Blevins 1995), and in the West African language Senufo (Fasold and Connor Linton 2006), both reported to have only CV syllables.


It is important to observe that this three-way distinction into rhythmic classes was not meant to deny the relevance of stress for either syllable- or mora-timed languages. Different levels of stress are, in fact, undeniable cross-linguistically. In terms of the phonology of rhythm, the distinction was meant to identify different rhythms exclusively at the basic level. According to this conception of rhythm, as isochrony maintained at one of three different levels, belonging to one or the other group would have consequences for the phonology of a language. For example, if feet are isochronous, the syllables of polysyllabic feet should be reduced in duration, while the only syllable of a monosyllabic foot should be stretched.

The basic dichotomy between syllable- and stress-timing was largely taken for granted until various phoneticians, on the basis of measurements in different languages, showed that isochrony was not present in the signal. It was shown that interstress intervals in English vary in duration proportionally to the number of syllables they contain, so that the duration of the intervals between consecutive stresses is not constant (Shen and Peterson 1962; O’Connor 1965; Lea 1974). A similar result was obtained for Spanish: syllables were found to vary in duration in proportion to the number of segments they contain. Interstress intervals, instead, were found to have similar durations – an unexplainable fact if isochrony were a characteristic of the syllabic rather than the foot level (Borzone de Manrique and Signorini 1983). Similarly, Dauer (1983), on the basis of an analysis of several syllable-timed languages (Spanish, Greek, and Italian), and of English as an example of a stress-timed language, concluded that the duration of interstress intervals does not differ across the different languages. Rather, the timing of stresses reflects universal properties of rhythmic organization. Similar conclusions are reached by den Os (1988): in a comparative study of Italian and Dutch utterances she showed that if the phonetic material of the two languages is kept similar – by selecting utterances in the two languages with an identical number of segments and syllables – their rhythm in terms of isochrony is similar.

The difference in rhythm between “machine-gun” languages and “Morse-code” languages is, however, an undeniable fact. If isochrony between different types of constituents is not at the basis of this clear rhythmic difference, what are the factors responsible for it? Dauer (1983) observed that various phonological properties distinguish the two groups of languages: for example, “syllable-timed” languages have a smaller variety of syllable types than “stress-timed” languages, and they do not display vowel reduction (see chapter 79: reduction). These two characteristics are responsible for the fact that syllables in syllable-timed languages are more similar to each other in duration. In Spanish and French, for example, more than half of the syllables (by type frequency) consist of a consonant followed by a vowel (CV) (Dauer 1983). In Italian, 60 percent of the syllable types are CV (Bortolini 1976). The illusion of isochrony thus finds its origin in different phonological characteristics of languages, and not in different temporal organizations.

These considerations, together with the existence of languages that are neither clearly classifiable as syllable-timed nor as stress-timed, such as Catalan, European Portuguese, and Polish, led Nespor (1990) to conclude that there is no rhythm parameter. Establishing different rhythms as the cause – rather than the effect – of various phonological phenomena would, in addition, fail to account for the fact that very similar phenomena apply to eliminate arhythmic configurations in, for example, English and Italian. In both
languages, adjacent primary word stresses constitute a stress clash and are eliminated in much the same way (Liberman and Prince 1977; Nespor and Vogel 1979, 1989). That languages vary in their rhythm is a fact. However, from these studies it can be concluded that it is not different rhythms that trigger different phonological phenomena. Rather, different rhythms arise as a consequence of a series of independent phonological properties (cf. also Dasher and Bolinger 1982).

4 Infants’ sensitivity to rhythmic classes

Linguists were not alone in investigating rhythmic classes. The discovery in developmental psychology that newborns are capable of discriminating a switch from one language to another (Mehler et al. 1987; Mehler et al. 1988) triggered further experiments to explore which cues were responsible for this early human ability. In particular, the grouping of languages into different rhythmic classes attracted the attention of cognitive scientists interested in understanding how language develops in the infant’s brain. Mehler et al. (1996) relied on the classification of languages into syllable-timed, stress-timed, and mora-timed to advance a proposal as to how infants may access the phonological system of the language they are exposed to. In particular, they proposed that the rhythmic class of the language of exposure determines the unit exploited in the segmentation of connected speech: infants exposed to stress-timed languages would use the stress foot (that is, the interstress interval), those exposed to syllable-timed languages the syllable, and those exposed to mora-timed languages the mora (Cutler et al. 1986; Otake et al. 1993; Mehler et al. 1996).

Most convincing are a number of experiments carried out with French newborns, which show that they are able to discriminate English from Japanese, but not English from Dutch, in low-pass filtered sentences, that is, in sentences whose segmental information is reduced, while prosodic information is largely preserved (Nazzi et al. 1998). In order to show that rhythm – rather than any other property of the test languages – is responsible for this discrimination ability, Nazzi et al. also tested newborns on a set of randomly intermixed English and Dutch sentences, and showed that they discriminate these from a set of randomly intermixed Spanish and Italian sentences. However, the discrimination ability disappeared when the newborns were tested on a set of English and Spanish sentences vs. a set of Italian and Dutch sentences.

Thus the intuitions that many phoneticians shared about different rhythms in, for example, English and Italian are confirmed by newborns’ sensitivity to this distinction. It is thus clear that some physical property must be present in the signal to account for this difference, but until recently it has not been clear what this property was.

5 Rhythm as alternation at all levels

If isochrony is not responsible for the machine-gun and Morse-code effects, we should ask which characteristics of the signal are responsible for them. That is, what is there in the signal that would account for the clear rhythmic differences between languages belonging to different classes? Or what is the element that establishes
order at this level? Ramus et al. (1999) answered this question starting from the hypothesis that newborns hear speech as a sequence of vowels interrupted by unanalyzed noise, i.e. consonants; this hypothesis is known as the Time-Intensity Grid Representation (TIGRE; Mehler et al. 1996). Ramus et al. (1999) proposed that, at the basic level, the perception of different rhythms is created by the way in which vowels alternate with consonants. It is thus the regularity with which vowels recur that establishes alternation at this level: vowels alternate with consonants. Starting from the observation that as we go from stress-timed to syllable-timed and then to mora-timed languages the syllabic structure tends to get simpler – and the observation that simple syllables imply the presence of proportionately greater vocalic spaces – vowels would occupy less time in the flow of speech in stress-timed languages than in syllable-timed languages. Likewise, in syllable-timed languages vowels would occupy less time than in mora-timed languages, which have the largest amount of time per utterance occupied by vowels. This difference is clear from the rough division into Vs and Cs in the three sentences in (5)–(7). Notice that, in agreement with Ramus et al. (1999), glides are treated as Cs if prevocalic and as Vs if postvocalic.

English
The next local elections will take place during the winter
CVCVCCCCVVCVCVCVCCCVCCCVCCVVCCVVCCVCVCCV

(6)

Italian
Le prossime elezioni locali avranno luogo in inverno
CVCCVCCVCVVCVCCVCVCVCVCVVCCVCCVCCVCVVCVCCVCCV

(7) Japanese
    Tsugi no chiho senkyo wa haruni okonawareru daro
    CVCVCVCVCVCVCCVVCVCVCVCVVCVCVCVCVCVCV

Ramus et al. (1999) tested this idea on a corpus of eight languages: English, Polish, and Dutch, representatives of the stress-timed category; French, Italian, Spanish, and Catalan, representatives of the syllable-timed category (Abercrombie 1967); and Japanese, representative of the mora-timed languages (Ladefoged 1975). They observed that languages from the same rhythmic class had similar values for %V – i.e. a similar amount of time occupied by vowels in the speech stream – as compared to languages from different rhythmic classes. The computation of %V was carried out on the basis of a careful segmentation, drawing on both auditory and visual cues from the spectrogram (cf. Ramus et al. 1999). Given the assumption that newborns do not retain the difference between individual Cs and individual Vs, for each sentence only the vocalic and consonantal intervals were measured. Adjacent vowels and adjacent consonants are thus treated as vocalic and consonantal chunks, respectively. A second measure that clusters the languages into three groups is the standard deviation of the duration of consonantal intervals (ΔC), i.e. a broad measure of the regularity with which vowels recur (see Figure 48.1).

Both measures are related to syllable structure. A high %V implies that the repertoire of possible syllable types is restricted, and thus also that the consonantal intervals do not vary a great deal, given that there are no languages in which all syllables are complex. Rather, even in languages with the greatest variety of syllable types, the basic syllable type – CV – is the most unmarked (Blevins 1995; Rice 2007).

Figure 48.1 ΔC, the standard deviation of the consonantal intervals, vs. %V, the amount of time per utterance spent in vowels, for 14 languages. The widths of the ellipses along the two axes represent standard errors of the mean along those axes. Dark ellipses represent head-initial languages, and light ellipses head-final languages. Turkish, Hungarian, Basque, Finnish, Marathi, and Tamil are from unpublished results. Data for the remaining languages are from Ramus et al. (1999).

Thus, according to this proposal, rhythm is alternation at all levels: of consonants and vowels at the basic level, and of stressed and unstressed syllables, feet, and words at subsequent levels. This order in the flow of speech is always established by the alternation of more and less audible elements.
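To make the two measures concrete, the following is a minimal sketch (not the authors’ code; the interval durations are invented for illustration) of how %V and ΔC can be computed once an utterance has been segmented into vocalic and consonantal chunks:

# Hypothetical hand-segmented utterance: (label, duration in seconds),
# with 'V' = vocalic interval and 'C' = consonantal interval.
from statistics import pstdev

intervals = [('C', 0.09), ('V', 0.11), ('C', 0.14), ('V', 0.08),
             ('C', 0.21), ('V', 0.10), ('C', 0.07), ('V', 0.12)]

vowel_time = sum(d for label, d in intervals if label == 'V')
total_time = sum(d for _, d in intervals)

percent_v = 100 * vowel_time / total_time                        # %V
delta_c = pstdev([d for label, d in intervals if label == 'C'])  # ΔC

print(f"%V = {percent_v:.1f}, ΔC = {delta_c:.3f}")

On this logic, a language with mostly CV syllables yields a high %V and a low ΔC, while a rich cluster inventory pushes %V down and ΔC up.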

6 Other proposals

The analysis proposed by Ramus et al. (1999) is not the only one to rely on a purely acoustic-phonetic description of the speech stream in trying to understand the basis of linguistic rhythm. In Ramus et al. the %V and ΔC variables do not consider the relative ordering of long and short intervals inside an utterance. That is, sequences like CCVː.CCVː.CV.CV and CCVː.CV.CV.CCVː (where Vː is a long vowel, as opposed to VV, which denotes two different adjacent vowels) will yield identical values for their two variables. Grabe and colleagues therefore chose to examine the pairwise variability indices (PVI) in the vocalic and intervocalic intervals in speech (e.g. Low et al. 2000; Grabe and Low 2002). This measure is meant to capture a little more of the (local) temporal patterns in speech by considering the variability of all pairs of vocalic or intervocalic intervals.


The raw pairwise variability index is given by the formula in (8):

(8) $\mathrm{rPVI} = \frac{1}{m-1} \sum_{k=1}^{m-1} \lvert d_k - d_{k+1} \rvert$

where m is the number of intervals and d_k is the duration of the k-th interval. Thus, for example, in a language with only simple syllable types, the average durational variability between successive consonantal intervals will be less than in a language with many syllable types. The former language will therefore have a lower consonantal PVI than the latter. Grabe and Low showed that the PVI of the vocalic and intervocalic intervals in speech also captures some of the rhythmic distinctions between languages. Thus, as in Ramus et al., these authors too find a measure – the pairwise variability of the vocalic intervals – that separates stress-timed languages (higher PVI) from syllable-timed languages (lower PVI). However, there remain some discrepancies between these measures and those of Ramus et al. For example, while %V and ΔC clearly separate Japanese (“mora-timed”) from the “syllable-timed” languages, this difference is not apparent with PVIs. Since one of the purposes of a theory of grammar in general, and of rhythm in particular, is to account for first-language acquisition (cf. §7 below), infants’ discrimination or lack of it between a “syllable-timed” and a “mora-timed” language would help decide which of the two theories makes the correct prediction. The fact that Nazzi et al. (2000) found that 5-month-old infants raised in an American English environment discriminate Japanese from Italian leads us to prefer the %V and ΔC proposal.

Both the analysis proposed by Ramus et al. and that proposed by Grabe and Low start with an initial categorization of the speech stream into vocalic and intervocalic intervals. A different approach is proposed by Galves et al. (2002), who try to avoid any analysis into categories (e.g. vowels and consonants). Instead, they use a measure of sonority, as estimated directly from the spectrogram. In particular, they use a procedure that maps regions of the spectrum as more or less stable (i.e. constant) over short periods of time, as measured by the entropy from one time-slice to the next. This is a fully automatic procedure, and the value for each set of time-slices of the spectrogram goes from 0 to 1. Values close to 1 correspond to a regular spectrum with little variation (low entropy), typical of sonorants, and values close to 0 reflect noisy spectra (high entropy), as might be expected for obstruents. These authors then consider measures of mean and variation in the “sonority” of time-slices as analogues of Ramus et al.’s measures, i.e. %V and ΔC. This proposal has the great advantage of being based on an automated method, and the authors succeed in roughly replicating the observations of Ramus et al. on their corpus: they can segregate the stress-, syllable-, and mora-timed languages in a similar, though less precise, manner.

An alternative attempt to determine the rhythmic class of a language automatically has been elaborated by Singhvi and Gomez (in progress). It consists of algorithms for vowel and consonant recognition based on a variety of acoustic features, which allow one to compute %V and ΔC directly from the speech stream.

As noted earlier (§3), linguistic rhythm does not correspond to isochrony at the level of different phonological units in speech. In order nevertheless to capture the intuition of rhythmicity, O’Dell and Nieminen (1999) propose a coupled-oscillator model for speech rhythm. In this model, a lack of overt isochrony is seen as the result of a tension between two rhythmic oscillators, one for stress-groups (roughly, feet)
and one for syllables. In a general mathematical model of two coupled oscillators, a single parameter determines which of the two oscillators (e.g. the foot or the syllable oscillator) is dominant. It turns out that this parameter can be directly estimated from speech: for stress-timed languages it is large (>1, corresponding to a dominant foot oscillator), and for syllable-timed languages it is small (≤1, corresponding to a dominant syllable oscillator). Thus, in this approach, the idea is that simple, overt isochrony in speech might be constrained by the requirement for temporal coordination across hierarchically organized phonological units in speech (see also Cummins and Port 1998).
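As a concrete illustration of the rPVI in (8) above, the following is a minimal sketch (hypothetical interval durations; not Grabe and Low’s implementation):

def raw_pvi(durations):
    """Mean absolute difference between successive interval durations, cf. (8)."""
    m = len(durations)
    if m < 2:
        raise ValueError("rPVI needs at least two intervals")
    return sum(abs(durations[k] - durations[k + 1])
               for k in range(m - 1)) / (m - 1)

# A language with uniform (simple-syllable) consonantal intervals yields a
# low rPVI; greater durational variability yields a higher one.
print(raw_pvi([80, 85, 78, 82]))    # low variability -> small rPVI
print(raw_pvi([60, 140, 70, 180]))  # high variability -> large rPVI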

7 Rhythm and related properties of grammar: Implications for language acquisition

Given infants’ sensitivity to basic rhythmic properties of languages, as proposed above, we must ask what implications this sensitivity could have for language acquisition. The questions to be addressed are: do the different rhythmic cues reflect specific grammatical properties? And might the infant be able to use such cues to bootstrap the related properties?

Although there is no one-to-one mapping between phonology and syntax, the two are correlated (cf., amongst many others, Selkirk 1984; Nespor and Vogel 1986, 2008; Morgan and Demuth 1996). On the one hand, it has been shown that the acoustic correlates of prosodic phenomena that signal phonological constituency can allow disambiguation of otherwise ambiguous sequences of words (Cooper and Paccia-Cooper 1980; Nespor and Vogel 1986, 2008; Price et al. 1991). On the other hand, typologists have documented several aspects of phonology and syntax that go together (e.g. Whaley 1997). Indeed, typological studies have revealed a wealth of correlations between different aspects of language, such as morphology, phonology, and syntax. Thus, Greenberg (e.g. 1963) observes that whether a language is VO (verb–object) or OV (object–verb) is correlated with many other grammatical properties of the language. For example, he notes that verb-final languages almost always have a case system. In addition, Koster (1999) observes that most OV languages have flexible word order. In a typological study, Donegan and Stampe (1983) suggest that languages with simple syllabic structures tend to be verb-final. Similarly, Fenk-Oczlon and Fenk (2005) find that languages with simple syllables tend to have postpositions (i.e. to be OV) and richer case systems. Several authors have proposed functional explanations for these observed correlations (e.g. Comrie 1981; Cutler et al. 1985; DuBois 1987; Hawkins 1988; Fenk-Oczlon and Fenk 2004, amongst others).

From the point of view of acquisition, these correlations suggest that any cues in the input that lead to the acquisition of a single property might also provide cues and biases for acquiring all the (functionally) related properties. Thus, a cue to a phonological property might also provide cues to morphology and syntax. Indeed, there have been some concrete proposals for how phonology might allow infants to bootstrap a basic syntactic property like word order (Nespor et al. 1996; Christophe et al. 1997; Nespor et al. 2008).

Given that newborns show great sensitivity to rhythmic classes, we can speculate that this ability might be useful in bootstrapping various properties of their target language. As we saw in §5, languages from the different rhythmic
classes differ in their syllabic structure: going from a low %V to a high %V, languages go from having more complex to having simpler syllabic structures. Typologists have in fact observed that various morphosyntactic properties are correlated with the complexity of syllables in a language (Gil 1986; Fenk-Oczlon and Fenk 2004) and, in addition, with its rhythmic patterns (Donegan and Stampe 1983). The computation of %V might therefore offer cues to very different properties of the language of exposure.

Shukla et al. (in progress) hypothesize that the correlates of linguistic rhythm, %V and ΔC, have consequences for acquiring correlated morphosyntactic properties like agglutination and word order. These researchers extend the results of Ramus et al. to a larger and more varied set of languages. The results indicate that there is a tendency for languages with a low %V to differ from languages with a high %V in head direction, degree of agglutination, richness of the case system, and flexibility of word order (see Figure 48.1). Thus, it is proposed that a simple syllabic structure is correlated with agglutination: if many suffixes can be attached to a word, a complex syllabic structure would make these words excessively long and possibly hard to parse.

The question remains why agglutination is found almost exclusively in head-final languages. Two different reasons, both syntactic in nature, have been given. In van Riemsdijk (1998), the explanation for the correlation between head-finality and agglutinative morphology is based on head adjunction, the syntactic device that assembles independent, phonetically realized morphemes into complex words. A principle states that head adjunction can take place only between linearly adjacent heads; since heads are adjacent in head-final languages, while they are separated by intervening specifiers in head-initial languages, head adjunction – and thus agglutination – is expected to take place in OV languages only. More recently, Cecchetto (forthcoming) assumes that morphological conflation, responsible for fusional morphology, requires that a direct syntactic dependency be established between a selecting head and a selected one. However, in head-final languages this dependency would go backwards, since the selecting head linearly follows the selected one, and backward dependencies are disfavored for processing reasons (e.g. Fodor 1978). As a consequence, in head-final languages affixes cannot be fused, and agglutination results instead.

If there is indeed a syntactic explanation for the correlation between head direction and agglutination, the identification of the rhythmic class of the language of exposure would be one of the mechanisms assisting infants in bootstrapping both the favored morphological operations and the word order of the language to which they are exposed.

8 Concluding remarks

In conclusion, linguists’ intuitive notion of stress-timed and syllable-timed rhythm is most likely a consequence of the phonological organization of different languages, which can be captured by two relatively simple acoustic-phonetic cues, %V and ΔC. Languages appear to be grouped into three rhythmic classes: one corresponding to so-called stress-timed languages, one to so-called syllable-timed languages, and one to so-called mora-timed languages. The rhythmic class to which a language belongs appears also to determine the segmentation unit used by its
native speakers. Rhythm tends also to be correlated with a constellation of phonological, morphological, and syntactic properties of the language. The observation that newborns segregate languages on the basis of their rhythmic class suggests that this ability may be utilized in the acquisition of various such properties of their target language directly from the input.

REFERENCES

Abercrombie, David. 1967. Elements of general phonetics. Edinburgh: Edinburgh University Press.
Blevins, Juliette. 1995. The syllable in phonological theory. In John A. Goldsmith (ed.) The handbook of phonological theory, 206–244. Cambridge, MA & Oxford: Blackwell.
Bortolini, Umberta. 1976. Tipologia sillabica dell’Italiano: Studio statistico. In Studi di Fonetica e Fonologia, vol. 1, 2–22. Roma: Bulzoni Editore.
Borzone de Manrique, Ana Maria & Angela Signorini. 1983. Segmental durations and the rhythm in Spanish. Journal of Phonetics 11. 117–128.
Cecchetto, Carlo. Forthcoming. Backwards dependencies must be short: A unified account of the Final-over-Final and the Right Roof Constraints and its consequences for the syntax/morphology interface. In Theresa Biberauer & Ian G. Roberts (eds.) Challenges to linearization. Berlin & New York: Mouton de Gruyter.
Christophe, Anne, Marina Nespor, Maria-Teresa Guasti & Brit van Ooyen. 1997. Reflections on phonological bootstrapping: Its role in lexical and syntactic acquisition. In Gerry T. M. Altmann (ed.) Cognitive models of speech processing: A special issue of Language and Cognitive Processes, 585–612. Mahwah, NJ: Lawrence Erlbaum.
Comrie, Bernard. 1981. Language universals and linguistic typology. Oxford: Blackwell.
Cooper, William E. & Jeanne Paccia-Cooper. 1980. Syntax and speech. Cambridge, MA: Harvard University Press.
Cummins, Fred & Robert F. Port. 1998. Rhythmic constraints on stress timing in English. Journal of Phonetics 26. 145–171.
Cutler, Anne, John A. Hawkins & Gary Gilligan. 1985. The suffixing preference: A processing explanation. Linguistics 23. 723–758.
Cutler, Anne, Jacques Mehler, Dennis Norris & Juan Segui. 1986. The syllable’s differing role in the segmentation of French and English. Journal of Memory and Language 25. 385–400.
Dasher, Richard & David Bolinger. 1982. On pre-accentual lengthening. Journal of the International Phonetic Association 12. 58–71.
Dauer, Rebecca M. 1983. Stress-timing and syllable-timing reanalyzed. Journal of Phonetics 11. 51–62.
Donegan, Patricia J. & David Stampe. 1983. Rhythm and the holistic organization of language structure. Papers from the Annual Regional Meeting, Chicago Linguistic Society: Parasession on the interplay of phonology, morphology and syntax, 337–353.
DuBois, John. 1987. The discourse basis of ergativity. Language 64. 805–855.
Fasold, Ralf & Jeffrey Connor Linton. 2006. An introduction to language and linguistics. Cambridge: Cambridge University Press.
Fenk-Oczlon, Gertraud & August Fenk. 2004. Systemic typology and crosslinguistic regularities. In Valery Solovyev & Vladimir Polyakov (eds.) Text processing and cognitive technologies, 229–234. Moscow: MISA.
Fenk-Oczlon, Gertraud & August Fenk. 2005. Crosslinguistic correlations between size of syllables, number of cases, and adposition order. In Gertraud Fenk-Oczlon & Christian Winkler (eds.) Sprache und Natürlichkeit: Gedenkband für Willi Mayerthaler, 75–86. Tübingen: Gunther Narr.
Fodor, Janet Dean. 1978. Parsing strategies and constraints on transformations. Linguistic Inquiry 9. 427–473.
Galves, Antonio, Jesus Garcia, Denise Duarte & Charlotte Galves. 2002. Sonority as a basis for rhythmic class discrimination. In Bernard Bel & Isabel Marlien (eds.) Proceedings of the Speech Prosody 2002 conference, 323–326. Aix-en-Provence: Laboratoire Parole et Langage.
Gil, David. 1986. A prosodic typology of language. Folia Linguistica 20. 165–231.
Grabe, Esther & Ee Ling Low. 2002. Durational variability in speech and the rhythm class hypothesis. In Carlos Gussenhoven & Natasha Warner (eds.) Laboratory phonology 7, 377–401. Berlin & New York: Mouton de Gruyter.
Greenberg, Joseph H. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph H. Greenberg (ed.) Universals of language, 73–113. Cambridge, MA: MIT Press.
Hawkins, John A. 1988. Explaining language universals. In John A. Hawkins (ed.) Explaining language universals, 3–28. Oxford: Blackwell.
Koster, Jan. 1999. The word orders of English and Dutch: Collective vs. individual checking. In Werner Abraham (ed.) Groninger Arbeiten zur germanistischen Linguistik, 1–42. Groningen: University of Groningen.
Ladefoged, Peter. 1975. A course in phonetics. New York: Harcourt Brace Jovanovich.
Lea, Wayne A. 1974. Prosodic aids to speech recognition, vol. 4: A general strategy for prosodically-guided speech understanding. St Paul, MN: Sperry Univac.
Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8. 249–336.
Lloyd James, Arthur. 1940. Speech signals in telephony. London: Pitman & Sons.
Low, Ee Ling, Esther Grabe & Francis Nolan. 2000. Quantitative characterizations of speech rhythm: Syllable-timing in Singapore English. Language and Speech 43. 377–401.
Mehler, Jacques, Ghislaine Lambertz, Peter Jusczyk & Claudine Amiel-Tison. 1987. Discrimination de la langue maternelle par le nouveau-né. Comptes Rendus de l’Académie des Sciences de Paris 303(15). 637–640.
Mehler, Jacques, Peter Jusczyk, Ghislaine Lambertz, Nilofar Halsted, Josiane Bertoncini & Claudine Amiel-Tison. 1988. A precursor of language acquisition in young infants. Cognition 29. 143–178.
Mehler, Jacques, Emmanuel Dupoux, Thierry Nazzi & Ghislaine Dehaene-Lambertz. 1996. Coping with linguistic diversity: The infant’s viewpoint. In Morgan & Demuth (1996), 101–116.
Morgan, James L. & Katherine E. Demuth (eds.) 1996. Signal to syntax: Bootstrapping from speech to grammar in early acquisition. Mahwah, NJ: Lawrence Erlbaum.
Nazzi, Thierry, Josiane Bertoncini & Jacques Mehler. 1998. Language discrimination by newborns: Toward an understanding of the role of rhythm. Journal of Experimental Psychology: Human Perception and Performance 24. 756–766.
Nazzi, Thierry, Peter Jusczyk & Elizabeth K. Johnson. 2000. Language discrimination by English-learning 5-month-olds: Effects of rhythm and familiarity. Journal of Memory and Language 43. 1–19.
Nespor, Marina. 1990. On the rhythm parameter in phonology. In Iggy Roca (ed.) The logical problem of language acquisition, 157–175. Dordrecht: Foris.
Nespor, Marina & Irene Vogel. 1979. Clash avoidance in Italian. Linguistic Inquiry 10. 467–482.
Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Nespor, Marina & Irene Vogel. 1989. On clashes and lapses. Phonology 6. 69–116.
Nespor, Marina & Irene Vogel. 2008. Prosodic phonology. Berlin & New York: Mouton de Gruyter. 1st edn, 1986. Dordrecht: Foris.
Nespor, Marina, Maria-Teresa Guasti & Anne Christophe. 1996. Selecting word order: The rhythmic activation principle. In Ursula Kleinhenz (ed.) Interfaces in phonology, 1–26. Berlin: Akademie Verlag.
Nespor, Marina, Mohinish Shukla, Ruben van de Vijver, Cinzia Avesani, Hanna Schraudolf & Caterina Donati. 2008. Different phrasal prominence realization in VO and OV languages. Lingue e Linguaggio 7(2). 1–28.
O’Connor, J. D. 1965. The perception of time intervals. London: University College London.
O’Dell, Michael L. & Tommi Nieminen. 1999. Coupled oscillator model of speech rhythm. In John J. Ohala, Yoko Hasegawa, Manjari Ohala, Daniel Granville & Ashlee C. Bailey (eds.) Proceedings of the 14th International Congress of Phonetic Sciences, 1075–1078. Berkeley: Department of Linguistics, University of California, Berkeley.
Os, Els den. 1988. Rhythm and tempo in Dutch and Italian: A contrastive study. Ph.D. dissertation, University of Utrecht.
Otake, Takashi, Giyoo Hatano, Anne Cutler & Jacques Mehler. 1993. Mora or syllable? Speech segmentation in Japanese. Journal of Memory and Language 32. 258–278.
Pike, Kenneth L. 1945. The intonation of American English. Ann Arbor: University of Michigan Press.
Plato. The Laws, book II. 93.
Price, P. J., M. Ostendorf, S. Shattuck-Hufnagel & C. Fong. 1991. The use of prosody in syntactic disambiguation. Journal of the Acoustical Society of America 90. 2956–2970.
Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19–100.
Ramus, Franck, Marina Nespor & Jacques Mehler. 1999. Correlates of linguistic rhythm in the speech signal. Cognition 73. 265–292.
Rice, Keren. 2007. Markedness in phonology. In Paul de Lacy (ed.) The Cambridge handbook of phonology, 79–97. Cambridge: Cambridge University Press.
Riemsdijk, Henk van. 1998. Head movement and adjacency. Natural Language and Linguistic Theory 16. 633–678.
Selkirk, Elisabeth. 1984. Phonology and syntax: The relation between sound and structure. Cambridge, MA: MIT Press.
Shen, Yao & Giles G. Peterson. 1962. Isochronism in English. Occasional Papers, University of Buffalo Studies in Linguistics 9. 1–36.
Shukla, Mohinish, Marina Nespor & Jacques Mehler. In progress. Grammar on a language map.
Singhvi, Abhinav & David M. Gomez. In progress. Automatic recognition of language rhythm.
Whaley, Lindsay J. 1997. Introduction to typology: The unity and diversity of language. Thousand Oaks, CA: Sage.

49

Sonority

Steve Parker

1 Introduction

If an interface between phonetics and phonology really exists (pace Ohala 1990b), then one topic having a long and controversial history in that domain is sonority. Sonority can be defined as a unique type of relative, n-ary (non-binary) feature-like phonological element that potentially categorizes all speech sounds into a hierarchical scale. For example, vowels are more sonorous than liquids, which are higher in sonority than nasals, with obstruents being the least sonorous of all segments. In terms of traditional phonetic systems for categorizing natural classes of sounds, then, the feature encoded by sonority most closely corresponds to the notion manner of articulation (see chapter 13: the stricture features). In this sense, sonority is like most other features: it demarcates groups of segments that behave similarly in cross-linguistically common processes. At the same time, however, sonority is unlike most features in that it exhaustively encompasses all speech sounds simultaneously, i.e. every type of segment has some inherent incremental value for this feature. Sonority is also unique in that it has never been observed to spread (assimilate), in and of itself.

A major function of sonority is to organize (order) segments within syllables. Specifically, more sonorous sounds, such as vowels, tend to occur in the nucleus, while less sonorous sounds normally appear in the marginal (non-peak) positions – onsets and codas. This concept has engendered several chronic and frequently discussed research questions:

(1) a. What role, if any, does sonority play in Universal Grammar?
    b. How many and what kinds of natural class distinctions need to be made in the sonority hierarchy?
    c. Are its rankings fixed or permutable (reversible)?
    d. Which distinctions in the sonority scale, if any, are universal and which, if any, are language-particular?
    e. Is sonority an abstract phonological mechanism only, or does it also have a consistent, measurable phonetic basis?

To answer (1e) briefly, the main acoustic correlate of sonority is intensity. As Ladefoged (1975: 219) notes, “The sonority of a sound is its loudness relative to
that of other sounds with the same length, stress, and pitch.” Nevertheless, although much progress has been made in addressing the issues in (1), little consensus has emerged in understanding many of them. This chapter touches on each of the questions in (1), although not necessarily in the same order or to the same degree. The goal is to summarize the debates and document the types of empirical data that have been presented in arguing for the different positions.

This chapter is organized as follows: §2 reviews the cross-linguistic phonological evidence for sonority. Thus §2.1 discusses the Sonority Sequencing Principle, §2.2 Minimum Sonority Distance effects, §2.3 the Syllable Contact Law, and §2.4 the contribution of sonority to the relative weight of the rhyme. §3 describes the Sonority Dispersion Principle, while §4 presents several desirable characteristics that a complete sonority hierarchy should ideally display. Finally, §5 examines the physical basis of sonority, as demonstrated experimentally.

2 Phonological evidence for sonority

This section describes various phonological phenomena that demonstrate that sonority is active in many languages. The exposition summarizes works discussing the issues in more depth, such as Parker (2002) and Cser (2003). Phonotactic constraints and morphophonemic alternations provide the most compelling evidence for establishing the divisions in the sonority hierarchy. Consequently, most of the argumentation here relies on these two factors. Several patterns are attested in enough languages to motivate the hypothesis that some notion of sonority should be considered part of Universal Grammar (UG; the innate linguistic faculty shared by all humans; Chomsky and Halle 1968; Kenstowicz 1994). How sonority is best expressed in UG is a separate topic, not discussed in detail here. In many cases, opposing points of view exist, with some linguists denying that sonority is actually involved in these phenomena. See e.g. Ohala (1974, 1990a, 1990b) and Kawasaki (1982) for arguments against appealing to sonority as an explanation for these data.

2.1 The Sonority Sequencing Principle

The domain in which sonority is most often invoked is the syllable, and related notions such as permissible consonant clusters in onset or coda position. This reflects the analogy that the syllable is like a wave of energy (Sievers 1893; Pike 1943). Specifically, syllables universally tend to abide by the following constraint:

(2) Every syllable exhibits exactly one peak of sonority, contained in the nucleus.

This is known as the Sonority Sequencing Principle (SSP) or the Sonority Sequencing Generalization. Key works assuming this principle as a basis for analysis include Hooper (1976), Selkirk (1984), Blevins (1995), and, in Optimality Theory, Cho and King (2003) and Zec (2007). Ohala (1990a, 1990b) and Wright (2004) note that a rudimentary notion of the SSP is observed in the work of de Brosses (1765). For the purpose of formally encoding and testing the SSP, the most frequently cited sonority scale is the following:

(3) Modal sonority hierarchy (e.g. Clements 1990; Kenstowicz 1994; Smolensky 1995)

    vowels > glides > liquids > nasals > obstruents
    (higher in sonority on the left; lower in sonority on the right)

Table 49.1 Cross-linguistic variation in syllabic segments based on sonority

                                                     Vowels   Liquids   Nasals   Obstruents
Bulgarian, Hawaiian, Kabardian, Latin, Spanish         ✓        –         –         –
Lendu, Sanskrit, Slovak                                ✓        ✓         –         –
English                                                ✓        ✓         ✓         –
(Central) Carrier, (Imdlawn) Tachelhit (Berber)        ✓        ✓         ✓         ✓

In terms of sonority, the five natural classes in (3) are the easiest ones to motivate and the most useful ones to employ. Assuming the SSP and the hierarchy in (3), hypothetical syllables like [ta], [kɾu], [wos], and [pʰlænt] are well formed, since their sonority slope uniformly rises from the beginning of the syllable to the nuclear vowel, and falls from the nucleus to the end of the syllable. Conversely, syllables containing “sonority reversals” such as [lpa] and [odm] violate (2) and are therefore illicit in most languages.

One argument for the SSP is that cross-linguistically the inventory of [+syllabic] segments in particular languages normally forms a continuous range based on a scale like (3). Thus, the propensity for a sound to occur in nuclear position is correlated with how sonorous it is. The typology of permissible syllabic segments across languages is illustrated in Table 49.1, adapted from Blevins (1995) and Zec (2007).1 The generalization is that if a language permits syllabic segments from a lower sonority class, it also allows nuclei from all higher sonority classes. In Tachelhit even voiceless stops occur in nuclear position. However, glides are omitted here since by definition they are non-nuclear. The following example lists forms containing syllabic consonants from two of these languages, where [.] marks a syllable boundary (Parker 2002; Zec 2007; Ridouane 2008):2

(4) a. Slovak
       [kr̩.vi]  ‘blood’
       [vl̩.ka]  ‘wolf’
    b. Tachelhit
       [tz̩.dmt]      ‘gather wood’
       [tr̩.gl̩t]      ‘lock’
       [tf̩.tk̩.tstt]  ‘you sprained it (fem)’

1 Language names and genetic affiliations follow the Ethnologue (Lewis 2009). In the online version of this chapter, the appendix provides more details about the languages cited here: country, linguistic phylum, primary source of data, etc.
2 The online version of this chapter contains more illustrative data throughout.
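To see how the SSP in (2) can be applied mechanically, here is a minimal sketch (not from the chapter; the segment-to-class mapping is a simplified assumption) that accepts a syllable only if sonority rises strictly to a single peak and then falls:

# Sonority indices for the five classes of the hierarchy in (3).
SONORITY = {'vowel': 5, 'glide': 4, 'liquid': 3, 'nasal': 2, 'obstruent': 1}

# Toy segment inventory, for illustration only.
CLASS_OF = {**dict.fromkeys('aeiou', 'vowel'),
            **dict.fromkeys('jw', 'glide'),
            **dict.fromkeys('lr', 'liquid'),
            **dict.fromkeys('mn', 'nasal'),
            **dict.fromkeys('pbtdkgszf', 'obstruent')}

def obeys_ssp(syllable):
    """True iff the sonority profile has exactly one peak, cf. (2)."""
    profile = [SONORITY[CLASS_OF[seg]] for seg in syllable]
    peak = profile.index(max(profile))
    rising = all(a < b for a, b in zip(profile[:peak], profile[1:peak + 1]))
    falling = all(a > b for a, b in zip(profile[peak:], profile[peak + 1:]))
    return rising and falling

print([obeys_ssp(s) for s in ('ta', 'kru', 'wos')])  # [True, True, True]
print([obeys_ssp(s) for s in ('lpa', 'odm')])        # [False, False]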

Nevertheless, while the pattern in Table 49.1 is a strong tendency, it is not universally obeyed. For example, many languages (especially in Africa) attest syllabic nasals but not syllabic liquids: e.g. Djeebbana, Lele (Chad), and Swahili (Blevins 2006). Thus, factors other than sonority must also be appealed to in some cases.

Besides syllabic consonants, another reason to adopt the SSP is that it accounts for tautosyllabic consonant clusters in most languages. The following example illustrates three languages that strictly follow the SSP in onsets and codas (Blevins 2006):

(5) a. Cheke Holo
       [kai.ka.fli]   ‘flash on and off’
       [kmai.kma.ji]  ‘eat a varied meal (reduplicated)’
    b. Djeebbana
       [ŋ̩.ka.la]   ‘fork’
       [kalk.bet]  ‘northern black wallaroo’
    c. Spanish
       [plan]           ‘plan’
       [tɾans.kɾi.βiɾ]  ‘to transcribe’

Again there are exceptions; (6) shows data from two languages in which the relevant consonant clusters apparently violate the SSP (Blevins 2006):

(6) a. Leti (Indonesia)
       [pni.nu]    ‘fool’
       [sraːt]     ‘main road’
       [rkaː.lu]   ‘they shout’
       [rstp.le]   ‘they sail’
    b. Yir Yoront
       [melt]  ‘animal, bird’
       [patl]  ‘clean, bald’

Counterexamples to the SSP also occur in some Indo-European languages, such as Czech, Romanian, and Russian. Extreme cases are found in Georgian, a Kartvelian language attesting the word-initial clusters /zrd/, /mkrt/, /msxv/ and /mtsvrtn/ (Blevins 2006). Such exceptions have led some researchers to conclude that analyses based on sonority are circular in nature (Ohala 1990b: 160). However, no studies exist in which the proportion of languages with sonority reversals is tabulated among a statistically reliable and balanced sample. Therefore, based on available data, it seems safe to conclude that a large percentage of the world’s languages do conform to the SSP.

Furthermore, purported counterexamples like Georgian /mtsvrtn/ are dubious if the onset cluster in question occurs only word-initially, but not word-internally. This is crucial since, in a given language, a consonant cluster should appear in a position other than at a word edge in order to count as a canonical syllable type. Otherwise, when a greater number of consonants show up next to a word boundary, it is debatable whether this constitutes a true syllable margin. A more principled explanation is to analyze the SSP-violating segment(s) as a degenerate syllable or an extrasyllabic appendix licensed by the prosodic word. See Cho and King (2003) for further discussion.

Finally, some alleged SSP violations cannot withstand further scrutiny. For example, Blevins (2006) lists Leti [rkaː.lu] in (6) above. However, van Engelenhoven (2004) states that word-initially before another consonant the trilled /r/, nasals, /l/ and /s/ are lengthened and “syllabic.” Thus, a more accurate transcription of this word is [r̩.kaː.lu]. Since the /r/ and the /k/ are not tautosyllabic, this is not a counterexample to the SSP. Rather, it confirms it.

Cross-linguistically, the most frequent exceptions to the SSP involve initial /s/ followed by a plosive, as in the English words spill, still, and skill. Morelli (2003) and Goad (chapter 38: the representation of sC clusters) focus on this phenomenon. Table 49.2 is adapted from the latter.

Table 49.2 Typological range of languages containing sC clusters

s +         Spanish   French,         Greek   English   Dutch   German   Russian
                      Western Keres
stop        –         ✓               ✓       ✓         ✓       ✓        ✓
fricative   –         –               ✓       –         ✓       –        ✓
nasal       –         –               (–)     ✓         ✓       ✓        ✓
lateral     –         –               –       ✓         ✓       ✓        ✓
rhotic      –         –               –       –         (–)     ✓        ✓

Summarizing this table, Goad observes that the lower a consonant is in sonority, the more preferred it is after an initial /s/. She thus posits that s + stop › s + nasal › s + lateral › s + rhotic, where “›” = “is more harmonic than.” This scale (minus the s) follows many sonority hierarchies that posit more natural class distinctions than (3), such as the maximally detailed scale in §4.

Morelli (2003) reaches a similar conclusion. She notes that many languages have onset cluster inventories comprising three main types: (1) stop + sonorant, (2) fricative + sonorant, and (3) fricative + stop. The first two satisfy the SSP, while the third reverses it (assuming that fricatives are more sonorous than stops; see below). Illustrative languages include Haida, Hindi, Hungarian, Isthmus Zapotec, Italian, Mohave, Swedish, Telugu, Yecuatla Totonac, and Yuchi. To her knowledge, however, no language exists that is analogous to these, yet completely follows the SSP: hypothetically, (1) stop + sonorant, (2) fricative + sonorant, and, crucially, (3) stop + fricative (not counting affricates). Consequently, she posits that among onset clusters consisting of two obstruents, the unmarked type is fricative + stop, where “unmarked” = phonologically default and most common (Kenstowicz and Kisseberth 1973: 3; de Lacy 2006).

Summarizing thus far, the SSP is a strong universal tendency but has exceptions. In some cases the reversals in the sonority slope are of the largest possible degree: an onset consisting of a glide followed by a voiceless stop. To illustrate, Santa María Quiegolani Zapotec exhibits many words like the following (Regnier 1993):

(7) [wkìt]   ‘game’
    [wtòːʔ]  ‘sell (completive)’
    [jkɾ]    ‘buy (potential)’

Nevertheless, typological generalizations can still be made. For example, most languages have more consonant cluster types and tokens obeying the SSP than violating it. Furthermore, more languages attest obstruent + liquid (OL) onset clusters, for instance, than the opposite (LO). This can be stated even more forcefully as an implicational universal: if a language allows complex onsets of the type
LO, it must permit OL clusters as well, whereas the inverse is not necessarily true (Greenberg 1978). However, this kind of observation cannot be extended to include all possible natural class combinations. For instance, Texistepec Popoluca does not permit obstruent + nasal (ON) clusters syllable-initially, yet it does allow NO clusters: [mbak] ‘my bone’ (Wichmann 2002).3 Consequently, absolute claims about the SSP tend to break down given enough languages. Nevertheless, one apparently exceptionless statement is the following:

(8) No language exists in which all tautosyllabic consonant clusters reverse the SSP.

Returning to the five-category sonority hierarchy in (3), many phonologists expand this by making subdivisions within three of the natural classes: vowels, liquids, and obstruents (cf. (27) in §4). For example, fricatives are often claimed to be more sonorous than stops (Hankamer and Aissen 1974; Steriade 1982, 1988; Kager 1999). To illustrate, in Sanskrit reduplication, when a verb base begins with a consonant cluster, the prefix retains the less sonorous of these two sounds. Thus, in (9), when a stop and a continuant are adjacent in either order, the reduplicant invariably surfaces with the stop:

(9) Sanskrit (from Whitney 1889)
    /pracʰ/  →  [pa-pracʰ]     ‘ask’
    /swar/   →  [sa-swar]      ‘sound’
    /tsar/   →  [ta-tsar]      ‘approach stealthily’
    /stʰaː/  →  [ta-stʰaː]     ‘stand’
    /tjaɟ/   →  [ta-tjaɟ]      ‘forsake’
    /ɕratʰ/  →  [ɕa-ɕratʰ]     ‘slacken’
    /druw/   →  [du-druw]      ‘run’
    /mluc/   →  [mu-mluc]      ‘set’
    /rdʰ/    →  [aːr-di-dʰam]4
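The pattern just illustrated – the reduplicant retains the least sonorous consonant of the base-initial cluster, as the next paragraph spells out – can be sketched as a simple selection over assumed sonority ranks (a hypothetical helper, not the chapter’s formalism):

# Sonority ranks assumed for illustration, with fricatives above stops,
# as argued in the surrounding text.
RANK = {'stop': 1, 'fricative': 2, 'nasal': 3, 'liquid': 4, 'glide': 5}

def reduplicant_consonant(onset):
    """Return the least sonorous consonant of a base-initial cluster.

    `onset` is a list of (segment, natural_class) pairs.
    """
    return min(onset, key=lambda pair: RANK[pair[1]])[0]

# /swar/ -> [sa-swar]: the glide outranks the fricative, so s is copied.
print(reduplicant_consonant([('s', 'fricative'), ('w', 'glide')]))  # s
# /mluc/ -> [mu-mluc]: the nasal is the least sonorous member.
print(reduplicant_consonant([('m', 'nasal'), ('l', 'liquid')]))     # m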

In (9) the obvious generalization is that the reduplicant copies the less sonorous consonant from the onset of the base, regardless of its relative position within the cluster. Otherwise, if all obstruents are equal in sonority, the analysis of this process is more complicated to express (Benua 1997; Hironymous 1999). For further data and discussion of Sanskrit reduplication, see chapter 119: reduplication in sanskrit. While the full details are complex, the pattern whereby the least sonorous segment emerges in the prefix is very regular. For a mathematical explanation of this effect, see §3.

3 Prenasalized stops (common in African languages) do not violate the SSP, since they are single phonemic units, not true sequences. Syllabic nasals, such as in hypothetical [ŋ̩.da], do not constitute tautosyllabic onsets either.
4 Whitney does not gloss this root, but notes that the form is aorist.

When underlying representations juxtapose sounds violating the SSP, these are repaired in four different ways cross-linguistically: (1) vowel epenthesis, (2) deletion, (3) syllabic consonants, and (4) metathesis. First, a vowel can be inserted to rescue the unsyllabifiable consonant, a process called stray epenthesis (Itô 1986). This occurs in Serbo-Croatian (Kenstowicz 1994):

(10)        masculine   neuter
     a.     pust        pusto     ‘empty’
            zelen       zeleno    ‘green’
     b.     dobar       dobro     ‘good’
            jasan       jasno     ‘clear’
     c.     bogat       bogato    ‘rich’
            križan      križano   ‘cross’

In the adjective paradigm in (10), the neuter is marked by the suffix /-o/. The masculine column displays no overt morphological marking. In (10b), [a] alternates with Ø. This vowel surfaces phonetically in the final syllable of the masculine forms, between the last two consonants, but is absent in the neuter column. The contrasting forms in (10c) contain an [a] in the second syllable in both columns. This demonstrates that the alternation in (10b) involves epenthesis of [a] in the masculine forms, not syncope of underlying /a/ in the neuter column.

The underlying representations of the roots in (10b) are /dobr/ and /jasn/. These end with a consonant cluster consisting of an obstruent followed by a sonorant. If they were syllabified directly into a complex coda, they would violate the SSP. In contrast, the root /pust/ in (10a) ends with a cluster in which sonority falls. Therefore stray epenthesis is not needed, since the sequence /pust/ can be exhaustively syllabified while respecting the SSP. The sonority profile of two of these contrasting roots is displayed in the following metrical-like grids (Jespersen 1904; Zec 1988; Clements 1990; Kenstowicz 1994):

(11)
vowel           *                *
glide           *                *
liquid          *                *        *
nasal           *                *        *
fricative       *   *            *        *
stop        *   *   *   *    *   *    *   *
            p   u   s   t    d   o    b   r

These grids employ the five-category sonority hierarchy in (3), supplemented by a split of the obstruents into fricatives and stops, motivated by the Sanskrit data in (9). As these figures show, the morpheme /pust/ in isolation (in the masculine column) contains one peak of sonority (the /u/), whereas /dobr/ contains two (the /o/ and the /r/). Consequently, the motivation for inserting a vowel in the second case ([dobar]) is to rescue the /r/, which cannot be incorporated into the same syllable as the /b/ without violating the SSP.
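The same contrast can be computed directly by counting sonority peaks; below is a minimal sketch (the segment-to-level mapping is an illustrative assumption, not the chapter’s formalism):

# Six sonority levels, matching the rows of the grids in (11).
LEVEL = {'p': 1, 't': 1, 'b': 1, 'd': 1, 'k': 1, 'g': 1,  # stops
         's': 2, 'z': 2, 'f': 2,                          # fricatives
         'm': 3, 'n': 3,                                  # nasals
         'l': 4, 'r': 4,                                  # liquids
         'j': 5, 'w': 5,                                  # glides
         'a': 6, 'e': 6, 'i': 6, 'o': 6, 'u': 6}          # vowels

def sonority_peaks(form):
    """Count local maxima in the sonority profile of a segment string."""
    p = [LEVEL[seg] for seg in form]
    return sum(1 for i, v in enumerate(p)
               if (i == 0 or p[i - 1] < v)
               and (i == len(p) - 1 or v > p[i + 1]))

print(sonority_peaks('pust'))  # 1 -> syllabifiable as is
print(sonority_peaks('dobr'))  # 2 -> the /r/ must be rescued: [dobar]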

A second process used to fix SSP violations is the deletion of an unlicensed (unsyllabifiable) consonant, known as stray erasure (Itô 1986). This process is illustrated by Ancient Greek. The following data show that complex onset and coda clusters are permitted, including word-medially (Steriade 1982; Kenstowicz 1994):

(12) klepʰ        ‘to steal’
     smerd.nos    ‘power, force’
     am.blus      ‘dull’
     as.tron      ‘star’
     a.elp.tos    ‘unhoped for’
     tʰelk.tron   ‘charm’
     pemp.tos     ‘sent’

The form [tʰelk.tron] demonstrates that up to four consonants can be concatenated intervocalically, provided the SSP is respected. However, in the reduplicated form /CV-grapʰ-stʰai/ → [gegrapʰtʰai] ‘to have been written’, the underlying /s/ at the beginning of the infinitival suffix occurs between two stops. If this word were assigned a sonority profile as in (11), the /s/ would constitute a peak of sonority. However, this /s/ is not syllabic, nor can it be incorporated into a syllable with the preceding /pʰ/ or the following /tʰ/ without violating the SSP. Consequently, since it is prosodically unparsable, it is elided. According to Steriade (1982) and Itô (1986), this phenomenon is a default universal mechanism automatically applying at the end of the derivation to clean up any remaining problems (see chapter 68: deletion).

A third strategy for dealing with SSP violations is to simply retain the offending consonant, in which case it is automatically realized phonetically as syllabic. English illustrates this with unstressed sonorant consonants in word-final clusters: prism, button, pickle, manner. Another example is Chamicuro (Parker 1989):

[w-usm-i] ‘I sing’ 1sg-sing-epenthetic

[w-usx-kati] ‘I sang’ 1sg-sing-past

Fourth, and most rarely, SSP violations are resolved by metathesis. The most convincing case of this to date is Western Farsi. When a final vowel is deleted by a general process of apocope, an obstruent or nasal in a potential coda cluster metathesizes with a following liquid. Otherwise (without metathesis), the final liquid would constitute a separate sonority peak, which this language does not allow (Hock 1985): (14)

tʃaxra → tʃarx ‘wheel’
suxra → surx ‘red’
vafra → barf ‘snow, ice’
asru → ars ‘tear’
vazra → gurz ‘club’
*namra → narm ‘soft’ (the form /namra/ is preceded by *, since it is reconstructed)

(cf. suhr-ab ‘ruddy goose’)

Hock (1985) attributes this alternation to the SSP. However, from his description this is primarily a historical process, so it may no longer be synchronically active.
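All four repair strategies respond to the same configuration: a consonant that projects a sonority peak of its own, as in the grids in (11). Since the diagnostic is mechanical, it can be stated directly. The following is a minimal sketch in Python, assuming the six-level scale the grids use; the small segment-classification table is purely illustrative and is not from this chapter:

```python
# Six-level scale assumed by the grids in (11): the five classes of (3),
# with obstruents split into stops and fricatives (cf. the Sanskrit data in (9)).
SONORITY = {"vowel": 6, "glide": 5, "liquid": 4, "nasal": 3, "fricative": 2, "stop": 1}

# Illustrative classifications for the segments used here (not an exhaustive table).
CLASS_OF = {"p": "stop", "t": "stop", "b": "stop", "d": "stop", "s": "fricative",
            "n": "nasal", "r": "liquid", "l": "liquid",
            "a": "vowel", "o": "vowel", "u": "vowel"}

def sonority_peaks(segments):
    """Count local sonority maxima; each peak must head its own syllable (SSP)."""
    ranks = [SONORITY[CLASS_OF[seg]] for seg in segments]
    peaks = 0
    for i, rank in enumerate(ranks):
        left = ranks[i - 1] if i > 0 else 0
        right = ranks[i + 1] if i < len(ranks) - 1 else 0
        if rank > left and rank > right:
            peaks += 1
    return peaks

print(sonority_peaks("pust"))  # 1 peak (/u/): exhaustively syllabifiable as is
print(sonority_peaks("dobr"))  # 2 peaks (/o/, /r/): /r/ triggers a repair
```

A form with more sonority peaks than available nuclei is precisely the situation that epenthesis, stray erasure, syllabic retention, and metathesis each resolve.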

2.2 Minimum Sonority Distance

While the SSP rules out many of the prohibited syllable types in most languages, it is not the full story.


Table 49.3 Minimal Sonority Distance language types

MSD      Maximal inventory of permissible onset clusters   Languages
MSD = 0  OO, ON, OL, OG, NN, NL, NG, LL, LG, GG            Bulgarian, Leti
MSD = 1  ON, OL, OG, NL, NG, LG                            Chukchee
MSD = 2  OL, OG, NG                                        Gizrra, Kurdish, Spanish
MSD = 3  OG                                                Mono, Panobo, Japanese (?), Mandarin Chinese (?)

For example, the three syllables [kna], [kla], and [kwa] equally satisfy the SSP. Nevertheless, although many languages permit onset clusters such as [kl] and/or [kw], syllables like [kna] are much less common. One explanation for this asymmetry is a language-specific parametric requirement that the members of a tautosyllabic consonant cluster be separated by a minimum number of ranks on the sonority scale (Steriade 1982; Selkirk 1984). For example, /k/ and /l/ are sufficiently distinct in relative sonority and may therefore be combined. However, /k/ and /n/ are too close along this scale, and this is not tolerated in many languages. Conversely, a language like Russian, which permits words like /kniga/ ‘book’, has a lower threshold on this parameter. This condition is captured by the following principle: (15)

Minimal Sonority Distance (MSD) Given an onset composed of two segments, C1 and C2, if a = Sonority Index of C1 and b = SI(C2), then b − a ≥ x, where x ∈ {0, 1, 2, 3}.
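Stated this way, (15) is directly computable. A minimal sketch, using the five-category consonant indices of (3) (O = 1, N = 2, L = 3, G = 4); it also regenerates the onset inventories listed in Table 49.3 above:

```python
# The MSD condition in (15): an onset C1C2 is licit iff SI(C2) - SI(C1) >= x.
SI = {"O": 1, "N": 2, "L": 3, "G": 4}

def onset_ok(c1, c2, msd):
    """True iff SI(C2) - SI(C1) >= msd; a non-negative threshold also
    subsumes the SSP ban on onsets with falling sonority."""
    return SI[c2] - SI[c1] >= msd

for msd in range(4):
    inventory = [a + b for a in "ONLG" for b in "ONLG" if onset_ok(a, b, msd)]
    print(msd, inventory)
# msd = 0 -> OO, ON, OL, OG, NN, NL, NG, LL, LG, GG  (Bulgarian, Leti)
# msd = 3 -> OG                                      (Mono, Panobo)
```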

Assuming the sonority hierarchy in (3), the typology of possible languages shown in Table 49.3 is generated (cf. Zec 2007). The generalization is that if a language permits clusters with a lower sonority distance, it allows clusters of all higher sonority distances as well, ceteris paribus. The inverse of this is not true. The reversed counterparts of these onsets, such as *LO, can be excluded by the independently motivated SSP when necessary (§2.1). The data in (16) illustrate typical consonant sequences from each of the four language types in Table 49.3. Naturally, not every cluster type is fully productive for all phoneme combinations in these languages. Nevertheless, enough representative examples occur to justify the general trends. (16)

a. Leti (van Engelenhoven 2004): [ptu.na] ‘star’, [tmu.ra] ‘tin’, [kru.ki] ‘crab (sp.)’, [x.kwo.ri] ‘you (sg) lift’

b. Chukchee (Kämpfe and Volodin 1995): [plətkuk] ‘end, finish, conclude’, [qlikkin] ‘twenty’, [tɾetʃejwəːʔe] ‘I will go’, [ljuɾ] ‘suddenly’

c. Gizrra (van Bodegraven and van Bodegraven 2005): [gles] ‘dew’, [ta.pɾaz.də] ‘on (his) fangs’, [djao] ‘palm (sp.)’, [uɾ.mjao] ‘tree (sp.)’

d. Panobo (Parker 1992): [hwhn.ti] ‘heart’, [βwi.ni.k]1] ‘they are taking, carrying’, [pja.ka] ‘nephew, niece’, [wa.ta.tjan] ‘last year’

As Table 49.3 displays, a significant implication of the MSD approach is that the ideal onset cluster consists of an obstruent plus a glide, all else being equal. Thus, if a language allows complex onsets and has glides in its phonemic inventory, it must permit stop + glide clusters. This is the only onset sequence occurring in all four language types in Table 49.3. An explanation for this is that these two natural classes (stops and glides) are maximally separated in terms of their relative sonority, since they occupy the extreme ends of the scale (among consonants). Baertsch (2002) proposes one way to capture MSD effects in Optimality Theory (OT: Prince and Smolensky 1993). The corresponding prediction is that some languages should exist which permit OG but no other onsets. Two such cases are Mono (Democratic Republic of the Congo; Olson 2005) and Panobo. Other possibilities are Japanese (Vance 1987, 2008) and Mandarin Chinese (Yuan 1989). The latter two are listed in Table 49.3, followed by question marks to highlight their controversial status. Also, in Hindi (Ohala 1983) and Koluwawa (Guderian and Guderian 2005), the only initial clusters are OG and NG, but not *OL.

There is, however, a problem. In a sequence like [kwa], the [w] is potentially ambiguous since it allows different phonological interpretations. A priori it could pertain to a diphthongal nucleus rather than the onset: [ku̯a]. Alternatively, it might be a secondary articulation (labialization) of the preceding /k/: [kʷa]. If so, then there really is no consonant cluster, just a single complex phonemic unit. The third possibility is that [kw] simply constitutes a true onset cluster, as in Panobo. Teasing apart these different conclusions is complicated, and often the language-specific evidence is not compelling either way. Unfortunately, then, when no other canonical onset clusters (such as OL) exist in a language, the argumentation is in danger of circularity regardless of which segmentation is posited. See §3 for an alternative model that claims that the unmarked initial cluster is not OG but OL.

Finally, in the MSD approach, the sonority distance between the second onset consonant and the vowel is not crucial, because phonotactic restrictions rarely obtain across onset–nucleus junctures (Blevins 1995). However, see §3 for an approach in which the nature of this sequence (C2 + V) does matter. See also chapter 15: glides for further discussion of glides, and chapter 55: onsets for an expanded treatment of onsets.

2.3 The Syllable Contact Law

Another sonority-based principle active in many languages is the Syllable Contact Law (SCL). Some seminal references are Hooper (1976), Murray and Vennemann (1983), and Clements (1990).


Table 49.4 Alternations motivated by the Syllable Contact Law

Process                          Illustration   Language          Example
coda weakening                   g.n → w.n      Hausa             /hagni/ → [haw.ni] ‘left side’
onset strengthening              k.l → k.t      Kyrgyz            /konok-lar/ → [konok.tar] ‘guest-pl’
(desonorization)                 l.l → l.d      Kazakh            /kol-lar/ → [kol.dar] ‘hand-pl’
                                 z.l → z.d      Kazakh            /koŋɨz-lar/ → [koŋɨz.dar] ‘bug-pl’
                                 z.m → z.b      Kazakh            /koŋɨz-ma/ → [koŋɨz.ba] ‘bug-int’
tautosyllabification             k.l → .kl      Germanic          [tek.lɪç] ‘daily’, [e.klɪç] ‘disgusting’
gemination                       b.r → b.br     Latin > Italian   /labɾum/ → [lab.bɾo] ‘lip’
epenthesis                       n.r → n.dr     Spanish           /beniɾ-a/ → [ben.dɾa] ‘(s/he) will come’
regressive assimilation          k.m → ŋ.m      Korean            /kuk-mul/ → [kuŋ.mul] ‘broth’
progressive total assimilation   g.n → g.g      Pali              /lag-na/ → [lag.ga] ‘attach (past part)’
regressive total assimilation    n.l → l.l      Korean            /non-li/ → [nol.li] ‘logic’
anaptyxis                        p.r → pV.r     Ho-Chunk          /hipres/ → [hiperes] ‘know’
metathesis                       d.n → n.d      Sidamo            /gud-nonni/ → [gun.donni] ‘they finished’

More recent treatments of the SCL as a family of OT constraints include Davis (1998), Gouskova (2004), and Zec (2007). The following are two typical formulations of the SCL:

(17) Syllable Contact Law
a. A heterosyllabic juncture of two consonants A.B is more harmonic (ideal) the higher the sonority of A and the lower the sonority of B.
b. In any heterosyllabic sequence of two consonants A.B, the sonority of A is preferably greater than the sonority of B.
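A minimal rendering of formulation (17b) as a comparator, assuming the five-category indices of (3); the scoring convention is this sketch's, not the chapter's:

```python
# (17b): across a heterosyllabic juncture A.B, sonority should fall from A to B.
SI = {"O": 1, "N": 2, "L": 3, "G": 4}

def contact_score(a, b):
    """SI(A) - SI(B): higher = better syllable contact; negative = a dispreferred rise."""
    return SI[a] - SI[b]

print(contact_score("L", "O"))  # +2: a juncture like [l.k], a welcome fall
print(contact_score("O", "L"))  # -2: a juncture like [k.l], the marked rise
```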

By (17), for example, the sequence [l.k] is inherently less marked than [k.l]. Vennemann (1988: 50) provides a list of sample repair strategies that languages employ to improve satisfaction of the SCL. These are summarized in Table 49.4, as annotated by Davis (1998: 183) and supplemented by Seo (chapter 53: syllable contact), which offers more data and discussion. Based on a survey of 31 languages with SCL effects, her results give a better idea of the range of typological generalizations and their relative robustness. For example, Kazakh tolerates a /j-l/ juncture, as in [mandaj.lar] ‘foreheads’, since sonority drops slightly from /j/ to /l/. Kyrgyz, nevertheless, requires a greater fall in sonority and maps /aj-lar/ to [aj.dar] ‘moons’ (Davis 1998).

Examining the details of SCL phenomena in particular languages allows us to establish subtle differences in sonority ranks. For instance, Spanish attests words such as [peɾ.la] ‘pearl’ and [al.re.ðe.ðoɾ] ‘around’, yet the hypothetical sequence *[l.ɾ] systematically does not occur. When such a juncture would be created, an intrusive stop appears instead. This happens when the future tense is derived by dropping the infinitival theme vowel: /saliɾ/ ‘to leave’ → *[sal.ɾa] → [sal.dɾa] ‘(s/he) will leave’. These facts motivate the following sonority hierarchy among Spanish liquids, based on the second of the two definitions in (17): flap > lateral > trill (Bonet and Mascaró 1997; Parker 2008).

Nevertheless, there are problems with the SCL too. For example, it predicts that obstruent + sonorant junctures should be “fixed” more often than sonorant + sonorant clusters, ceteris paribus. The opposite in fact is true (chapter 53: syllable contact). Furthermore, in Akan both /O-N/ and /N-O/ sequences result in phonetic [NN]. The latter is a mirror image of Korean nasal assimilation (Table 49.4), even though this makes syllable contact worse: /óU-dú/ → [óU.nú?] ‘he should arrive’ (Schachter and Fromkin 1968).

2.4 Rhyme weight

It is well known that the heavier a syllable is, the more it tends to attract stress (Hayes 1980; Prince 1990). For example, open syllables are light, but closed syllables are usually bimoraic. Thus, they may be obligatorily stressed (see also chapter 57: quantity-sensitivity). Also, in some languages, rhymes headed by /e/ or /o/ attract stress more than those with /i/ and /u/, indicating that mid vowels have more weight than high vowels in these systems. Furthermore, the propensity for a coda consonant to project a mora is correlated with how sonorous it is (Zec 1988, 1995). An adequate theory of phonology should provide a unified (non-accidental) explanation for these facts. Appealing to a scalar feature like sonority allows us to do that. Based on case studies examining the relationship between segmental quality and syllable weight effects, the following hierarchy of vowel sonority has been posited (Kenstowicz 1997; de Lacy 2002, 2004, 2006, 2007a). Specific languages may choose to exploit different subsets among these natural classes: (18)

Relative sonority of vowels: a > e, o > i, u > ə > ɨ

To illustrate, Kobon vowels are divided into four groups in terms of stress assignment: /a/ > /e o/ > /i u/ > /ə ɨ/. In this case the potential distinction between /ɨ/ and /ə/ is underexploited. In unaffixed Kobon words, stress predictably falls on the most sonorous nucleus within a disyllabic window at the right edge (Davies 1980, 1981): (19)

a > e   [haŋˈgaβe] ‘blood’
a > i   [kʰi.ˈa] ‘tree (sp.)’
a > ə   [kʰəβəˈja] ‘rat’
a > ɨ   [ˈaɲɨm-ˈaɲɨm] ‘lightning’
o > u   [ˈmo.u] ‘thus’
o > i   [si.ˈoŋkʰ] ‘bird (sp.)’
o > ɨ   [gɨˈɾo-gɨˈɾo] ‘talk (mother pig to piglet)’
i > ə   [gaˈÈinəŋ] ‘bird (sp.)’
i > ɨ   [ˈjimbɨÈ] ‘very’
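The Kobon rule lends itself to direct statement. A minimal sketch follows, with arbitrary numeric ranks that merely encode the ordering in (18); the tie-breaking choice (leftmost wins) is this sketch's assumption, since the data above never show a tie:

```python
# Kobon stress (19): within a disyllabic window at the right edge,
# stress the most sonorous nucleus. Ranks just encode a > e,o > i,u > ə,ɨ.
VOWEL_SONORITY = {"a": 4, "e": 3, "o": 3, "i": 2, "u": 2, "ə": 1, "ɨ": 1}

def kobon_stress(nuclei):
    """Index of the stressed nucleus, given the word's vowels in order."""
    window = nuclei[-2:]  # the final two syllables
    best = max(window, key=lambda v: VOWEL_SONORITY[v])
    return len(nuclei) - len(window) + window.index(best)

print(kobon_stress(["a", "a", "e"]))  # 1 = penult, as in [haŋˈgaβe] ‘blood’ (a > e)
print(kobon_stress(["ə", "ə", "a"]))  # 2 = ultima, as in [kʰəβəˈja] ‘rat’ (a > ə)
```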

The generalization in (18) and (19) is that vowels which are more peripheral in the acoustic space are more sonorous than central ones.


Furthermore, within these two sets, segments involving a lower jaw configuration outrank their higher counterparts (Coetzee 2006). At the bottom of this hierarchy, /ə/ is higher in sonority than /ɨ/. This is due to languages such as Lushootseed, in which /ə/ can be stressed (unlike English) when it is the only nucleus in a root. Stress falls on the first “full” vowel of the stem; otherwise on the first schwa (Urbanczyk 2006): (20)

Stressable /ə/ in Lushootseed
[ˈʔitut] ‘sleep’, [dzəˈxixʷ] ‘creek’, [ˈtʃuqʷu-d] ‘to whittle something’, [tʃəˈgʷas] ‘wife’, [k’əˈdaju] ‘rat’, [ˈdʒəsəd] ‘foot’, [ˈbətʃ] ‘fall down’

Other languages in which sonority is crucial to stress assignment include Pichis Ashéninka (Payne 1990; Hayes 1995), Komi (Hayes 1995; de Lacy 1997), and Finnish (Anttila 1995; Hayes 1995). However, a reviewer notes that contrastive /ə/ in languages like Lushootseed may be different in quality from phonetic [ə] resulting from reduction in English and analogous languages. For example, the former is probably longer in duration than the latter. This is a valid point that must be controlled for in cross-linguistic comparisons of this sort. For Lushootseed, Urbanczyk posits a phonemic /ə/ in underlying forms. In motivating a constraint against stressed schwas she writes:

*ə́ has the distributional hallmarks of a markedness constraint because there are languages which never stress schwa, languages which avoid stressing schwa, and languages which permit schwa, along with other vowels, to be stressed, but no language enforces the stressing of schwa in preference to other vowels. (Urbanczyk 2006: 210)

She then lists other Salishan languages in which this prohibition is active (2006: 211, fn. 24). See chapter 26: schwa for further discussion of schwa in general. Concerning the relative weight of coda consonants, the typological range of languages is also dependent on sonority. Table 49.5 is adapted from Zec (1995, 2007). The generalization is that if a lower-sonority class is moraic in a particular language, then all higher-sonority categories are also moraic in syllable-final position. Zec (2007) knows of no language in which coda liquids count as heavy but nasals do not; she considers this an accidental gap.

Table 49.5 Inventories of moraic segments

Natural classes contributing to syllable weight   Languages
vowels only                                       Fijian, Halh Mongolian, Lardil, Yidiny
vowels and liquids                                ?
vowels, liquids, and nasals                       Gonja, Kwakiutl, Lithuanian, Tiv
vowels, liquids, nasals, and obstruents           Egyptian Arabic, English, Latin, Maithili


In addition to stress attraction, other diagnostics for consonant moraicity are: (1) the ability to bear a contrastive tone (Tiv; Zec 1995), (2) prosodic minimality (Fijian; Dixon 1988), and (3) blocking of processes such as vowel reduction (Maithili; Hayes 1995) (see also chapter 33: syllable-internal structure). (21) gives sample data from the three attested language types in Table 49.5: (21)

a. Fijian (Dixon 1988): Stress the syllable containing the penultimate mora (closed syllables do not occur).
[ˈsiŋa] ‘day’, [ᵐbuˈtaʔo] ‘steal’, [ᵐbutaˈʔoða] ‘steal-trans’, [ʔiˈlaa] ‘know-trans’, [ˈraiða] ‘see-trans’, [ˈlu.a] ‘vomit (vb)’

b. Tiv (Zec 1995): Only sonorant consonants occur in codas, where they bear tone.
[bág] ‘salt’, [fá!3] ‘rainy season’, [rùmù5] ‘agreed, confessed’

c. Egyptian (Cairene) Arabic (Hayes 1995): Stress the ultima if superheavy (trimoraic), otherwise the penult if heavy, otherwise the antepenult.
[kaˈtabt] ‘I wrote’, [hadʒˈdʒaːt] ‘pilgrimages’, [ˈbeːtak] ‘your (masc sg) house’, [kaˈtab.ta] ‘you (masc sg) wrote’, [muˈdar.ris] ‘teacher’, [ʔinˈkasara] ‘it got broken’, [ˈkataba] ‘he wrote’

If fricatives are more sonorous than stops (§2.1), by implication some languages should exist in which fricatives occur in coda position but stops do not. This is exemplified by Panobo. Syllable-final consonants include glides, nasals, and fricatives. However, this is complicated by the fact that the flap /ɾ/ occurs in onsets, yet not in codas. Evidence that coda consonants are moraic in Panobo is that in word-final position they attract stress. Otherwise, the default quantity-sensitive foot type, a moraic trochee, assigns stress to the penultimate syllable (Parker 1992):

(22) Heavy final syllables in Panobo
[ˈatsa] ‘manioc’, [kaˈnoti] ‘bow (weapon)’, [jaˈwiœ] ‘opossum’, [tahˈpõŋ] ‘root’, [pihˈk]1] ‘(they) will eat’

3 The Sonority Dispersion Principle

Clements (1990) proposes an approach to syllable phonotactics that is also based on sonority. In his model, syllables are divided into two parts.


The initial demisyllable consists of any onset consonants (if present) plus the nucleus, and the final demisyllable contains the nucleus plus the coda, i.e. the rhyme. The term demisyllable is borrowed from Fujimura and Lovins (1978). The nucleus crucially resides in both demisyllables simultaneously. For example, the word /plom/ contains the demisyllables /plo/ and /om/. The essence of the Sonority Dispersion Principle (SDP) is that initial demisyllables are preferred when their constituents are maximally and evenly dispersed in sonority; e.g. /ta/. The same tendency is inverted for final demisyllables, favoring open rhymes (those ending with a vowel). More precisely, initial demisyllables of the same length (number of segments) are more harmonic to the degree that they minimize D in (23) below. Conversely, final demisyllables are more harmonic to the degree that they maximize D, all else being equal. The formula for D comes from the realms of physics and geometry, where it governs the distribution of mutually repelling forces in potential fields (like electrons). Its linguistic use originates with the work of Liljencrants and Lindblom (1972) on perceptual distance between segments in the acoustic vowel space. Hooper (1976) and Vennemann (1988: 13–14) anticipate its application to sonority and syllable structure. (23)

Sonority Dispersion Principle

D = ∑ 1/dᵢ²  (summing over i = 1, …, m)

where dᵢ = the distance between the sonority indices of each pair of segments, and m = the number of pairs of segments (including non-adjacent ones), where m = n(n − 1)/2, and where n = the number of segments.

Clements (1990: 304) paraphrases (23) as follows: “D . . . varies according to the sum of the inverse of the squared values of the sonority distances between the members of each pair of segments within” a demisyllable. D, then, is the reciprocal of dispersion. To illustrate the application of (23), Clements assumes a sonority scale with the five categories from (3):

(24)              sonority index
vowels      (V)   5
glides      (G)   4
liquids     (L)   3
nasals      (N)   2
obstruents  (O)   1

When D is computed for demisyllables containing exactly one or two consonants, it yields the following values (ignoring types that violate the SSP): (25)

Sonority Dispersion demisyllable values

a.  OV  = .06   most natural onset
    NV  = .11
    LV  = .25
    GV  = 1.00  least natural onset

b.  OLV = .56
    OGV = 1.17
    ONV = 1.17
    NGV = 1.36
    NLV = 1.36
    LGV = 2.25

c.  VO  = .06   least natural coda
    VN  = .11
    VL  = .25
    VG  = 1.00  most natural coda

d.  VLO = .56
    VGO = 1.17
    VNO = 1.17
    VGN = 1.36
    VLN = 1.36
    VGL = 2.25

In (25a) the SDP favors a single syllable-initial consonant that maximizes the sonority slope between the onset and the vowel. Therefore, it correctly predicts that the preferred syllable of type CV has an onset consonant as low in sonority as possible (D = .06). This results in the following scale of relative unmarkedness: /ta/ > /na/ > /la/ > /ja/. This is foreshadowed in the discussion of Sanskrit reduplication (§2.1). Recall that the pattern there demonstrates that fricatives can outrank stops in sonority. Therefore, not all obstruents are necessarily equal in sonority (see §4).

Summarizing thus far, a unique consequence of the formula for D is that in an initial demisyllable of two segments (CV), the segments should differ from each other in sonority as much as possible. This is somewhat analogous to the MSD approach for onset clusters (§2.2). However, when the initial demisyllable contains three segments (CCV), what matters (given D) is that the aggregate total of the sonority distances between all of these together be maximized. This is accomplished by spacing apart the segments as evenly as possible (in sonority). This results in the best evaluation for OLV in (25b), since it has the lowest obtained value (.56). This is because liquids fall precisely midway between obstruents and vowels in terms of their sonority indices in Clements’s five-category scale in (24).

As evidence for the SDP, Clements notes that underlyingly French permits complex demisyllables of the type OLV only. However, in surface forms, some instances of OGV also exist. This raises an important typological question: which onset cluster is universally unmarked, OL or OG? On the one hand (as noted in §2.2), the MSD principle predicts that some languages should permit OG but not *OL. This seems to be correct, but see the caveats in §2.2. On the other hand, the SDP claims that OL is preferred. One piece of evidence that could help resolve this would be an alternation mapping underlying OLV to OGV, or vice versa. Unfortunately, no such process has yet been observed. A cross-linguistic survey documenting the number of languages with one type of cluster but not the other would also be enlightening. It may be that both kinds of onsets (OG and OL) need to be optimal simultaneously, i.e. in the grammars of different languages. Rod Casali and Ken Olson (personal communication) note that apparent OG-only languages are especially common in Africa.

Finally, the SDP assigns an equal evaluation to the two demisyllable types OGV and ONV in (25b). This may be problematic, since many languages exhibit the initial sequence OG, but not *ON. For example, Table 49.3 mentions Gizrra, Kurdish, Mono, Panobo, and Spanish. However, Clements does not claim that demisyllables of the same rank necessarily co-occur in any language containing one of them. At the same time, no languages appear to allow ONV but not *OGV. If this is a systematic gap, it is troubling for the predictions of the SDP.


4 The complete sonority hierarchy

Perhaps no issue in phonological theory has led to more competing proposals than the internal structure of the sonority hierarchy, i.e. the numbers and types of natural classes, and their corresponding ranks. Parker (2002) notes that more than 100 distinct sonority scales are found in the literature. The purpose of this section is to lay out several desirable characteristics that a full and final sonority hierarchy should possess, and then present one specific model that arguably comes closest to fulfilling those goals. Briefly, an adequate sonority scale should display the a priori traits in (26). In principle these criteria apply not just to sonority, but to all phonological features, that is, classic binary features like [±voice], [±round], etc. (26)

All else being equal, an ideal sonority scale would have these characteristics:
a. Universal: It potentially applies to all languages.
b. Exhaustive: It encompasses all categories of speech sounds.
c. Impermutable: Its rankings cannot be reversed (although they may be collapsed or ignored).
d. Phonetically grounded: It corresponds to some consistent, measurable physical parameter shared by all languages.

Each of the points in (26) will now be discussed. First, ideally we can establish a single, unique sonority hierarchy to analyze all known languages. This is not to say that any particular language actually exploits every one of the natural class rankings in the sonority scale. On the contrary, it would be quite amazing (although fortuitous) to discover such a case. Nevertheless, the explanatory power of sonority is maximized if we ascribe it to UG, making it equally available to all humans.

Second, an adequate theory of sonority should include every known type of phonological segment. Many hierarchies omit recalcitrant natural classes such as glottal consonants (/h/ and /ʔ/), affricates, etc., perhaps because of their inherent complications. Such scales then cannot apply to all languages. This undermines their universality.

Third, the rankings in the sonority scale should be impermutable. This is a beneficial characteristic since it is the most restrictive hypothesis possible, i.e. it severely limits the types of processes directly attributable to sonority. In addition to avoiding overgeneration of non-attested language types, impermutability makes claims about sonority easier to falsify. This in turn reduces the danger of circular argumentation. For example, once it is established that laterals, for instance, are more sonorous than nasals, the entailment is that there is no language in which nasals pattern as more sonorous than laterals by the same criteria. At the same time, however, potential divisions between sonority ranks are frequently underexploited in many languages. See de Lacy (2002, 2004, 2006, 2007a) for a formal approach to “underspecification” of sonority classes in OT.

Fourth, in an ideal world we can show that sonority is based on concrete articulatory gestures and/or their acoustic counterparts. This is the topic of the next section.


Although no sonority scale to date perfectly fulfills every desideratum sketched above, one that perhaps comes closest is that of Parker (2008), reproduced below:

(27) Final hierarchy of relative sonority

Natural class                           Sonority index
low vowels                              17
mid peripheral vowels (not [ə])         16
high peripheral vowels (not [ɨ])        15
mid interior vowels ([ə])               14
high interior vowels ([ɨ])              13
glides                                  12
rhotic approximants ([ɹ])               11
flaps                                   10
laterals                                9
trills                                  8
nasals                                  7
voiced fricatives                       6
voiced affricates                       5
voiced stops                            4
voiceless fricatives (including [h])    3
voiceless affricates                    2
voiceless stops (including [ʔ])         1
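As discussed at the end of this section, individual languages typically conflate adjacent ranks of (27) into coarser classes such as those in (3). A minimal sketch of that compression step; the groupings follow (27)'s class labels, but the function itself is illustrative rather than a proposal from the chapter:

```python
# Conflating the 17-point hierarchy in (27) down to the five classes of (3)/(24).
def conflate(rank):
    """Map a fine-grained sonority index (1-17) onto V > G > L > N > O."""
    if rank >= 13: return 5   # all vowels (indices 13-17)
    if rank == 12: return 4   # glides
    if rank >= 8:  return 3   # liquids: rhotic approximants, flaps, laterals, trills
    if rank == 7:  return 2   # nasals
    return 1                  # obstruents (indices 1-6)

# A language ignoring the flap/trill split treats both as plain liquids:
assert conflate(10) == conflate(8) == 3   # flaps (10) and trills (8) -> liquids
```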

Space does not permit a detailed justification of (27). Nevertheless, to highlight a few positive aspects of this scale, the evidence for most of the natural classes is fairly robust and secure. Parker (2002, 2008) summarizes the debates and provides at least one argument to motivate every ranking in this hierarchy (every pair of adjacent categories). Much of this is reviewed in §2 of this chapter.

To give another example, Koine (Ancient) Greek permits the clusters /pn/ and /kn/ (/pniktos/ ‘(things) strangled’, /kne.o/ ‘have itching, tickled’; Mounce 1993), but proscribes */bn/ and */dn/. This can be explained as an MSD effect if voiced stops are closer to nasals (in sonority) than voiceless stops are.

Furthermore, flaps are higher in sonority than trills in Spanish, as established by the SCL in §2.3. Three other facts confirm this. First, in word-initial position the contrast between /r/ and /ɾ/ is neutralized to /r/ ([rana] ‘frog’). Second, in codas it is neutralized in favor of /ɾ/: [tʃaɾ.laɾ] ‘to chat’. These two points follow from the SDP (Bonet and Mascaró 1997). Third, /ɾ/ and /l/ appear as the second member of complex onsets, yet /r/ does not (see (5)). This is another MSD effect if /r/ is less sonorous than /ɾ/ and /l/ (Baković 1994). These distributional facts indicate that liquids do not always pattern as a monolithic class in Spanish.

Finally, the rhotic approximant /ɹ/ is higher in sonority than /l/ in English: (1) the contrast between Carl (one syllable) vs. caller (two syllables) follows from the SSP if /ɹ/ outranks /l/ (Borowsky 1986); (2) /ɹ/ is the default epenthetic coda in Eastern Massachusetts speech (McCarthy 1993). This follows from the SDP if /ɹ/ is the most sonorous consonant available in this position. And (3) syllabic /ɹ/ may bear stress (bird, curtain), but /l/ never does (Zec 2003).

Another strength of the scale in (27) is its breadth, i.e. the number of different types of segment classes it encompasses. Nevertheless, it is still not exhaustive, because it leaves out a few rarer kinds of sounds, such as clicks and implosives. Since no known sonority hierarchy includes these, more research in this area would be welcome. A third advantage of (27) is that it has been rigorously tested and found to provide a good fit with all phonetic segments in seven specific languages. This is discussed in §5.

However, a few problems persist. For example, the placement of affricates between stops and fricatives is a controversial issue, remaining open to disagreement. Many scales either leave affricates out entirely or group them with plosives, using a term such as stops. Such proposals may simply assume that affricates behave phonologically like stops, as they do in many languages. For instance, Kang (chapter 95: loanword phonology) concludes that affricates are just stops with a special place specification, e.g. [strident]. See chapter 16: affricates for more discussion of affricates in general.

Similarly, the ranking of voiced stops over voiceless fricatives is harder to justify than most aspects of this hierarchy. A major reason for this is that many languages require consonant clusters to agree in voicing. Therefore, crucial diagnostic examples are rare, but one such token is the English word midst. Since this form is monosyllabic, /d/ is higher in sonority than /s/ by the SSP since /d/ is closer to the nucleus.

Finally, the question of whether glottal consonants are sonorants or obstruents is also contested. Clearly /h/ and /ʔ/ pattern phonologically with prototypical sonorants in some languages, yet behave like obstruents in others. In (27) they are classified as obstruents. One piece of evidence supporting this is that in Panobo, /h/ groups with /β p t/ in exclusively obstruent + glide clusters (see (16)). Also, in many languages [ʔ] is inserted as a default onset, where segments of low sonority are preferred by the SDP (Lombardi 2002). Finally, in the P-base sample of 549 languages, there are 65 distinct phonological processes in which /h/ and/or /ʔ/ pattern solely with consonants that are unambiguously obstruents. In 21 other cases they group with sonorants (Mielke 2008; thanks to Jeff Mielke, personal communication, for these counts).

In the scale in (27) the tendency is obviously to “split” rather than to “lump together” natural classes. The motivation for this is as follows. There is ample evidence that fine-grade distinctions in sonority need to be made in some languages, such as between fricatives and stops in Sanskrit (§2.1). UG then must allow for these options, and hence the potential exists for other languages to exploit them as well. If we start with a hierarchy that assigns a unique rank to every distinct manner of articulation (like (27)), it is a trivial matter to formally compress (conflate) ranks together in order to analyze languages not invoking those splits. This procedure applies to every language in one way or another. However, if we assume a maximal sonority hierarchy with just the five groups in (3), no mechanism exists to “decompose” these, making more narrow distinctions in the scale when necessary. Consequently, only a fully detailed hierarchy such as (27) is flexible enough to generate the range of variation attested among the languages of the world with respect to processes involving sonority. Nevertheless, based on acoustic studies of many languages, Zhang (2001) and Gordon (2006) deny that a universal sonority scale is theoretically the most parsimonious option. Rather, they reject the existence of invariant sound classes.
Gordon, for instance, concludes that syllable weight effects are not a unified phenomenon. He claims that nasals, for example, display slightly different phonetic behavior from one language to another. This can then influence their phonological patterning in terms of sonority.

5 The physical substance of sonority

As summarized in §2, the function of sonority in phonological systems is fairly well understood. Nevertheless, these phenomena raise an important, related question that has provoked much contention and speculation: is there any coherent notion of sonority grounded in evidence external to the phonotactic facts that sonority is assumed to account for? In other words, what is the articulatory, acoustic, and/or perceptual source of sonority in the speech signal?

To date at least 98 different correlates of sonority have been posited, documented in Parker (2002). The most frequently proposed phonetic definition of sonority is probably openness (of the vocal tract) or (supralaryngeal) aperture (Bloomfield 1914; Jespersen 1922; Goldsmith 1990; Kirchner 1998), or its inverse, (supraglottal) stricture, closure, impedance, etc. (Halle and Clements 1983; Kenstowicz 1994; Hume and Odden 1996). However, notions such as impedance are difficult to quantify.

A more promising correlate of sonority is amplitude/intensity, or its perceptual counterpart, loudness (Bloomfield 1914; Laver 1994; cf. §1 and see also chapter 98: speech perception and phonology). Recently a major instrumental study was carried out measuring relative sound levels or RMS (root mean square) intensity of all phonemes of Peruvian Spanish, Cusco Quechua, and Midwestern American English (Parker 2008). See Jany et al. (2007) for a similar investigation of four other languages (Egyptian Arabic, Hindi, Mongolian, and Malayalam). In Parker (2008), the obtained intensity values for all segments yield an overall mean Spearman’s correlation of .91 with the sonority indices proposed in (27). Given those results, it is proposed there that the best way to characterize the physical basis of sonority is via a linear regression equation such as (28) below. This is calculated from the mean intensity measurements of all English coda consonants pertaining to nine of the natural classes from (27). These were pronounced five times each by five male native speakers of English. (28)

sonority = 13.9 + .48 × dB (dB = decibel; r² = .95)
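Applied as a predictor, (28) is simple arithmetic. In the sketch below the relative-dB inputs are invented for illustration; in the study, dB is the segment's intensity minus that of the reference vowel /ɑ/, as explained in the next paragraph:

```python
# The regression line in (28): estimated sonority index from relative intensity.
def predicted_sonority(relative_db):
    return 13.9 + 0.48 * relative_db

print(predicted_sonority(0))    # 13.9: the intercept (a coda as loud as the reference /ɑ/)
print(predicted_sonority(-10))  # 9.1: ~10 dB below /ɑ/ lands near the laterals (index 9)
print(predicted_sonority(-25))  # 1.9: ~25 dB below /ɑ/ lands near the voiceless obstruents
```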

The formula in (28) predicts an estimated sonority index based on a hypothetical intensity value. It characterizes the best-fitting line corresponding to the relative sonority of English coda consonants in phrasally stressed words. The obtained intensity value of each of these segments was compared to that of a stressed, utterance-initial low vowel (/ɑ/) in a fixed carrier sentence. In (28) the Y intercept is 13.9. This is the projected value of Y (sonority) when X (dB) equals 0. Here it is significantly higher than the theoretical null value. This is because the obtained intensity of the reference vowel /ɑ/, whose sonority index is 17, was subtracted from that of the target consonant for each utterance measured. This is a type of normalization procedure often performed to control for random fluctuations in loudness across speakers and tokens. The slope in (28), .48, indicates the rate of change in the dependent variable Y (sonority) per unit change in the independent variable X (dB).


Its obtained value allows us to approximate the mathematical nature of the relationship between intensity and sonority. Specifically, for every decibel by which the relative sound level is increased, the corresponding sonority rank increases by about .48 units (for this sample of five English speakers). Also in (28), r² = .95. This is the coefficient of determination. It indicates the proportional reduction of error gained by using the linear regression model. Given this r² value, we can conclude that the single factor sonority accounts for (predicts) about 95 percent of the systematic variability in the intensity measurements of that dataset.

Compared with previous accounts of sonority, the definition in (28) has several advantages (Parker 2008): (1) it is precise; (2) it is non-arbitrary; (3) it is phonetically grounded; (4) it is empirically verifiable and replicable; (5) it can be calculated for other speakers and languages; and (6) the underlying methodology (regression) is compatible with different (competing) sonority scales. However, while studies of this type represent progress, some problems remain. For example, in Parker (2008) the majority of the mismatches between sonority ranks and segmental intensity values (in all three languages) involve the sonorant consonants, particularly the approximants (laterals and glides). The reason for this is not clear at this point and merits further investigation.

Finally, other researchers appeal to more functional aspects of the speech signal to avoid invoking sonority altogether. For example, building on the phonetically based work of Mattingly (1981) and Silverman (1995), Wright (2004: 35) reformulates the SSP as “a perceptually motivated and scalar constraint in which an optimal ordering of segments is one that maximises robustness of encoding of perceptual cues to the segmental make-up of the utterance.” Similarly, Ohala (1990a) claims that what drives the phonological phenomena discussed here is not really sonority but simply a need for adequate modulation in the acoustic wave.

6 Conclusion

Despite its problems, sonority makes sense. If it did not exist, it would be invented (Parker 2008). In this chapter a number of important issues have been examined. Nevertheless, certain topics need to be left for future work. For example, in §3 a possible contradiction between the claims of the MSD approach and the SDP is noted, involving the relative unmarkedness of OGV vs. OLV demisyllables. An in-depth typological study of these clusters would be helpful. Also, more attention should be given to the phonetic and/or functional bases of principles such as the SSP, the SCL, and the MSD. The question of why these hold true is potentially intriguing. Finally, another interesting point not discussed here is whether sonority scales are necessarily the same across different domains, such as phonotactics vs. the calculation of syllable weight.

ACKNOWLEDGMENTS

Thanks to the following people for helpful comments: Marc van Oostendorp, Keren Rice, Michael Boutin, Mike Cahill, Ken Olson, two anonymous reviewers, and the students in my Frontiers in Phonology course taught at the Graduate Institute of Applied Linguistics in the fall of 2009. This work was partially funded by NSF grant #0003947.


REFERENCES

Anttila, Arto. 1995. Deriving variation from grammar: A study of Finnish genitives. Unpublished ms., Stanford University (ROA-63). Baertsch, Karen. 2002. An optimality theoretic approach to syllable structure: The split margin hierarchy. Ph.D. dissertation, Indiana University. Baković, Eric. 1994. Strong onsets and Spanish fortition. MIT Working Papers in Linguistics 23. 21–39. Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst (ROA-259). Blevins, Juliette. 1995. The syllable in phonological theory. In John A. Goldsmith (ed.) The handbook of phonological theory, 206–244. Cambridge, MA & Oxford: Blackwell. Blevins, Juliette. 2006. Syllable: Typology. In Keith Brown (ed.) Encyclopedia of language and linguistics. 2nd edn. vol. 12, 333–337. Amsterdam: Elsevier. Bloomfield, Leonard. 1914. An introduction to the study of language. New York: Henry Holt & Co. Bodegraven, Nico van & Elly van Bodegraven. 2005. Phonology essentials: Gizrra language. In Parker (2005), 191–210. Bonet, Eulalia & Joan Mascaró. 1997. On the representation of contrasting rhotics. In Fernando Martínez-Gil & Alfonso Morales-Front (eds.) Issues in the phonology and morphology of the major Iberian languages, 103–126. Washington, DC: Georgetown University Press. Borowsky, Toni. 1986. Topics in the lexical phonology of English. Ph.D. dissertation, University of Massachusetts, Amherst. Brosses, Charles de. 1765. Traité de la formation méchanique des langues, et des principes physiques de l’étymologie, vol. 1. 130–133. Paris: Saillant, Vincent, Desaint. Cho, Young-mee Yu & Tracy Holloway King. 2003. Semisyllables and universal syllabification. In Féry & van de Vijver (2003), 183–212. Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row. Clements, G. N. 1990. The role of the sonority cycle in core syllabification. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 283–333. Cambridge: Cambridge University Press. Coetzee, Andries W. 2006. Variation as accessing “non-optimal” candidates. Phonology 23. 337–385. Cser, András. 2003. The typology and modelling of obstruent lenition and fortition processes. Budapest: Akadémiai Kiadó. Davies, John. 1980. Kobon phonology. (Pacific Linguistics B68.) Canberra: Australian National University. Davies, John. 1981. Kobon. Amsterdam: North-Holland. Davis, Stuart. 1998. Syllable contact in Optimality Theory. Korean Journal of Linguistics 23. 181–211. de Lacy, Paul. 1997. Prosodic categorisation. M.A. thesis, University of Auckland (ROA-236). de Lacy, Paul. 2002. The formal expression of markedness. Ph.D. dissertation, University of Massachusetts, Amherst. de Lacy, Paul. 2004. Markedness conflation in Optimality Theory. Phonology 21. 145–199. de Lacy, Paul. 2006. Markedness: Reduction and preservation in phonology. Cambridge: Cambridge University Press. de Lacy, Paul. 2007a. The interaction of tone, sonority, and prosodic structure. In de Lacy (2007b), 281–307. de Lacy, Paul (ed.) 2007b. The Cambridge handbook of phonology. Cambridge: Cambridge University Press.


Dixon, R. M. W. 1988. A grammar of Boumaa Fijian. Chicago: University of Chicago Press. Engelenhoven, Aone van. 2004. Leti, a language of Southwest Maluku. Leiden: KITLV Press. Féry, Caroline & Ruben van de Vijver (eds.) 2003. The syllable in Optimality Theory. Cambridge: Cambridge University Press. Fujimura, Osamu & Julie B. Lovins. 1978. Syllables as concatenative phonetic units. In Alan Bell & Joan B. Hooper (eds.) Syllables and segments, 107–120. Amsterdam: North-Holland. Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell. Gordon, Matthew. 2006. Syllable weight: Phonetics, phonology, typology. London: Routledge. Gouskova, Maria. 2004. Relational hierarchies in Optimality Theory: The case of syllable contact. Phonology 21. 201–250. Greenberg, Joseph H. 1978. Some generalizations concerning initial and final consonant clusters. In Joseph H. Greenberg, Charles A. Ferguson & Edith A. Moravcsik (eds.) Universals of human language, vol. 2: Phonology, 243–279. Stanford: Stanford University Press. Guderian, Brad & Toni Guderian. 2005. Organised phonology data supplement, Koluwawa language. In Parker (2005), 75 –86. Halle, Morris & G. N. Clements. 1983. Problem book in phonology. Cambridge, MA: MIT Press. Hankamer, Jorge & Judith Aissen. 1974. The sonority hierarchy. Papers from the Annual Regional Meeting, Chicago Linguistic Society: Parasession on natural phonology, 131–145. Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT. Published 1985, New York: Garland. Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press. Hironymous, Patricia. 1999. Selection of the optimal syllable in an alignment-based theory of sonority. Ph.D. dissertation, University of Maryland at College Park. Hock, Hans Henrich. 1985. Regular metathesis. Linguistics 23. 529 –546. Hooper, Joan B. 1976. An introduction to natural generative phonology. New York: Academic Press. Hume, Elizabeth & David Odden. 1996. Reconsidering [consonantal]. Phonology 13. 345–376. Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts, Amherst. Published 1988, New York: Garland. Jany, Carmen, Matthew Gordon, Carlos M. Nash & Nobutaka Takara. 2007. How universal is the sonority hierarchy? A cross-linguistic acoustic study. Paper presented at the 16th International Congress of Phonetic Sciences, Saarbrücken. Jespersen, Otto. 1904. Lehrbuch der Phonetik. Leipzig & Berlin: Teubner. Jespersen, Otto. 1922. A Modern English grammar on historical principles. Part 1: Sounds and spelling. 3rd edn. Heidelberg: Carl Winter’s Universitätsbuchhandlung. Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press. Kawasaki, Haruko. 1982. An acoustical basis for universal constraints on sound sequences. Ph.D. dissertation, University of California, Berkeley. Kämpfe, Hans-Rainer & Alexander P. Volodin. 1995. Abriß der tschuktschischen Grammatik auf der Basis der Schriftsprache. Wiesbaden: Harrassowitz. Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell. Kenstowicz, Michael. 1997. Quality-sensitive stress. Rivista di Linguistica 9. 157–187. Kenstowicz, Michael & Charles W. Kisseberth. 1973. Unmarked bleeding orders. Studies in the Linguistic Sciences 1(1), 8–28. Reprinted in Charles W. Kisseberth (ed.) 1973. Studies in generative phonology, 1–12. Champaign, IL: Linguistic Research. Kirchner, Robert. 1998. 
An effort-based approach to consonant lenition. Ph.D. dissertation, UCLA (ROA-276).


Ladefoged, Peter. 1975. A course in phonetics. New York: Harcourt Brace Jovanovich. Laver, John. 1994. Principles of phonetics. Cambridge: Cambridge University Press. Lewis, M. Paul (ed.) 2009. Ethnologue: Languages of the world. 16th edn. Dallas: SIL International. Liljencrants, Johan & Björn Lindblom. 1972. Numerical simulation of vowel quality systems: The role of perceptual contrast. Language 48. 839 –862. Lombardi, Linda. 2002. Coronal epenthesis and markedness. Phonology 19. 219 –251. Mattingly, Ignatius G. 1981. Phonetic representation and speech synthesis by rule. In Terry Myers, John Laver & John Anderson (eds.) The cognitive representation of speech, 415–420. Amsterdam: North Holland. McCarthy, John J. 1993. A case of surface constraint violation. Canadian Journal of Linguistics 38. 169 –195. Mielke, Jeff. 2008. The emergence of distinctive features. Oxford: Oxford University Press. Morelli, Frida. 2003. The relative harmony of /s+stop/ onsets: Obstruent clusters and the Sonority Sequencing Principle. In Féry & van de Vijver (2003), 356–371. Mounce, William D. 1993. The analytical lexicon to the Greek New Testament. Grand Rapids, MI: Zondervan Publishing House. Murray, Robert W. & Theo Vennemann. 1983. Sound change and syllable structure in Germanic phonology. Language 59. 514 –528. Ohala, John J. 1974. Phonetic explanation in phonology. Papers from the Annual Regional Meeting, Chicago Linguistic Society: Parasession on natural phonology, 251–274. Ohala, John J. 1990a. Alternatives to the sonority hierarchy for explaining segmental sequential constraints. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26(2). 319–338. Ohala, John J. 1990b. There is no interface between phonology and phonetics: A personal view. Journal of Phonetics 18. 153–171. Ohala, Manjari. 1983. Aspects of Hindi phonology. New Delhi: Motilal Banarsidass. Olson, Kenneth S. 2005. The phonology of Mono. Dallas: SIL International & University of Texas at Arlington. Parker, Steve. 1989. The sonority grid in Chamicuro phonology. Linguistic Analysis 19. 3–58. Parker, Steve. 1992. Datos del idioma huariapano. Yarinacocha, Pucallpa, Peru: Ministerio de Educación & Instituto Lingüístico de Verano. Parker, Steve. 2002. Quantifying the sonority hierarchy. Ph.D. dissertation, University of Massachusetts, Amherst. Parker, Steve (ed.) 2005. Data papers on Papua New Guinea languages, vol. 47: Phonological descriptions of PNG languages. Ukarumpa, Papua New Guinea: Summer Institute of Linguistics. Parker, Steve. 2008. Sound level protrusions as physical correlates of sonority. Journal of Phonetics 36. 55 –90. Payne, Judith. 1990. Asheninca stress patterns. In Doris L. Payne (ed.) Amazonian linguistics: Studies in lowland South American languages, 185 –209. Austin: University of Texas Press. Pike, Kenneth L. 1943. Phonetics: A critical analysis of phonetic theory and a technic for the practical description of sounds. Ann Arbor: University of Michigan Press. Prince, Alan. 1990. Quantitative consequences of rhythmic organization. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26(2). 355–398. Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell. Regnier, Sue. 1993. Quiegolani Zapotec phonology. Work Papers of the Summer Institute of Linguistics, University of North Dakota Session 37. 37– 63. Ridouane, Rachid. 2008. 
Syllables without vowels: Phonetic and phonological evidence from Tashlhiyt Berber. Phonology 25. 321–359.


Schachter, Paul & Victoria Fromkin. 1968. A phonology of Akan: Akuapem, Asante, Fante. UCLA Working Papers in Phonetics 9. Selkirk, Elisabeth. 1984. On the major class features and syllable theory. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 107–136. Cambridge, MA: MIT Press. Sievers, Eduard. 1893. Grundzüge der Phonetik zur Einführung in das Studium der Lautlehre der indogermanischen Sprachen. Leipzig: Breitkopf & Härtel. Originally published 1881. Silverman, Daniel. 1995. Phasing and recoverability. Ph.D. dissertation, University of California, Los Angeles. Published 1997, New York: Garland. Smolensky, Paul. 1995. On the structure of the constraint component Con of UG (ROA-86). Steriade, Donca. 1982. Greek prosodies and the nature of syllabification. Ph.D. dissertation, MIT. Published 1990, New York: Garland. Steriade, Donca. 1988. Reduplication and syllable transfer in Sanskrit and elsewhere. Phonology 5. 73–155. Urbanczyk, Suzanne. 2006. Reduplicative form and the root-affix asymmetry. Natural Language and Linguistic Theory 24. 179 –240. Vance, Timothy J. 1987. An introduction to Japanese phonology. Albany: State University of New York Press. Vance, Timothy J. 2008. The sounds of Japanese. Cambridge: Cambridge University Press. Vennemann, Theo. 1988. Preference laws for syllable structure and the explanation of sound change: With special reference to German, Germanic, Italian, and Latin. Berlin: Mouton de Gruyter. Whitney, William Dwight. 1889. Sanskrit grammar, including both the classical language, and the older dialects, of Veda and Brahmana. 2nd edn. Cambridge, MA: Harvard University Press. Wichmann, Søren. 2002. Diccionario analítico del popoluca de Texistepec. México, D.F.: Universidad Nacional Autónoma de México. Wright, Richard. 2004. A review of perceptual cues and cue robustness. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 34–57. Cambridge: Cambridge University Press. Yuan, Jiahua. 1989. Hanyu fangyan gaiyao [An introduction to Chinese dialects]. 2nd edn. Beijing: Wenzi Gaige Chubanshe. Zec, Draga. 1988. Sonority constraints on prosodic structure. Ph.D. dissertation, Stanford University. Published 1994, New York: Garland. Zec, Draga. 1995. Sonority constraints on syllable structure. Phonology 12. 85 –129. Zec, Draga. 2003. Prosodic weight. In Féry & van de Vijver (2003), 123–143. Zec, Draga. 2007. The syllable. In de Lacy (2007b), 161–194. Zhang, Jie. 2001. The effects of duration and sonority on contour tone distribution: Typological survey and formal analysis. Ph.D. dissertation, University of California, Los Angeles. Published 2002, New York: Routledge.

50 Tonal Alignment

Pilar Prieto

1 Introduction

In recent decades, the issue of tonal alignment has been at the forefront of several phonological and phonetic debates in the analysis of intonation. Since the groundbreaking work of Bruce (1977), the autosegmental metrical approach to intonation proposed that intonational patterns were to be represented as autosegmental tone melodies (Pierrehumbert 1980; Beckman and Pierrehumbert 1986; Ladd 1996; and others). Given that melodies are independent from the segments which realize them in this theory (chapter 45: the representation of tone; chapter 14: autosegments), and since the tones are realized potentially over quite long strings, it is a central research issue to find a set of principles for mapping tones to segments. The term tonal alignment thus refers to the temporal implementation of fundamental frequency (F0) movements with respect to the segmental string. Tonal alignment has not only been used in crucial ways as an argument in favor of a given phonological framework, but has also been the focus of debate in itself. This notion has played an important role in current theories of intonational phonology, since relative alignment of tones with the segmentals has been shown to be a crucial piece of information when describing the phonological make-up of the melodic contour. This chapter reviews four important topics in the recent history of phonology in the discussion of which tonal alignment has been a crucial component.

One of the important issues in intonational phonology is the investigation of the acoustic correlates that encode intonational categories. Since the beginning of the autosegmental metrical approach to intonation, tonal alignment has been claimed to play a central role in encoding intonational contrasts. Pierrehumbert (1980) and Pierrehumbert and Steele (1989) showed that the timing of F0 peaks or valleys with segments functions contrastively in English, and that early-aligned pitch accents are phonologically distinct from late-aligned pitch accents. In the decades since the publication of these studies, a body of experimental research has shown that tonal alignment cues semantic distinctions in a number of languages and that it can be perceived in a near-categorical fashion (e.g. Kohler 1987; Niebuhr 2007 for German; D’Imperio and House 1997 and D’Imperio 2000 for Neapolitan Italian; Gili-Fivela 2009 for Pisa Italian; Pierrehumbert and Steele 1989 and Dilley 2007 for English). In §2 we will review recent experimental evidence that elucidates the role of tonal alignment in encoding intonational distinctions in a number of languages.

The relationship between tonal association and tonal alignment has been a central issue in the tonal representation debates within the autosegmental metrical theory of intonation. Though the autosegmental metrical representational proposal has met with considerable success in accounting for melodic patterns in a variety of languages, the literature on tonal representation has identified a few phenomena that resist transparent analysis. Two such phenomena have to do with the metrical part of the model and the standard interpretation of the relationship between phonological association and phonetic alignment. It has recently been claimed that the theoretical concept of starredness is somewhat unclear and that its definition cannot be based solely on phonetic alignment (Arvaniti et al. 2000; Prieto et al. 2005). In §3 we describe the standard view of the relationship between phonological association of tones and phonetic alignment and then review some recent proposals on the topic.

Another important goal of several models of intonation has been to develop a phonetic model of tonal alignment. Within these models, it is a central issue to determine what part of the variation in the realization of the tune-to-text mapping is due to phonetic implementation and what part is phonological and is accounted for in a phonological representation (either of the tone melodies or of prosodic or segmental anchors for tones). A body of work on tune–text alignment has shown that, apart from phonological distinctions in alignment, a variety of phonetic factors, such as tonal crowding, speech rate and syllable structure, influence the fine-grained patterns of F0 location in predictable ways. For example, it has been demonstrated that time pressure from the right-hand prosodic context (i.e. the proximity of an upcoming accent or boundary tones) is crucial in determining the location of H peaks (see e.g. Silverman and Pierrehumbert 1990 for English and Prieto et al. 1995 for Spanish). Recent work has shown that when such right-hand prosodic effects are excluded (i.e. when the tonal features under investigation are not in the vicinity of pitch accents or boundary tones), the alignment of F0 peak targets is consistently governed by segmental anchoring (Arvaniti et al. 1998 for Greek; Ladd et al. 1999 for English). Similarly, other work on production and perception supports the hypothesis that prosodic structure must play an essential part in our understanding of the coordination of pitch gestures with the segmentals and that listeners are able to employ these fine details of H tonal alignment due to syllable structure or within-word position to identify lexical items (D’Imperio et al. 2007b; Prieto et al., forthcoming). In §4 we review recent proposals regarding phonetic models of tonal alignment and the role of prosodic structure in the implementation of F0 tonal alignment patterns.

Finally, tonal alignment studies have also been used to test specific predictions by different phonological models of prosody and intonation. Arvaniti and Ladd (2009) provide a useful example of how a production study on alignment can be used to test specific predictions by target-based vs. configuration-based models of intonation (chapter 32: the representation of intonation).
As we will see below, Arvaniti and Ladd undertook a very detailed phonetic study of the intonation of Greek wh-questions and tested different predictions about tonal implementation. The F0 alignment data showed predictable adjustments in alignment depending on the location of adjacent tonal targets. The authors conclude that models that specify the F0 of all syllables, and models that specify F0 by superposing contour shapes for shorter and longer domains, cannot account for predictable variation without resorting to ad hoc tonal specifications, which, in turn, do not allow for phonological generalizations about contours applying to utterances of different lengths. In §5 we review the evidence coming from a variety of tonal alignment studies that test specific predictions from different phonological models of intonation.

In the following sections, we present and discuss each of these four topics, providing the relevant data and highlighting some of the unresolved issues.

2 The role of tonal alignment in distinguishing intonational categories

One of the key discoveries within work on intonation is the fact that tones in intonational languages are associated with either metrically prominent syllables (pitch accents) or prosodic edges (boundary tones). Many theories of intonational phonology thus draw a clear distinction between the two sorts of tonal units, i.e. tonal entities associated with prominent or metrically strong syllables and tonal entities associated with edges of prosodic domains. Pierrehumbert (1980), who initially developed the autosegmental metrical (AM) approach to intonation, argues that the English intonation system consists of an inventory of tonal units, each consisting of either one or two tones, which can be High (H) or Low (L) (see chapter 14: autosegments; chapter 116: sentential prominence in english). These tones can either be associated with metrically strong syllables (and represented with a *, i.e. H* and L*) or be associated with prosodic edges (and represented with a %, i.e. H% and L%). Tonal units can be monotonal or bitonal. In the case of tonal units associated with prominent syllables, or pitch accents, Pierrehumbert proposed a phonological inventory of six pitch-accent shapes for English (H*, L*, H*+L, H+L*, L*+H, L+H*), some of them encoding alignment differences. Crucially, the AM model started to make use of the star notation (*) in bitonal pitch accents to indicate tonal association with metrically strong syllables and relative alignment – see §3 for a review of the starredness concept. The autosegmental representations in (1) capture the fact that the LH shape is aligned differently in the two contrastive pitch accents exemplified in Figure 50.1. While L*+H has a low tone (L) on the stressed syllable and a high tone (H) trailing it, L+H* has a high tone on the stressed syllable with a low tone leading it:

(1) a. Only a millionaire
       L*+H
    b. Only a millionaire
       L+H*

In sum, an important proposal of the AM model of intonation, based on Bruce's (1977) analysis of the tonal alignment contrast between Accent I and Accent II in Swedish, is that pitch accent types can be phonologically distinguished by their relative alignment with the metrically prominent syllable. Pierrehumbert (1980) shows that tonal alignment functions contrastively in English and that early-aligned pitch accents are phonologically distinct from late-aligned pitch accents. Figure 50.1 shows two intonation patterns of the utterance Only a millionaire spoken with two different pitch accents on millionaire: the late-aligned pitch accent, which indicates incredulity or uncertainty (a), and the early-aligned pitch pattern, which indicates assertion (b).

Figure 50.1 The fundamental frequency contour of the utterance Only a millionaire spoken with two different pitch accents on millionaire: the late-aligned pitch accent, which indicates incredulity or uncertainty (a), and the early-aligned pitch pattern, which indicates assertion (b). The vertical cursor is placed at the [m] release in millionaire. Figure reproduced from Pierrehumbert and Steele (1989: 182)

In their seminal paper, Pierrehumbert and Steele (1989) performed an imitation task with the two intonation patterns of the above-mentioned utterance Only a millionaire (see Figure 50.1). They created a synthesized continuum of several steps of alignment between the two, and asked subjects to imitate the utterance. The results of the imitation task revealed the existence of two separate phonological categories. The authors argued that if the subjects had been able to reproduce the full range of the continuum in their imitations, peak alignment differences could be regarded as gradient. However, since they found that the distribution of peak alignment was by and large bimodal in the imitation data, they concluded that the distinction between early and late peak alignment was categorical.

Pierrehumbert and Steele's paper represented an important first step in a series of experimental investigations on the perception of tonal alignment (see chapter 98: speech perception and phonology). Since then, a body of experimental research has demonstrated that tonal alignment cues intonational meaning distinctions in a number of languages (e.g. Kohler 1987 and Niebuhr 2007 for German; D'Imperio and House 1997 and D'Imperio 2000 for Neapolitan Italian; Gili-Fivela 2009 for Pisa Italian; Dilley 2007 for English). The issue of whether a certain pair of intonational contrasts can be accompanied by categorical differences in meaning, and whether these contrasts are perceived in a discrete or a gradient fashion, has been an important research question in the field of intonation. A number of experimental methods have been used to study what is categorical or linguistic in intonation and what is paralinguistic and gradient (see the reviews in Gussenhoven 2004, 2006; also chapter 89: gradience and categoricality in phonological theory). In what follows we review recent studies that have provided evidence from a number of languages on the role of tonal alignment in encoding intonational distinctions. All in all, these articles provide robust experimental evidence for the claim that changes in the F0 alignment of peaks and valleys are especially salient and cue phonological distinctions across languages. This evidence has been generally interpreted as direct support for AM theory, as tonal alignment differences in this model are encoded phonologically at the pitch accent level.

Kohler's (1987) paper was the first to apply the Categorical Perception paradigm to alignment data and to show that alignment contrasts can be perceived categorically. The Categorical Perception paradigm involves firstly an identification/classification task, in which listeners have to categorize stimuli taken from a continuum, and secondly a discrimination task, in which listeners are asked to judge pairs of stimuli as being either the same or different. For perception to be considered categorical, a peak of discrimination is expected at the point in the acoustic domain that separates the two categories (for a review, see Dilley 2007). Kohler (1987) employed the complete paradigm to investigate the perception of a set of F0 contours in German involving rises, with a continuum created between early and medial peaks. He found that the early peak was associated with finality ("knowing," "coming to the end of an argument"), and the medial peak with openness ("observing," "starting a new argument"). The results of both tasks of the paradigm revealed categorical changes in the identification of early vs. medial peaks, with a discrimination maximum across the category boundary.

More recently, Niebuhr (2007) carried out a series of perception experiments with the same German alignment contrasts and showed that the function-based identification of the peak categories is influenced not only by peak synchronization, but also by peak shape and height. In general, though, his findings corroborate the existence of the two categories in German intonation and support the idea that the timing of the peak movements with regard to the accented vowel is important for their perceptual differentiation.
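The two-step logic of the paradigm can be made concrete with a small sketch. The following fragment (in Python, with invented response proportions rather than data from any of the studies cited here) locates the identification crossover along a hypothetical resynthesized alignment continuum and checks whether the discrimination maximum coincides with it – the diagnostic for categorical perception used in these studies.

```python
# Minimal sketch of the Categorical Perception diagnostic. The numbers
# below are invented for illustration, not taken from Kohler (1987),
# Niebuhr (2007), or Dilley (2007).

# Proportion of "early peak" responses for an 8-step peak-delay continuum
identification = [0.97, 0.95, 0.90, 0.72, 0.31, 0.08, 0.05, 0.03]

# Proportion of correct "different" judgments for each adjacent step pair
discrimination = [0.55, 0.58, 0.66, 0.89, 0.64, 0.57, 0.54]

# Identification crossover: the first adjacent pair straddling 50 percent
crossover = next(i for i in range(len(identification) - 1)
                 if identification[i] >= 0.5 > identification[i + 1])

# Discrimination maximum: the step pair that is discriminated best
peak_pair = max(range(len(discrimination)), key=discrimination.__getitem__)

# Categorical perception predicts that the two indices coincide
print(f"identification crossover between steps {crossover + 1} and {crossover + 2}")
print(f"discrimination maximum at pair {peak_pair + 1}-{peak_pair + 2}")
print("categorical pattern" if peak_pair == crossover else "no clear peak at boundary")
```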
Following Pierrehumbert and Steele’s (1989) investigation, a number of studies have examined the distinction between an early-aligned pitch accent (L+H*) and a late-aligned pitch accent (L*+H) in American English. In the most comprehensive study, Dilley (2007) conducted a series of perception experiments

6

Pilar Prieto

with the two pairs of accents attested in American English (H* and H+L*, and L* and L+H*), an identification task, two types of discrimination tasks and an imitation task. Evidence of discrimination maxima that aligned well with identification crossover points in the identification task demonstrated categorical perception for intonation and provided converging evidence with earlier results by Kohler (1987). Moreover, converging evidence for the categorical perception of intonation categories was obtained from the imitation study. Though Kohler (1987) and Dilley (2007) are advocates of the application of the Categorical Perception paradigm to intonation, few other studies have shown clear evidence of categorical perception, i.e. with a clear discrimination peak in the expected position. The discrimination functions observed differ between studies, and in the majority of cases no discrimination peaks appear at the category crossover point revealed by the identification test. One such example is described in Gili-Fivela’s (2009) article. She investigated the contrast between narrow focus and narrow contrastive focus in Pisa Italian, represented as H* and H*+L. In Pisa Italian, as in other languages, narrow contrastive focus is expressed through the use of retracted pitch peaks and an increase in pitch height. Gili-Fivela applied the Categorical Perception paradigm to the data, with both identification and discrimination tasks being performed, and also an imitation task. She manipulated both the alignment and scaling patterns of a rising pitch accent in narrow focus and a rising-falling pitch accent in contrastive narrow focus. The results showed that while there is a clear difference between a narrow focus pattern and a contrastive focus pattern in production, the contrast might not be categorically perceived, as the identification and discrimination functions do not correspond to an abrupt shift in identification aligned with a discrimination peak. Other studies have shown that the slope of the rise and the shape of the peak also contribute to tonal contrast identification. D’Imperio and House (1997) and D’Imperio (2000) investigated the distinction between questions and statements in Neapolitan Italian. In Neapolitan Italian, questions and statements are characterized by a rise in pitch that occurs in the vicinity of the accented syllable. The materials in D’Imperio and House (1997) consisted of a series of stimuli in which the F0 peak of a rising-falling pitch accent was shifted forward and backwards within the accented syllable. Neapolitan listeners performed an identification task in which they listened to the stimuli and then classified each of them as either a question or a statement. The results showed that questions and statements are primarily distinguished by the relative alignment of the rise in a rise-fall pattern in the accented syllable. In subsequent experiments using this same contrast, D’Imperio (2000) showed that both details of the temporal alignment of target tones and the shape of the peak contribute to the identification of the contrast between questions and statements in this language. Moreover, she found that syllable structure detail modifies acoustic target alignment but does not modify the crossover point between the two categories (for more details, see §4). New experimental paradigms have been recently applied to study the role of tonal alignment in spoken language processing. Chen et al. 
Chen et al. (2007) adopted the eye-tracking paradigm to investigate the role of pitch accent type and deaccentuation in the online processing of information status in British English (for a review of the eye-tracking paradigm applied to prosody research, see Watson et al. 2006, 2008). It was found that two types of pitch accents (H*L and L*HL) create a strong bias toward newness, whereas deaccentuation and the L*H pitch accent create a strong bias toward givenness. Watson et al. (2008) also used the eye-tracking paradigm to investigate whether the presence of a pitch accent difference between L+H* and H* in English biases listeners toward interpreting a temporarily ambiguous noun as referring to either a discourse-given or a discourse-new entity. Participants had to perform a word-recognition task (for example, candle vs. candy) and pick up one of the two competing objects, while their eye movements were being monitored. They found that although listeners interpreted these accents differently, their interpretive domains overlapped: L+H* created a strong bias toward contrast referents, whereas H* was compatible with both new and contrast referents.

Magnetoencephalography (MEG), a technique which non-invasively measures the magnetic fields generated by the brain's electrical activity during cognitive processing, has also been used to study pitch processing. For example, Fournier et al. (2010) used this technique to investigate the tonal and intonational pitch processing of some tonal contrasts (some of them alignment contrasts) by native speakers of the tonal dialect of Roermond Dutch as compared to a control group of speakers of Standard Dutch, a non-tone language. A set of words with identical phoneme sequences but distinct pitch contours, which represented different lexical meanings or discourse meanings (e.g. statement vs. question), were presented to both groups. The stimuli were arranged in a mismatch paradigm, under several experimental conditions: in the first condition (lexical), the pitch contour differences between stimuli reflected differences between lexical meanings; in the second condition (intonational), the stimuli differed in their discourse meaning. In these two conditions, both native and non-native responses showed a clear magnetic mismatch negativity in a time window from 150 to 250 msecs after the divergence point of standard and deviant pitch contours. In the lexical condition, a stronger response was found over the left temporal cortex of speakers of standard as well as non-standard Dutch. Crucially, in the intonational condition, the same activation pattern was observed in the control group, but not in the group of Roermond Dutch speakers, who showed a right-hemisphere dominance instead. Thus the lateralization of pitch processing was condition-dependent in the Roermond Dutch group only, suggesting that processes are distributed over both temporal cortices according to the functions available in the grammar.

Finally, researchers have begun to consider the role of potential articulatory landmarks and the coordination or alignment between tonal gestures (measured as F0 turning points) and oral constriction gestures. Recent work by Mücke et al. (2006), D'Imperio et al. (2007a), and Mücke et al. (2009) has investigated alignment patterns in three different languages (Italian, German, and Catalan), using electromagnetic mid-sagittal articulography (EMMA) to capture oral constriction gestures alongside acoustic recordings. In these studies, the end of pitch movements in bitonal pitch accents was found to co-occur with the minima and maxima of the closing gesture of C2 in C1V.C2 and C1VC2 sequences, and such pitch targets were seen to be more closely aligned in time with articulatory landmarks than with acoustic ones.
However, there was some variation as to the articulatory landmark which served as an anchor for the tonal target. For example, in German nuclear LH accents, the H peaks co-occurred with the intervocalic C target, whereas in pre-nuclear accents peaks co-occurred with the target for the following vowel (what is called "accent shift"; Mücke et al. 2009). In Catalan it was the consonantal peak velocity rather than the consonantal target which served as the landmark. Such an apparently small alignment difference in the articulatory anchor type may be used by speakers to make (or contribute toward making) phonological distinctions, as in Neapolitan, where H in L*+H (questions) aligns with the maximum constriction, and H in L+H* (statements) aligns with peak velocity (see D'Imperio et al. 2007a).

3 Phonological encoding: Tonal association and tonal alignment


The topic of this section is the relation between phonological association and phonetic alignment of tones and how it is encoded in a representational system. The starting point is provided by the autosegmental metrical approach to intonation, which has developed an explicit phonological representational approach that has been applied to a variety of languages (Pierrehumbert 1980; Pierrehumbert and Beckman 1988; Ladd 1996; Gussenhoven 2004; among others). Though the AM representational proposal can account for melodic patterns in a variety of languages, there are a number of areas that remain unresolved. Two of these issues relate to how to interpret the relationship between tones and metrically strong syllables in the AM model, namely the concept of starredness on the one hand and the interpretation of the relationship between phonological association and phonetic alignment on the other.

The AM phonological representation of pitch accents encodes "autosegmental" information (or pitch accent shapes, LH or HL) and "metrical" information, i.e. information about the association of tones with metrical constituents and the relative alignment of tones with the metrically prominent syllable. The surface alignment of tones is basically derived from the use of the star notation (*). The star notation encodes two complementary things: (i) phonological association between pitch accent shapes and stressed syllables – in other words, a tone gets a star when it is associated to a metrically strong position; and (ii) relative alignment in bitonal pitch accents – i.e. the tone that gets the star is the one that is directly linked to the metrically strong position.

In bitonal accents, the question of which tone in LH or HL accent shapes should be assigned a star is not completely straightforward. On this issue, Pierrehumbert's original definition states that "a strength relationship is defined on the two tones of bitonal accents and that it is the stronger tone which lines up with the accented syllable" (Pierrehumbert 1980: 76–77). According to this definition, it is ambiguous whether the star notation * indicates phonetic alignment between the tonal unit and the stressed syllable or just a "looser" phonological association. Similarly, Pierrehumbert and Beckman (1988: 234) note that "the * diacritic marks which tone of a bitonal accent is aligned with stress." Arvaniti et al. (2000: 120) state that "phonetically this use of the star is to be interpreted as signifying that the starred tone is aligned in time with the stressed syllable." In subsequent work, one of the most common interpretations of the star notation is that the starred tone is phonetically aligned with the stressed syllable, and thus a strict temporal alignment between the tone and its tone-bearing unit is expected.

Recently, attention has been drawn to the various problems created by the representational ambiguity of the star notation. One of them is that it can be difficult to decide between competing AM analyses of bitonal accents, because the same contours can be transcribed in different ways (Prieto et al. 2005).
For example, let us compare the surface alignment of the tones described by the English and Spanish L+H* – L*+H contrasts according to, respectively, Pierrehumbert (1980) and Sosa (1999). Even though the two phonological units capture the two-way phonological contrast present in both languages, the same labels L+H* and L*+H refer to different phonetic realizations (or alignment patterns) in the two languages. In fact, English L+H* corresponds to Spanish L*+H. This difference between the notational systems is caused by different interpretations of the star notation: while in the English notation the star is interpreted as an indication of phonological association between the tone and the prominent syllable, in the Spanish notation it is interpreted as phonetic alignment; that is, the star indicates whether the H peak is aligned (H*) or not aligned (L*) with the stressed syllable.

(2) Schematic representation of L+H* and L*+H
    a. English (after Pierrehumbert 1980): L+H*  L*+H
    b. Spanish (after Sosa 1999): L+H*  L*+H

In addition, some authors have pointed out that the theoretical concept of starredness is ill-defined and cannot be based solely on phonetic alignment (Arvaniti et al. 2000). Arvaniti et al. present evidence from Greek of the types of problems that arise when phonetic alignment to the accented syllable is taken to be the exponent of the association of tones with segments. As they note:

we show that there exist pitch accents that are clearly bitonal but neither tone is, strictly speaking, aligned with the accented syllable. We argue from this fact that association cannot be based on phonetic alignment in any straightforward way and that a more abstract and rigorously defined notion of starredness is required.

In Greek rising pitch accents in pre-nuclear position, typically neither L nor H is phonetically aligned with the stressed syllable: in most cases, the L is consistently aligned before the beginning of the accented syllable (5 msecs on average before the onset), and H displays more variability and is typically located in the posttonic syllable. Thus, these authors conclude that "if alignment is the sole exponent of the association of tones to segments, phonetic variability in this domain becomes a crucial issue when the phonological structure of a bitonal accent is in question" (Arvaniti et al. 2000: 121). We take it as essentially correct that a one-to-one relationship between phonological association and phonetic alignment is difficult to maintain in the current AM model.

In a recent proposal, Prieto et al. (2005) describe the contrastive possibilities of alignment of rising accents in three Romance languages: Central Catalan, Neapolitan Italian, and Pisa Italian (see also chapter 2: contrast). According to these authors, these Romance languages provide evidence that small differences in alignment in rising accents must be encoded phonologically. To account for such facts within the AM model, they develop the notion of "phonological anchoring" as an extension of the concept of secondary association originally proposed by Pierrehumbert and Beckman (1988). They propose that the phonological representation of pitch accents needs to include two independent mechanisms to encode alignment properties with metrical structure: (i) encoding of the primary phonological association (or affiliation) between the tone and its tone-bearing unit; and (ii), for some specific cases, encoding of the secondary phonological anchoring of tones to prosodic edges (i.e. moras, syllables, and prosodic words). (3) shows the schematic representation of the primary and secondary associations of a phrasal H within the accentual phrase in Japanese (Pierrehumbert and Beckman 1988: 129). The solid line indicates primary association to the accentual phrase α, and the dashed line secondary association to the second sonorant mora μ within the accentual phrase.

(3) Japanese (after Pierrehumbert and Beckman 1988: 129)
    [schematic representation: on the tone tier, H has a primary association (solid line) to the accentual phrase node α and a secondary association (dashed line) to the second sonorant mora μ; the moras dominate [+son] segments on the phoneme tier]
The Romance data provide crucial evidence of mora-edge, syllable-edge, and word-edge H tonal associations, and suggest that it is not only peripheral edge tones that seek secondary associations. In this way, the specification of metrical anchoring points in the phonological representation offers a more transparent analysis of the alignment contrasts found in Romance languages and, ultimately, can help in the task of defining a more transparent pitch accent typology. Finally, Prieto et al. (2005) argue that such an approach makes the mapping from phonological representation to surface alignment patterns more explicit, and that it thus allows for more straightforward cross-linguistic comparisons.

The evidence described above shows that even though AM representations are adequate when it comes to characterizing the minimal contrasts in pitch accent types found in different languages, the proper procedures by which to map phonological representations onto the surface alignment of tones (through the use of the star notation) are still somewhat unclear. This is because the specific details of the coordination between tones and the segments that are linked to the structural unit are not part of the phonological representation itself. We thus agree with Arvaniti et al.'s (2000: 130) suggestion "that the task for the future is to refine the notion of the phonological association of tones in intonational systems." In the near future, the contrastive possibilities of alignment found cross-linguistically need to be explored. This will provide firm ground from which to advocate a further refinement of the metrical side of the AM model.

4 Phonetic models of tonal alignment


Apart from changes in tonal alignment which have phonological effects, i.e. which encode a difference in meaning (see §2 and §3), tonal alignment is influenced by a variety of phonetic factors, such as tonal crowding, speech rate, segmental composition, and syllable structure. These fine-grained F0 alignment differences do not affect meaning, and are considered to arise from differences in phonetic implementation rather than phonological representation. In this section we review some of the production studies that have investigated the influence of such factors on tonal alignment patterns, as well as the perception studies that have demonstrated that some of these effects are employed by native speakers in lexical access tasks.

Cross-linguistically, the location of fundamental frequency peaks (or H values) has been shown to be greatly affected by the right-hand prosodic context, in such a way that the peak is retracted before upcoming pitch accents and boundary tones (see Silverman and Pierrehumbert 1990 for English and Prieto et al. 1995 for Spanish, for example). Prieto et al. (1995) examined the peak placement patterns of rising accents in Spanish and found the following: (i) the location of the start of the F0 rise is fairly constant (generally at the onset of the accented syllable); and (ii), as in English, the duration of the rising gesture is highly correlated with syllable duration. These results show that the slope and/or duration of an F0 movement in speech are not constant, as claimed by the fixed rise-time hypothesis (Fujisaki 1983; 't Hart et al. 1990; and others), but are instead governed by the coordination of the movement with the segmental string. Both studies demonstrated that a successful quantitative model of peak placement must contain at least two factors, namely the duration of the accented syllable and the distance in syllables to upcoming pitch accents or boundary tones.

The Segmental Anchoring Hypothesis (henceforth SAH), as articulated by Ladd et al. (1999) on the basis of work by Prieto et al. (1995) and Arvaniti et al. (1998), refers to the idea that the slope of tonal movements is not invariant, but rather is specifically related to segmental anchors. Arvaniti et al. (1998) found an unexpected and consistent stability effect when little or no tonal pressure was exerted on the pitch accent. In a Greek word such as [pa'ranoma] 'illegal', the H target of the LH pitch accent associated with the test stressed syllable ['ra] was consistently aligned over – or "anchored to" – the frontier between the post-accentual onset and the following vowel ([n] and [o]). This clearly contradicts the traditional "constant slope" and "constant duration" hypotheses (i.e. the fixed rise-time hypothesis: Fujisaki 1983; 't Hart et al. 1990; and others). The SAH says that both the beginning and the end of a rising or falling F0 movement are anchored to specific points in the segmental string, such as the beginning of the stressed syllable or the following unstressed vowel, and consequently the duration of the F0 movement is strongly dependent on the duration of the segmental interval between the anchor points. As we will see below, work on the effects of lower prosodic structure levels such as the syllable or the prosodic word on tonal alignment shows that we need to refine the SAH to incorporate these findings.
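The contrast between the two hypotheses can be stated procedurally. The sketch below (in Python, with invented segment times rather than measurements from Arvaniti et al. 1998 or Ladd et al. 1999) compares the H-peak location predicted by a fixed rise time with the location predicted by segmental anchoring: under anchoring, the duration of the rise covaries with the interval between the two segmental anchor points, for instance when speech rate changes.

```python
# Minimal sketch contrasting the fixed rise-time hypothesis with the
# Segmental Anchoring Hypothesis (SAH). All times are invented values
# in seconds, for illustration only.

FIXED_RISE = 0.120  # constant rise duration assumed by fixed-time models

def h_peak_fixed(l_valley_time):
    """Fixed rise-time hypothesis: H follows the L valley at a constant
    interval, whatever the segmental durations."""
    return l_valley_time + FIXED_RISE

def h_peak_anchored(landmark_time):
    """SAH: H is anchored to a segmental landmark (e.g. the boundary
    between the post-accentual onset consonant and the following vowel),
    so the rise stretches or compresses with the segmental material."""
    return landmark_time

# The same word at two speech rates: L is anchored at the accented-syllable
# onset, and the post-accentual landmark moves as the segments compress.
for rate, syll_onset, landmark in [("slow", 0.30, 0.50), ("fast", 0.22, 0.34)]:
    print(f"{rate}: fixed-time H at {h_peak_fixed(syll_onset):.2f} s, "
          f"anchored H at {h_peak_anchored(landmark):.2f} s, "
          f"anchored rise duration {landmark - syll_onset:.2f} s")
```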
Recent work on tonal alignment in different languages has shown that the position of the peak tends to change across syllable structure types (e.g. Rietveld and Gussenhoven 1995 for Dutch; D'Imperio 2000 for Neapolitan Italian; Prieto and Torreira 2007 for Peninsular Spanish; Prieto 2009 for Catalan). For example, D'Imperio (2000) found that the peak was located closer to the vowel offset in closed syllables in Neapolitan Italian. While in open syllables the peak was aligned with the end of the accented vowel, in closed syllables the peak was somewhat retracted and located within the coda consonant. This same effect of coda consonants on alignment has been detected both in rising accents in various languages (see citations above) and in falling nuclear accents in Catalan (Prieto 2009). The results indicate that while the beginning of the falling accent gesture (H) is tightly synchronized with the onset of the accented syllable, the end of the falling gesture (L) is more variable and is affected by syllable structure: in general, while in open syllables the end of the fall is aligned roughly with the end of the accented syllable, in closed syllables it is aligned well before the coda consonant.

D'Imperio et al. (2007b) hypothesized that Neapolitan listeners might capitalize on this alignment regularity for the perception of lexical contrasts. Specifically, their hypothesis was that listeners of Neapolitan Italian might identify more closed-syllable items when tonal alignment details are congruent with those for this type of syllable structure (see also Petrone 2008). In order to test this hypothesis, two natural productions of the words nono 'ninth' and nonno 'grandfather', both carrying a yes/no question nuclear accent, were manipulated in two ways. First, the researchers modified the length of the stressed vowel and the following consonant in five steps, in order to shift the perception of each item from nono to nonno and vice versa. Then, tonal alignment was shifted earlier, in four steps, without changing the percept of the question to that of a statement, but merely creating question patterns that would be more or less congruent with the syllabic structure of the base. Thirteen Neapolitan listeners identified the stimuli as either nono or nonno. Significantly, the results showed that the alignment manipulation produced a category boundary shift in the nonno base stimulus series, but no effect in the open-syllable series, supporting the hypothesis that fine detail of tonal alignment not only is employed to signal pragmatic contrast but may also be stored as part of the phonological specification of lexical items.

Similarly, acoustic work on a variety of languages has shown that H peaks are consistently affected by the position of the accented syllable within the word (for English, see Silverman and Pierrehumbert 1990, and for Spanish, Prieto et al. 1995). In general, peaks tend to shift backwards as their associated syllables approach the end of the word: in other words, the distance from the beginning of the accented syllable to the peak is longer in words with antepenultimate stress than in words with penultimate stress, which in turn show a longer distance than words with final stress. In order to correct for the potentially confounding effects of stress clash (or distance to the next accented syllable), Prieto et al. (1995) analyzed a subset of the data obtained from test syllables in different positions in the word (e.g. número 'number', numero 'I number', numeró '(s)he numbered'). Their materials consisted of word sequences in which there was a distance of two unstressed syllables between one accented syllable and the next (e.g. número rápido 'quick number', numero nervioso 'I number nervously', and numeró regular '(s)he numbered in a regular way'). The three diagrams in Figure 50.2 show a schematic representation of the difference in F0 timing patterns in the three conditions, número rápido, numero nervioso, and numeró regular. A significant effect of word position on different measures of peak alignment was found in all the comparisons.

Figure 50.2 A schematic representation of the difference in F0 timing patterns in the three conditions, número rápido, numero nervioso, and numeró regular
Similarly, in Silverman and Pierrehumbert's (1990) model of F0 peak location, dropping the variable "Word Boundary" (while leaving the variable "Stress Clash" as a main predictor) significantly worsened the fit of the model. Prosodic word effects seem to suggest the possibility that the end of the word (and not only the presence of upcoming accents or boundary tones) acts as a kind of prosodic boundary that exerts prosodic pressure on H tonal targets, and that this effect can be exploited in word boundary identification tasks. Prieto et al. (forthcoming) performed a set of production and perception experiments that dealt with potentially ambiguous utterances distinguished by word boundary location in Catalan and Spanish (e.g. Catalan mirà batalles '(s)he looked at battles' vs. mirava talles 'I/(s)he used to look at carvings'; Spanish da balazos '(s)he fires shots' vs. daba lazos 'I/(s)he gave ribbons'). For the perception experiments, they hypothesized that relative peak location would help Catalan and Spanish listeners in lexical access. The results of the production experiments showed that the prosodic word domain has a significant shifting effect on F0 peak location, and the results of the perception experiments showed that these alignment patterns are actively used by listeners in word identification tasks.

In general, the results of studies on lexical access (D'Imperio et al. 2007b; Prieto et al., forthcoming) support the hypothesis that listeners are able to employ fine allophonic details of H tonal alignment due to syllable structure or within-word position to identify lexical items. This empirical evidence demonstrates that prosodic structure must play an essential role in our understanding of the coordination of pitch gestures with the segmentals, and argues in favor of the view, supported by other work, that prosodic structure is manifested in details of articulation.
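The production findings reviewed in this section can be summarized procedurally. The sketch below (in Python, with invented coefficients, not fitted values from Silverman and Pierrehumbert 1990 or Prieto et al. 1995) shows the general shape of the kind of multi-factor peak-placement model these studies argue for: peak delay scales with the duration of the accented syllable and is retracted under tonal crowding and prosodic word-boundary pressure.

```python
# Minimal sketch of a multi-factor H-peak placement model. The coefficients
# are invented for illustration; only the structure of the model (which
# factors enter into it) reflects the studies discussed above.

def predicted_peak_delay(syllable_dur, sylls_to_next_event, word_final):
    """Predicted delay (seconds) of the H peak from the accented-syllable
    onset, given syllable duration, distance in syllables to the next
    pitch accent or boundary tone, and word position."""
    delay = 0.75 * syllable_dur                # rise scales with the syllable
    if sylls_to_next_event < 2:                # tonal crowding: retraction
        delay -= 0.030 * (2 - sylls_to_next_event)
    if word_final:                             # word-boundary pressure
        delay -= 0.025
    return max(delay, 0.0)

# Word-position effect with syllable duration held constant (cf. the
# número / numero / numeró materials): the peak retracts as the stressed
# syllable nears the word end or an upcoming accent.
print(predicted_peak_delay(0.16, 2, word_final=False))  # non-final stress
print(predicted_peak_delay(0.16, 2, word_final=True))   # final stress
print(predicted_peak_delay(0.16, 0, word_final=True))   # plus stress clash
```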

5 Tonal alignment: Evidence for target- vs. configuration-based theories of intonation


As pointed out in §2, work on tonal alignment has provided robust experimental evidence that changes in the synchronization of peaks and valleys with segmental landmarks are key perceptual cues for phonological distinctions across languages. This evidence has been interpreted as direct support for AM theory, which is widely held to afford a number of advantages over other discrete tone theories, as tonal alignment differences in this model are encoded phonologically in pitch accent units. Alignment studies have also been used to test specific predictions of different phonological models of prosody and intonation. For example, one of the old controversies in intonation studies surrounds the relative merits of target-based vs. configuration-based theories of intonational primitives (see Ladd 1996: §1.2 for a review; also Arvaniti and Ladd 2009).

The target-based model (also called the target-and-interpolation model by Arvaniti and Ladd 2009) is the phonetic basis of AM intonational phonology, which has become the dominant phonological framework for analyzing intonation. This model assumes that certain points in the contour (e.g. local targets, or F0 maxima and F0 minima) reflect phonologically specified targets, and thus derives the intonational contour by defining the tonal targets and then connecting them through an interpolating F0 curve that goes from one target to the next. In recent years there has been accumulating evidence from tonal alignment studies that L and H tones behave as static targets and that they align with the segmental string in extremely consistent ways. Typically, in a variety of languages, the L valley of pre-nuclear rises is precisely aligned with the beginning of the accented syllable (see Prieto et al. 1995 for Spanish, Arvaniti et al. 1998 for Greek, and Ladd et al. 1999 for English, for example). Moreover, some studies have shown that this precise alignment of intonational L tones with word or syllable boundaries is used by listeners in lexical identification tasks. For example, Ladd and Schepman (2003) showed that the different alignment of L in minimal pairs like Norman Elson/Norma Nelson is a useful cue to the word-boundary distinction between them. When L alignment was modified experimentally in such ambiguous phrases, this affected the listeners' judgments in the identification task. Similarly, a recent study on the tonal marking of the French Accentual Phrase (AP) by Welby (2003) showed that the L tone associated with the left edge of the first content word of the AP is aligned at the boundary between the last function word and the first syllable of the first content word. Welby's perception results showed that French listeners use the alignment of the L tone as a cue for lexical access (in pairs such as mes galops 'my gallops' and mégalo 'megalomaniac'). All in all, these alignment results, as well as many scaling results, have been interpreted in favor of the target-based hypothesis (for a review, see Ladd 1996).

On the other hand, configuration-based theories (also called concatenation models by Arvaniti and Ladd 2009) treat the contour as the result of stringing together entire tonal sequences (not necessarily straight lines) of various lengths. Traditional intonational descriptions of the so-called "British school" (e.g. Crystal 1969; O'Connor and Arnold 1973) and the approach adopted by the Eindhoven-based Instituut voor Perceptie Onderzoek (IPO; e.g. 't Hart et al. 1990) are of this sort, as is the more recent syllable-concatenation model proposed by Xu and colleagues (e.g. Xu and Wang 2001; Xu and Xu 2005).

Several results reported in the literature provide support for a configuration-based theory of intonation. For example, as mentioned above, D'Imperio and House (1997) undertook a perception experiment that investigated the contrast between questions and statements in Neapolitan Italian. They wanted to determine whether the major perceptual cue to this category distinction involved only the temporal alignment of the high-level target with the syllable, or whether the category percept also depended on the presence of a rising or falling melodic movement within the syllable nucleus. The results showed that the primary perceptual cue for questions is a rise through the vowel, while the primary cue for statements is a fall through the vowel.
D'Imperio and House claimed that these results confirmed the second hypothesis, thus supporting the notion that pitch movements through areas of stability are perceptually important for identifying tonal categories.

Contrasting results were obtained by Arvaniti and Ladd (2009), who carried out a production study in which they used acoustic alignment measures to test specific predictions of different phonological models of intonation. This involved undertaking a very detailed phonetic study of the Greek wh-question melody. According to their results, certain points in the Greek wh-question melody show little variability in scaling and predictable variability in alignment. A close analysis of the F0 alignment data showed that (i) the exact contour shape depended on the length of the question, and (ii) the position of the first peak and the low plateau depended on the position of the prominent anchor syllables. The study also showed predictable adjustments in alignment depending on the proximity of adjacent tonal targets. Figure 50.3 shows the F0 contour of a long wh-question. In long wh-questions, the contour starts with a rise from a low F0 point, the fall from the peak is relatively shallow, and the following low F0 stretch is long. By contrast, short wh-questions consist of a high tone associated in time with the stressed syllable of the wh-word, followed by a rapid fall to a stretch of low F0, followed by a small rise.

Figure 50.3 The waveform, spectrogram, and F0 contour of [apo'pu na mi'lane me ton 'menelo] 'Where could they be speaking to Menelos from?' (speaker KP), illustrating the measurements taken on the F0 contour and relevant segmental onsets. Figure reproduced from Arvaniti and Ladd (2009: 55)

Arvaniti and Ladd (2009) argue that the Greek wh-question data strongly argue in favor of a target-based model of intonational phonology like that proposed by the autosegmental metrical framework, and in particular in favor of the notion of sparse tonal specification. This is because one key assumption of the autosegmental metrical framework is that there is not necessarily any role for the syllable in modeling utterance contours. Rather, F0 targets can be temporally anchored to the segmental string in a variety of ways. This is exactly what we find in the Greek wh-contour data, as the alignment and scaling adjustments observed in the contour are totally predictable, and depend on the length and tonal crowding manipulations in the target utterance. Arvaniti and Ladd claim that these predictable effects cannot be explained by superposition models of intonation, such as Fujisaki's (1983) command–response model, or by configuration-based models that specify F0 by superposing contour shapes for shorter and longer domains, since both of them lack the mechanisms to account for effects such as the truncation of targets or asymmetrical adjustments to the larger tonal domains. Similarly, models that specify the F0 of all syllables (like Xu and colleagues' model), and thus assume that all syllables are specified for tone, cannot account for lawful variation except by using ad hoc tonal specifications, which, in turn, do not allow for phonological generalizations about contours applying to utterances of different lengths.
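The argument about sparse tonal specification can be illustrated with a toy version of the target-and-interpolation idea (a generic sketch, not Arvaniti and Ladd's model). A tune is specified as a handful of (time, F0) targets anchored to prosodic landmarks, and F0 elsewhere is derived by interpolation, so one and the same tonal specification extends naturally to utterances of different lengths; a model that specifies the F0 of every syllable would instead need a new specification for each utterance length.

```python
# Minimal target-and-interpolation sketch. Target values are invented
# for illustration; times are in seconds, F0 in Hz.

def interpolate_contour(targets, times):
    """Given sparse (time, f0) targets sorted by time and spanning all
    requested times, return F0 at each time by linear interpolation."""
    contour = []
    for t in times:
        for (t1, f1), (t2, f2) in zip(targets, targets[1:]):
            if t1 <= t <= t2:
                contour.append(f1 + (f2 - f1) * (t - t1) / (t2 - t1))
                break
    return contour

# The same sparse three-target tune applied to a short and a long
# utterance: targets are anchored to landmarks (utterance start, accented
# syllable, utterance end), not to every syllable.
short_tune = [(0.0, 180.0), (0.25, 120.0), (0.8, 200.0)]   # 0.8 s utterance
long_tune = [(0.0, 180.0), (0.25, 120.0), (2.0, 200.0)]    # 2.0 s utterance

print(interpolate_contour(short_tune, [0.1, 0.5, 0.7]))
print(interpolate_contour(long_tune, [0.1, 1.0, 1.8]))
```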

6 Conclusion


In recent decades, the issue of tonal alignment has been a key focus of research in intonational phonology. We now have solid evidence from different languages that F0 alignment differences can convey intonational contrasts, and that these alignment differences can be perceived in a near-categorical way. In this chapter, we have reviewed this work, as well as the use of several techniques in the investigation of tonal alignment processing (§2). As we have seen, a wide range of methodological paradigms have been applied to alignment research, including acoustic and articulatory analyses of speech production, judgments and reaction times obtained during identification and discrimination tasks, measurements of brain activity, and eye movements.

A recent debate within the autosegmental metrical approach to intonation has concerned how to represent these phonological contrasts in tonal alignment. As reported above, this theory does an especially good job of accounting for the fact that tonal alignment differences can convey intonational contrasts. In the AM framework, the star notation encodes both the phonological association of tones with a stressed syllable and the relative alignment of tones in bitonal pitch accents. However, though AM representations can adequately characterize the minimal contrasts in pitch accent types found in different languages, the procedures for mapping phonological representations onto the surface alignment of tones through the use of the star notation are still somewhat unclear. This chapter has reviewed some recent proposals regarding this issue, which highlight the need to further investigate the contrastive possibilities of alignment found cross-linguistically.

Apart from the phonological contrasts conveyed by tonal alignment, F0 tonal patterns are influenced by a variety of phonetic factors, such as prosodic crowding, speech rate, segmental composition, upcoming syllable structure, and prosodic word boundaries. These fine-grained F0 alignment differences do not affect intonational meaning. This chapter has reviewed some of the production and perception studies that have informed current phonetic models of tonal alignment. This work has highlighted principles of stability, as well as of adaptation to neighboring prosodic structure, as basic pillars of phonetic models of tonal alignment. Importantly, some of these alignment patterns have been shown to be actively used by listeners in word identification and lexical access tasks.

Finally, tonal alignment issues have historically been used as arguments to test the predictions of phonological models of intonation and to bear upon current theories of intonational phonology. The last section of this chapter has offered a selection of the arguments put forth in favor of the target-based model of intonation. As a final note, we believe that the full exploitation of recent methodological advances will provide important answers about the role of tonal alignment in phonological and phonetic models of intonation.


ACKNOWLEDGMENTS

Preparation of this chapter was supported by two grants awarded by the Spanish Ministerio de Ciencia e Innovación: projects FFI2009-07648/FILO and CONSOLIDER-INGENIO 2010 "Bilingüismo y Neurociencia Cognitiva" CSD2007-00012. I am grateful to Rafèu Sichel, two anonymous reviewers, and the editors of this volume, Colin Ewen, Beth Hume, Marc van Oostendorp, and Keren Rice, for all their help and very useful comments on an earlier version of the manuscript.

REFERENCES

Arvaniti, Amalia & D. Robert Ladd. 2009. Greek wh-questions and the phonology of intonation. Phonology 26. 43–74.
Arvaniti, Amalia, D. Robert Ladd & Ineke Mennen. 1998. Stability of tonal alignment: The case of Greek prenuclear accents. Journal of Phonetics 26. 3–25.
Arvaniti, Amalia, D. Robert Ladd & Ineke Mennen. 2000. What is a starred tone? Evidence from Greek. In Michael B. Broe & Janet B. Pierrehumbert (eds.) Papers in laboratory phonology V: Acquisition and the lexicon, 119–131. Cambridge: Cambridge University Press.
Beckman, Mary E. & Janet B. Pierrehumbert. 1986. Intonational structure in Japanese and English. Phonology Yearbook 3. 255–309.
Bruce, Gösta. 1977. Swedish word accents in sentence perspective. Lund: Gleerup.
Chen, Aoju, Els den Os & Jan Peter de Ruiter. 2007. Pitch accent type matters for online processing of information status: Evidence from natural and synthetic speech. Linguistic Review 24. 317–344.
Crystal, David. 1969. Prosodic systems and intonation in English. Cambridge: Cambridge University Press.
Dilley, Laura C. 2007. The role of F0 alignment in distinguishing categories in American English intonation. Unpublished ms., Bowling Green State University.
D'Imperio, Mariapaola. 2000. On defining tonal targets from a perception perspective. Ph.D. dissertation, Ohio State University.
D'Imperio, Mariapaola & David House. 1997. Perception of questions and statements in Neapolitan Italian. Proceedings of the 5th European Conference on Speech Communication and Technology (Eurospeech 1997), vol. 1, 251–254. Rhodes, Greece.
D'Imperio, Mariapaola, Robert Espesser, Hélène Lœvenbruck, Caroline Menezes, Noël Nguyen & Pauline Welby. 2007a. Are tones aligned with articulatory events? Evidence from Italian and French. In Jennifer Cole & José Ignacio Hualde (eds.) Laboratory phonology 9, 577–608. Berlin & New York: Mouton de Gruyter.
D'Imperio, Mariapaola, Caterina Petrone & Noël Nguyen. 2007b. Effects of tonal alignment on lexical identification in Italian. In Carlos Gussenhoven & Tomas Riad (eds.) Tones and tunes, vol. 2: Experimental studies in word and sentence prosody, 79–106. Berlin & New York: Mouton de Gruyter.
Fournier, R., C. Gussenhoven, O. Jensen & P. Hagoort. 2010. Lateralization of tonal and intonational pitch processing: An MEG study. Brain Research 1328. 79–88.
Fujisaki, Hiroya. 1983. Dynamic characteristics of voice fundamental frequency in speech and singing. In Peter F. MacNeilage (ed.) The production of speech, 39–55. New York: Springer.
Gili-Fivela, Barbara. 2009. From production to perception and back: An analysis of two pitch accents. In Susanne Fuchs, Hélène Lœvenbruck, Daniel Pape & Pascal Perrier (eds.) Some aspects of speech and the brain, 363–405. Frankfurt am Main: Peter Lang.
Gussenhoven, Carlos. 2004. The phonology of tone and intonation. Cambridge: Cambridge University Press.
Gussenhoven, Carlos. 2006. Experimental approaches to establishing discreteness of intonational contrasts. In Sudhoff et al. (2006), 321–334.
Hart, Johan 't, René Collier & Antonie Cohen. 1990. A perceptual study of intonation: An experimental-phonetic approach. Cambridge: Cambridge University Press.
Kohler, Klaus J. 1987. Categorical pitch perception. In Thomas V. Gamkrelidze (ed.) Proceedings of the 11th International Congress of Phonetic Sciences, vol. 5, 331–333. Tallinn: Academy of Sciences of the Estonian SSR.
Ladd, D. Robert. 1996. Intonational phonology. Cambridge: Cambridge University Press.
Ladd, D. Robert & Astrid Schepman. 2003. Sagging transitions between high pitch accents in English: Experimental evidence. Journal of Phonetics 31. 81–112.
Ladd, D. Robert, D. Faulkner, H. Faulkner & A. Schepman. 1999. Constant "segmental" anchoring of F0 movements under changes in speech rate. Journal of the Acoustical Society of America 106. 1543–1554.
Mücke, Doris, Martine Grice, Johannes Becker, Anne Hermes & Stefan Baumann. 2006. Articulatory and acoustic correlates of prenuclear and nuclear accents. In Rüdiger Hoffmann & Hansjörg Mixdorff (eds.) Proceedings of Speech Prosody 2006, 297–300. Dresden: TUDpress.
Mücke, Doris, Martine Grice, Johannes Becker & Anne Hermes. 2009. Sources of variation in tonal alignment: Evidence from acoustic and kinematic data. Journal of Phonetics 37. 321–338.
Niebuhr, Oliver. 2007. The signalling of German rising-falling intonation categories: The interplay of synchronization, shape, and height. Phonetica 64. 174–193.
O'Connor, J. D. & G. F. Arnold. 1973. Intonation of colloquial English. London: Longman.
Petrone, Caterina. 2008. From targets to tunes: Nuclear and prenuclear contribution in the identification of intonation contours in Italian. Ph.D. dissertation, Université de Provence.
Pierrehumbert, Janet B. 1980. The phonetics and phonology of English intonation. Ph.D. dissertation, MIT.
Pierrehumbert, Janet B. & Mary E. Beckman. 1988. Japanese tone structure. Cambridge, MA: MIT Press.
Pierrehumbert, Janet B. & Shirley Steele. 1989. Categories of tonal alignment in English. Phonetica 46. 181–196.
Prieto, Pilar. 2009. Tonal alignment patterns in Catalan nuclear falls. Lingua 119. 865–880.
Prieto, Pilar & Francisco Torreira. 2007. The segmental anchoring hypothesis revisited: Syllable structure and speech rate effects on peak timing in Spanish. Journal of Phonetics 35. 473–500.
Prieto, Pilar, Jan van Santen & Julia Hirschberg. 1995. Tonal alignment patterns in Spanish. Journal of Phonetics 23. 429–451.
Prieto, Pilar, Mariapaola D'Imperio & Barbara Gili-Fivela. 2005. Pitch accent alignment in Romance: Primary and secondary associations with metrical structure. Language and Speech 48. 359–396.
Prieto, Pilar, Eva Estebas-Vilaplana & Maria del Mar Vanrell. Forthcoming. The relevance of prosodic structure in tonal articulation: Edge effects at the prosodic word level in Catalan and Spanish. Journal of Phonetics.
Rietveld, Toni & Carlos Gussenhoven. 1995. Aligning pitch targets in speech synthesis: Effects of syllable structure. Journal of Phonetics 23. 375–385.
Silverman, Kim E. A. & Janet B. Pierrehumbert. 1990. The timing of prenuclear high accents in English. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 72–106. Cambridge: Cambridge University Press.
Sosa, Juan Manuel. 1999. La entonación del español: Su estructura fónica, variabilidad y dialectología. Madrid: Cátedra.
Sudhoff, Stefan, Denisa Lenertová, Roland Meyer, Sandra Pappert, Petra Augurzky, Ina Mleinek, Nicole Richter & Johannes Schließer (eds.) 2006. Methods in empirical prosody research. Berlin & New York: Mouton de Gruyter.
Watson, Duane G., Christine A. Gunlogson & Michael K. Tanenhaus. 2006. Online methods for the investigation of prosody. In Sudhoff et al. (2006), 259–282.
Watson, Duane G., Michael K. Tanenhaus & Christine A. Gunlogson. 2008. Interpreting pitch accents in online comprehension: H* vs. L+H*. Cognitive Science 32. 1232–1244.
Welby, Pauline. 2003. The slaying of Lady Mondegreen, being a study of French tonal association and alignment and their role in speech segmentation. Ph.D. dissertation, Ohio State University.
Xu, Yi & Emily Wang. 2001. Pitch targets and their realization: Evidence from Mandarin Chinese. Speech Communication 33. 319–337.
Xu, Yi & Ching X. Xu. 2005. Phonetic realization of focus in English declarative intonation. Journal of Phonetics 33. 159–197.

51 The Phonological Word

Anthi Revithiadou

1 Introduction


In the past few decades, the field of phonology has witnessed the development of an assortment of phonological theories and their offshoots. Seminal among them is the theory of Prosodic Phonology, which explores how prosodic structure is built in relation to morphosyntactic structure. Prosodic Phonology employs mapping rules that aim at organizing chunks of structure (e.g. strings smaller or larger than the grammatical word) into hierarchically ordered layers of prosodic units which, in turn, form the domains within which phonological rules apply. Such phonological domains need not be isomorphic to morphosyntactic constituents. More importantly, the existence of a mapping mechanism entails that rules of phonology proper (i.e. rules inducing changes in the phonological shape and pattern of a string of elements) do not make direct reference to morphosyntactic constituents. (In this sense, Prosodic Phonology sharply contrasts with direct reference approaches to the interface, most articulately expressed in the work of Kaisse 1983, 1985 and Odden 1987, 1990.) In general, the basic tenet of Prosodic Phonology is that phonological rules can neither see nor refer to any structure other than the units of the Prosodic Hierarchy (Selkirk 1978b, 1980, 1981a, 1981b, 1984, 1986, 1995; Nespor and Vogel 1982, 1986; Hayes 1989; see also chapter 33: syllable-internal structure, chapter 40: the foot, chapter 48: stress-timed vs. syllable-timed languages, chapter 54: the skeleton, chapter 104: root–affix asymmetries, and chapter 56: sign syllables for some other aspects of the Prosodic Hierarchy):


(1) Prosodic Hierarchy
    Utterance (U)
    Intonational Phrase (IP)
    Phonological Phrase (PPh)
    (CG)
    Phonological Word (PW)
    Foot (F)
    Syllable (σ)

(The CG is the most debatable prosodic constituent, and is usually not included in the Prosodic Hierarchy. It was originally introduced by Hayes (1989) and later adopted and further established by Nespor and Vogel (1986); see also chapter 84: clitics.)

The Phonological Word (PW) is one of the best-established constituents of the Prosodic Hierarchy, and has received a great deal of attention. Despite its wide acceptance, certain aspects of the PW are still under debate in the literature. Part of the controversy surrounding it pertains to the fact that the PW has broadly been established as the constituent that mediates the interface of phonology with morphology (i.e. the lexical component) and syntax (i.e. the post-lexical component), even though it was not originally intended to encapsulate both aspects of the interface. Moreover, it is the prosodic constituent which, roughly, corresponds to a grammatical word, and hence, naturally, cannot escape the conflicts and ambiguities associated with this notion.

This chapter, therefore, aims to address all the major issues pertaining to the PW. More specifically, I will first explore the exact nature of the mapping mechanism(s) involved in the construction of this prosodic constituent. The pivotal questions center on the ways the mapping rules operate to group a specific chunk of morphosyntactic structure into a PW and the way this is formally expressed. It will become apparent from the discussion that, despite the thirty or so years of research in this area, certain aspects of the mapping mechanism are still poorly understood. Second, I will present the methodology and, in particular, the diagnostic criteria employed by researchers in identifying the domain of the PW. This survey will provide the opportunity to investigate and, more importantly, assess the amount of knowledge that has been accumulated over the years regarding the nature of the rules that identify this prosodic constituent. Third, I will review the empirical situation and, more specifically, whether cross-linguistic evidence renders the PW universally viable or not. To this end, alternative proposals – which range from the formation of extended versions of the PW to the introduction of smaller or larger reincarnations of it (e.g. the small word, etc.) – will also be reviewed. Such enriched versions of the Prosodic Hierarchy have been proposed to accommodate complex constructions involving function elements (e.g. particles, clitics, etc.), compounds, and complex predicates, which yield somewhat "looser" versions of the notion "word" and hence raise challenging questions about the mode in which the mapping is performed.

The remainder of this chapter is organized as follows: §2 sketches out the main advancements that led to the development of a hierarchical model of prosodic constituency, an integral part of which is the PW. §3 examines how prosodic units relate to constituents of morphosyntactic structure and the places where this mapping is performed. §4 sets out the diagnostic criteria for identifying phonological wordhood. §5 focuses on the properties of an extended version of the PW, and §6 concludes this chapter.

2

The birth of the Prosodic Hierarchy: From boundaries to prosodic domains

In the Sound Pattern of English (SPE; Chomsky and Halle 1968) two concepts of syntactic surface structure are acknowledged: (a) output of the syntactic component, and (b) input to the phonological component. Re-adjustment rules are employed to handle discrepancies between these two types of structure. Their main task is to convert the syntactic string into a form that can later be read off and interpreted by phonology. More specifically, syntactic information is encoded in phonology by rules that insert boundary symbols at the edges of syntactic constituents. Such boundaries are considered to be segment-like elements that lack any phonetic manifestation. Two are relevant for the discussion here: (a) the syntactic boundary, #, used to indicate edges of major syntactic categories (N, V, A, etc.), phrasal categories (e.g. NP, VP, etc.) and stress-neutral affixes (Chomsky and Halle 1968: 12, 366), and (b) the morphological boundary, +, which indicates lexically assigned morphological boundaries (Chomsky and Halle 1968: 94). Essentially, the introduction of boundaries initiates an indirect mode of interaction between the components of grammar: phonological rules can either refer to boundaries in their structural description or be blocked by them, but they can never refer directly to syntactic edges. Much of the post-SPE era has been devoted to defining the exact number of boundaries and their relative strength prominence (e.g. McCawley 1968; Selkirk 1972). Gradually, however, the focus of attention shifted from linearly ordered boundary-defined domains to hierarchically organized prosodic ones.3 Based on the seminal work of Liberman (1975) and Liberman and Prince (1977), Selkirk (1978b, 1980, 1981a) proposes that phonological representations, like syntactic ones, are hierarchical in nature. She provides compelling arguments in favor of a “suprasegmental, hierarchically arranged organization” of the utterance (Selkirk 1981a: 111). Selkirk aptly points out the “problem of nestedness”: a phonological rule that applies across a “stronger” boundary can also apply across all “weaker” ones. Analogously, a phonological rule that applies before or after a certain

3

The increasing debate against boundaries was also stimulated by their clearly diacritic character. Boundaries do not constitute linguistic objects and, as such, lack any formal existence in mental representations (see e.g. Pyle 1972; Rotenberg 1978).

The Phonological Word

4

boundary can do so with all stronger ones. For instance, in Sanskrit a rule of voicing affects stops in intervocalic position (Selkirk 1980: 115). Crucially, the rule applies at the domain of the Utterance and affects only consonants at # boundaries (2b), ignoring the ones residing at a + boundary (2a). (2) a. #marut+i# → b. #parivraÍ# #ajam# →

maruti parivra“ ajam

In the boundary theory, the typological predictions in the domain of application of phonological rules described above are a mere stipulation. However, they come for free in a theory that assumes a hierarchically organized set of prosodic categories, i.e. sub-units of prosodic structure such as the syllable, the foot, the phonological word, the phonological phrase, the intonational phrase, and the utterance. This hierarchical constellation, the Prosodic Hierarchy, signals the birth of Prosodic Phonology. The advantage of the new theory is that the principle in (3) (Selkirk 1984; Hayes 1989) can easily capture the “nested” effect in the application of phonological rules mentioned above. (3) Strict Layer Hypothesis (SLH) (Hayes 1989: 204) The categories of the Prosodic Hierarchy may be ranked in a sequence C 1, C2 , . . . Cn , such that a. all segmental material is directly dominated by the category C n , and b. for all categories C i , i ≠ n, C i directly dominates all and only constituents of the category C i+1. Selkirk strengthens the argumentation in favor of prosodic constituency by showing that prosodic domains have independent motivation, besides the interface (Selkirk 1980: 110, 126–129; 1981a: 125; 1984: 8ff.). For instance, the PW in English is a category with an internal organization that yields patterns of relative (i.e. strong vs. weak) stress prominence, e.g. (‘/r/)Fw (’spekt/v)Fs , while at the same time constituting the domain of application of various phonological rules (e.g. nonlow vowels are tensed in PW-final position). After an exploratory period (Nespor and Vogel 1982, 1983; Nespor 1985, 1986; Vogel 1985, 1986), Nespor and Vogel (1986) extended Selkirk’s work and further enriched it with novel data from a wide array of languages (e.g. Hungarian, Greek, Turkish, and Italian). The focus of research in the following years was on defining the basic premises of Prosodic Phonology (e.g. the nature of the mapping algorithm, the way it is performed, etc.) and lending further support to the theory with empirical evidence from cross-linguistic research (e.g. Booij 1983, 1985a, 1985b; Booij and Rubach 1984; Hayes 1989; Itô and Mester 2003). In the Prosodic Phonology literature, however, the main motivation for prosodic constituency is non-isomorphism, most commonly expressed as a mismatch between phonological and morphosyntactic boundaries (e.g. Selkirk 1981a; Nespor and Vogel 1982). Nespor and Vogel (1986) argue that in Hungarian, for instance, vowel harmony takes place within a stem and a string of suffixes – (4a) and (4b) – but fails to apply in strings that contain sequences of stems (4c) or a prefix plus a stem (4d) (see chapter 123: hungarian vowel harmony). The reason for the disparity in the application of the process relies on the nature of

Anthi Revithiadou

5

the phonological domain formed by the elements involved. Vowel harmony applies within the domain of the PW, and a stem + suffix string is mapped into one PW. Significantly, compounds and prefixed constructions form two separate PWs, as shown in (4c) and (4d), despite the fact that they constitute single grammatical words. (4) Hungarian vowel harmony (Nespor and Vogel 1986) ölelés hajó b. ölelés-nek hajó-nak c. Buda-Pest könyv-tar book-collection d. be-utazni in-commute oda-menni there-go a.

[ölelés]PW [hajó]PW [ölelésnek]PW [hajónak]PW [Buda]PW [Pest]PW [könyv]PW[tar]PW

‘embracement’ ‘ship’ ‘embracement-dat sg’ ‘ship-dat sg’ ‘Budapest’ ‘library’

[be]PW [utazni]PW

‘to commute in’

[oda]PW [menni]PW ‘to go there’

The reference to prosodic constituents such as the PW allows us to describe the rule of vowel harmony in a unified way. The issue of (non-)isomorphism is central in defining and exploring the nature of phonological wordhood and is addressed in detail in subsequent sections of this chapter. To summarize, each one of the prosodic categories in the Prosodic Hierarchy is governed by its own principles of internal constituency, and each one forms a domain for the application of phonological rules. Moreover, certain prosodic domains are introduced in order to capture the non-isomorphic character of the interface. Boundaries, on the other hand, are entities that “assist” the interface but have a strong diacritic flavor. The ensuing sections explore the mode in which a particular prosodic constituent, namely the PW, relates to constituents of the morphosyntactic structure and the place this mapping occurs.

3

The Phonological Word at the interface

The classical Prosodic Phonology theory (Selkirk 1981a, 1981b, 1984, 1986, 1995; Nespor and Vogel 1982, 1986) assumes the organization of grammar in (5). Thus, the rules that undertake the mapping of morphosyntactic structure into a PW are part of the post-lexical phonology. Selkirk (1984: 82) dubs this a syntax-first model. (5)

syntactic surface structure → mapping rules → prosodic representations → phonological rules → phonetic representations

This view, however, has been challenged by several researchers (e.g. Booij 1983, 1988; van der Hulst 1984; Inkelas 1989; Booij and Lieber 1993). Nowadays it is widely accepted that constituents up to the level of the PW are built lexically, and that phonological and morphological structure is constructed at the same time, from bottom up. This issue is addressed in detail in §3.3.

The Phonological Word

6

3.1 Mapping rules under the Strict Layer Hypothesis Nespor and Vogel (1986) recognize that there are several options available for the definition of PW. They attribute the attested variation to an assortment of morphological notions that mapping rules can be sensitive to (e.g. stems, roots, sequences of stems, and suffixes, etc.). There are languages, like Greek for instance, in which a lexical word, i.e. the terminal element of the syntactic tree, whether a compound or simple word, is mapped into a PW. On the other hand, there are languages like Hungarian, in which mapping rules are sensitive to smaller elements such as stems, prefixes, or sequences of stems and suffixes. Finally, there are languages such as Dutch, in which certain elements (e.g. suffixes or prefixes) are endowed with a diacritic that grants them independent PW status regardless of the general dictates of the mapping mechanism. For instance, in Italian vowel-final prefixes are assigned independent PW status due to language-specific syllable well-formedness conditions, a restriction that the mapping rule must read and comply with. In Nespor and Vogel’s (1986) model of mapping there is yet another source of phonological wordhood, namely the SLH in (3). The principle of proper nesting requires all “unparsed” elements to prosodify into a PW in the absence of a neighboring host. The definition of PW in (6) encapsulates the aforementioned mapping possibilities. (6) PW domain (Nespor and Vogel 1986: 141) A. The domain of PW is Q (= any terminal element of the syntactic tree). or B. I. The domain of PW consists of a. a stem; b. any element identified by specific phonological and/or morphological criteria; c. any element marked with the diacritic [+W]. II. Any unattached elements within Q form part of the adjacent PW closest to the stem; if no such PW exists, they form a PW on their own. The above set of statements on the application of mapping rules makes some predictions. First, PWs cannot be larger than a terminal element in the syntactic tree. Second, no single stem can be mapped into more than one PW, and, third, affixes will always be part of the PW of their base unless they bear a diacritic. Interestingly, the definition in (6) encompasses diacritic information as well as special restrictions imposed by the phonological and/or morphological components of the grammar. Such a mapping algorithm, therefore, is far too powerful to be insightfully implemented in an interface theory which aspires at advancing the understanding of the factors involved in PW formation. Selkirk (1986), based on Chen (1987), proposes an end-based mapping theory, which operates on the edges of X-bar constituents. The basic idea is that a prosodic constituent is demarcated by the right or the left edge of selected syntactic constituents (i.e. X 0, X′, X″) (Selkirk and Shen 1990: 319): (7) The syntax–phonology mapping For each category C n of the prosodic structure of a language there is a two-part parameter of the form C n: {Right/Left; X m}, where X m is a category type in the X-bar hierarchy.

Anthi Revithiadou

7

It is a parametric choice of a language as to which edge of the X 0 (= word), for instance, will serve as the beginning or the end point of the PW domain. An advantage of the model is that the mapping algorithm is cross-categorical: the end rules can apply at some level of the X-bar hierarchy in order to form the appropriate prosodic domains. For instance, they apply at the X 0 to derive PWs and at the X′ or X″ to derive PPhs. A comparison between the two algorithms reveals a significant difference in their descriptive and predictive power. The prosodic patterns established between a PW and a morphological word (MW)4 by Nespor and Vogel’s (1986) algorithm are given in (8). Recall that this algorithm grants independent PW status to subminimal elements such as function words (fnc), prefixes (prf), etc., either as a result of a diacritic or due to the SLH. Crucially, a PW can never be larger than an MW. (8)

a. b. c.

PW = MW or x PW < MW *PW > MW

where x = fnc/prf

Selkirk’s (1986) end-based mapping algorithm, on the other hand, assigns a PW status to MWs but, importantly, permits a PW larger than an MW as a prosodic output. This domain can be derived when function elements are trapped in between two MWs, as shown in (9). Selkirk (1986, 1995) explicitly states that only lexical categories – not functional ones – and their projections are visible to the mapping rules. This entails, therefore, that, depending on the end rule parameter setting, the function words will prosodify together with the preceding or the following MW (= X 0). In the following abstract example, the PW will extend from the left (9b) or the right (9c) edge of one lexical item to the left or the right edge, respectively, of the next, incorporating in the process any intervening function words. (9)

a. b. c.

X 0 fnc fnc X 0 [X 0 fnc fnc]PW [X 0]PW [X 0]PW [fnc fnc X 0]PW

end rule: left end rule: right

Importantly, the original formulation of the end-based mapping rule does not allow elements that are smaller than the MW to constitute independent PWs. As a consequence, clitics, prefixes, and suffixes are deprived of this prosodic possibility. Given the SLH, only one option is available: incorporation of the sub-minimal element into the PW of a neighboring MW. The parsing options for PWs derived by the end-based algorithm are summarized in (10): (10)

4

a. b. c.

PW = MW *PW < MW PW > MW

In linguistics, the notion “morphological or grammatical word” is not uncontroversial. Dixon and Aikhenvald (2002: 18ff.) provide different types of criteria to define it and also distinguish it from PW. The most important three are that the elements a grammatical word consists of must always occur together, in a fixed order, and have a conventionalized coherence and meaning.

The Phonological Word

8

Interestingly, the two models diverge in the cases of non-isomorphism: Nespor and Vogel’s algorithm allows a PW to be smaller but not larger than an MW, whereas the end-based mapping sanctions the opposite. Furthermore, each theory makes different predictions with respect to the prosodization of sub-minimal elements. Due to the SLH, the end-based algorithm will force such elements to incorporate into an adjacent PW, whereas Nespor and Vogel’s algorithm will allow them to form an independent PW. The end-based algorithm is silent regarding the prosodization of non-terminal syntactic elements such as stems, prefixes, and suffixes. The empirical facts provide only partial support for each model. Let us start with the pattern PW < MW, predicted only by Nespor and Vogel’s algorithm. In northern Italian, prefixes and the stems of compounds, but not suffixes, form separate PW-domains for the rule of intervocalic s-voicing (Nespor and Vogel 1986: 125). As evidenced by the examples in (11), this rule applies in monomorphemic and inflected words, (11a) and (11b), which constitute one PW. It is blocked, however, between a prefix and stem (11c) or between the stems of a compound (11d), suggesting that such constructions are parsed into two PWs. (11)

Northern Italian: Intervocalic s-voicing a. b. c. d.

a/s/ola ca/s/-e a-/s/ociale tocca-/s/ana

a[z]ola ca[z]e a[s]ociale tocca[s]ana

[azola]PW [caze]PW [a]PW [sociale]PW [tocca]PW [sana]PW

‘button hole’ ‘house-pl’ ‘asocial’ ‘cure all’

During the exploratory period of Prosodic Phonology, a growing body of crosslinguistic evidence revealed that affixes may form independent PWs (e.g. Booij and Rubach 1984 for Polish and English; Hannahs 1991, 1995a, 1995b for French; amongst many others). Cross-linguistic evidence, therefore, gives an empirical advantage to the Nespor and Vogel mapping algorithm compared to the end-based one. However, several researchers have shown that the latter can easily handle these facts if end rules are appropriately modified so that they can read off edges of elements smaller than the word, such as stems (Kang 1992), or even edges of functional categories, e.g. agreement (Rice 1993). Furthermore, the end-based algorithm seems to have the advantage on the empirical side in cases where the PW is larger than the MW. Booij (1988) draws attention to clitic constructions from Latin and Dutch where clitics – even though they are independent grammatical words – are phonologically dependent on an adjacent word. This mapping is predicted by the end-based algorithm, but not by the Nespor and Vogel one. It is important to emphasize here that Nespor and Vogel (1986) posit a different prosodic constituent for the prosodic organization of such clitic constructions, namely the Clitic Group (CG). Within this larger constituent, however, the Nespor and Vogel mapping algorithm, under the dictates of SLH, is forced to elevate each independent element, i.e. clitic, into a PW. Below I exemplify from Greek the results of each model of mapping. In Greek, weak object pronouns precede the non-imperative verb form (12a) but follow the imperative (12b) (Revithiadou and Spyropoulos 2008, and references cited therein).

Anthi Revithiadou

9 (12)

Greek object clitics a.

o ’petr-os to ’Ïjava-se the Peter-nom.sg clt.3n.sg.acc read-pst.3sg ‘Peter read it.’

b. ‘Ïjava-’se to read-2sg.imp clt.3n.sg.acc ‘Read it!’

(< ’Ïjava-se)

Under a Nespor and Vogel-type mapping, each clitic will be granted PW status, as shown in (13). The problem with this structure, however, is that the PW of the clitic exhibits a different behavior from that of the PW of the host element. A clitic may “trigger” stress on the final syllable of the preceding PW (14a) or carry stress itself (14b). More importantly, examples like (14a) pose a serious threat to the SLH. The addition of the clitic causes the form to be further footed so that the threesyllable restriction imposed by the language can be salvaged: (‘Ïjava)(’se to) mu. The problem now is that the newly formed foot cuts across two PW boundaries: [(‘Ïjava)(’se]PW [to)]PW [mu]PW, in total disrespect of proper containment.5 (13)

(14)

PW

PW

PW

‘Ïjava’se

to

mu

a. ‘Ïjava-’se to mu read-2sg.imp clt.3n.sg.acc clt.1sg.gen ‘Read it to me!’ b. ‘par-e ’mu to take-2sg.imp clt.1sg.gen clt.3n.sg.acc ‘Take it for me!’

Furthermore, there is compelling evidence that postverbal clitics are incorporated into the PW of the verb (see footnote 5 and Revithiadou and Spyropoulos 2008 for more evidence), as correctly predicted by the end-based algorithm PW: {Left; X 0}. Thus, the comparison so far gives a descriptive advantage to the end-based model. A more careful examination of the facts, however, reveals that this algorithm is not unproblematic either. Given the left-end orientation of the rule, pre-verbal clitics are expected to encliticize to the preceding PW, but they do not, as shown in (15). If the clitics incorporated into the PW of the NP /o ’.oÏoros/ ‘the Theodorenom.sg’, the window restriction would have forced the development of a new stress on the last syllable of the noun, yielding [o .oÏo’ros tu to]PW ‘the Theodore-nom.sg clt.3sg.gen clt.3n.sg.acc.’ Clearly, this expectation is not confirmed by the data.

5

Nespor and Vogel (1986: 154–155) escape this problem by employing a (grid-based) Stress Readjustment rule. There is independent evidence, however, that clitics in Greek are footed. For instance, imperatives opt to incorporate the enclitic by deleting their final vowel, e.g. /:rapse ton/ (’:rapston)F ‘write-imp.2sg clt.3m.sg.acc’. Alternatively, the clitic may be augmented so that it can form its own foot, e.g. (‘:rapse)F (’tone)F. Similar augmentation phenomena are independently enforced by foot well-formedness conditions, e.g. /ro’ta-n/ (with inherent accent) ‘ask-3pl’ ro(’tane) ~ ro(’tan).

The Phonological Word (15)

10

o ’.oÏor-os tu to Ïja’vaz-i the Theodore-nom.sg clt.3sg.gen clt.3n.sg.acc read-3sg ‘Theodore reads it to him.’

Similar cases have been found in a variety of languages, underlining, among other things, the asymmetric behavior often witnessed in the prosodic organization of constructions with function elements (e.g. particles, clitics, etc.), compounds, and other complex expressions (e.g. Inkelas 1989; Booij 1996; Leben and Ahoua 1997; Peperkamp 1997; Vigário 1999, 2003).6 These findings therefore challenged the descriptive and explanatory efficiency of both mapping algorithms, and opened up new directions for research in this area.

3.2 The Weak Layer Hypothesis: Mapping as constraint interaction A number of researchers have sought a solution to the problems mentioned above in the architecture of the Prosodic Hierarchy. In particular, they questioned the necessity of the SLH as a well-formedness principle of the arboreal prosodic structure. The proposed modifications involve the introduction of (a) recursive structures, in which a prosodic constituent of a certain type dominates another constituent of the same type (16a), and (b) non-exhaustively parsed structures, in which a constituent of one type is permitted to skip intermediate levels and dominate a constituent of more than one level lower in the Prosodic Hierarchy (16b). The relaxed version of the SLH is referred to as the Weak Layer Hypothesis (e.g. Booij 1988, 1995, 1996, 1999; Itô and Mester 2003). (16)

Weak layering: Recursion and non-exhaustivity a.

b.

Cn

Cn Cn−1

Cn Cn−1

Cn−1

Cn−1

Cn−2

Cn−2

Cn−2

Under the influence of Optimality Theory (Prince and Smolensky 1993), Selkirk (1995) goes one step further and proposes that the SLH should be decomposed into its primitive components, which take the form of the prosodic domination constraints in (17). (17) Constraints on prosodic domination (Selkirk 1995: 443) (where C n = some prosodic category) a. Layeredness: No C i dominates a C j, j > i. b. Headedness: Any C i must dominate a C i−1. c. Exhaustivity: No C i immediately dominates C j, j < i–1. d. Non-Recursivity: No C i dominates C j, j = i. 6

The asymmetry in the prosodization of weak elements constitutes one of the main arguments against the CG as a prosodic constituent.

Anthi Revithiadou

11

Layeredness and Headedness are argued to be universally inviolable and therefore undominated in the constraint ranking of all languages. The structures in (16) result from the relative ranking of Exhaustivity and Non-Recursivity to the other constraints of the system. A significant role in the construction of prosodic constituents has been played by the alignment family of constraints, which basically undertakes the mapping of morphosyntactic constituents to prosodic structure (McCarthy and Prince 1993a, 1993b): (18)

Alignment constraints a. MW-Constraint (WCon): Align(MW, L/R; PW, L/R) b. PW-Constraint (PCon): Align(PW, L/R; MW, L/R)

These interface constraints are translations of Selkirk’s (1986) parameterized endbased theory of mapping into Optimality Theory (OT) constraint-based terminology. They have a uniform general scheme, which can easily account for cross-categorical mappings, since units from different levels of the Prosodic Hierarchy can coincide with morphosyntactic units. For instance, the constraint in (18a) requires the left or right edge of every MW to coincide with the left or right edge of some PW. Differences in the ranking of the relevant constraints and/or the morphosyntactic structure of an input string are taken to be responsible for the attested variation in the prosodization of weak elements (e.g. function words, prefixes, etc.) cross-linguistically. The interaction of the above constraints yields the following prosodic patterns for the abstract string /x x V/ (where x is a weak element): (19) a. b. c. d.

structures

typology

rankings

[[V x]PW [V]PW]PPh [x x [V]PW]PPh [x x [V]PW]PW [x x V]PW

PW free recursive internal

Exh, NonRec, MCon >> PCon MCon, PWCon >> NonRec >> Exh Exh, MCon >> NonR, PCon Exh, NonRec >> MCon, PWCon

It remains an open question whether all predicted typologies receive empirical support or whether they lead to vast overgeneration of PW patterns. There is, however, ample cross-linguistic evidence in support of the typology of function words in (19). In a cross-dialectal study of Italian clitics, for example, Peperkamp (1997) shows that the patterns in (19b) and (19d) correspond to specific Italian dialects. Similarly, Revithiadou (2008) provides evidence from a cross-dialectal survey of object clitics in Greek that all four prosodic patterns are empirically attested. Moreover, she demonstrates on the basis of diachronic evidence that there is a transition from “looser” types of constructions – e.g. patterns (19a) and (19b) – to nested ones – pattern (19c) – and from there to total integration of the clitic to its host, pattern (19d) (see also chapter 84: clitics).

3.3 Split between two worlds After discussing various approaches as to how a mapping rule assigns a PW make-up to a portion of the morphosyntactic string, I now turn to addressing where and when the mapping takes place. Such questions are of course meaningful only

The Phonological Word

12

within a theory that assumes a division of labor between the components of grammar and a procedural view of the interface.7 Selkirk takes a clear stance on this issue and argues that both word structure and sentence structure are sensitive to the same mechanism, and must therefore be treated alike, i.e. after syntax at the post-lexical component (Selkirk 1984: 415). Any additional tools would simply lead to an unnecessary proliferation of the machinery phonology has at its disposal. Nespor and Vogel (1986), on the other hand, take a more moderate approach. They draw a distinction between two sets of rules: (a) those that refer to phonological domains only and (b) those that make direct reference to morphological information. By doing so, they implicitly adopt a compartmentalized view of phonology. First, there is a section of phonology that may access directly “morphological structure and/or specific morphological elements” (Nespor and Vogel 1986: 18). Its rules are handled by a different mechanism, possibly Lexical Phonology (Kiparsky 1982a, 1982b; Mohanan 1982; chapter 94: lexical phonology and the lexical syndrome). Second, there is another section that accesses the interface indirectly, via the constituents of the Prosodic Hierarchy, and operates strictly on Prosodic Phonology rules proper. The question as to how these blocks of rules are ordered is left open to further research (see also chapter 103: phonological sensitivity to morphological structure). Inkelas (1989) attempts to salvage the indirect nature of the interface alluded to by Nespor and Vogel (1986) with the introduction of a radical move: she eliminates the q and the F from the Prosodic Hierarchy and introduces prosodic constituency below the PW in the Lexicon. The ranks below the PW in (1), which roughly correspond to levels in Lexical Phonology, can now accommodate strings of structure into prosodic units that are smaller than the PW. They basically provide the appropriate domain within which phonological rules can apply without having to directly refer to morphological constituents. Booij and Rubach (1984) and Booij (1988) concur that a lexically constructed PW is necessitated on empirical grounds, e.g. the prefix a- in Italian; see example (11). They further argue that it can offer solutions to bracketing paradoxes, special rules of allomorphy, and so on. For instance, the word [[un[grammatical]A]Aity]N poses a problem, because the prefix un- is stress-neutral (i.e. level 2 in Lexical Phonology terminology), and hence should be added after the stress-shifting suffix -ity (i.e. level 1). This implies, however, that the prefix un- should attach to grammaticality and not to grammatical, which is also problematic, because un- attaches only to adjectival bases. At the phonological level, the suffix -ity shifts the stress of the base ungrammatical, which nevertheless contains a stress-neutral prefix. This is problematic too, because only stress-neutral suffixes can be added after that prefix (i.e. a level 1 affix may not follow a level 2 affix). This bracketing paradox receives a straightforward explanation once a prefix is assigned an independent PW status: [un]PW [grammaticality]PW. Each PW constitutes an independent domain of stress assignment. As a result, the shifting property of the suffix can never interfere with the prefix, simply because the latter belongs to a different PW. Elements that impose such prosodic restrictions are often called non-cohering.

7

In non-serial theories of phonology such as OT, where mapping is the job of interface constraints that operate simultaneously on both modules, such questions are almost redundant, unless one embraces the distinction between a lexical and a post-lexical component in phonology.

13

Anthi Revithiadou

If this property is not derivable from the size and the segmental shape of the morpheme in question, it must be specified in the subcategorization frame of the relevant lexical item (Booij and Lieber 1993). Clitics may have similar prosodic selection requirements. For instance, ie ‘he’ in Dutch subcategorizes for a right PW boundary: ]PW __ (Booij and Lieber 1993: 39). However, there is a major drawback in assigning PW status to morphological elements in the lexicon: it reintroduces via the back door the problem of the PW boundary as a diacritic, thus posing a serious threat to the very nature of the Prosodic Hierarchy. Another major consequence of moving the construction of PWs into the lexicon is that the mapping now applies twice: once in the lexicon and again after syntax, i.e. post-lexically. This suggests that there are two types of PWs, lexical and post-lexical, and, consequently, the interface between phonology and morphology is of a different nature than that between phonology and syntax. If we admit this, we need to jettison the idea of a unified phonology that Prosodic Phonology – especially through the work of Selkirk – originally pursued. Yet another problem with implementing prosodic rules in the lexicon is that it blurs the division between Prosodic Phonology and Lexical Phonology (see chapter 94: lexical phonology and the lexical syndrome). The former theory is designed to deal with the interface of phonology and morphosyntax postlexically; the latter relies on a procedural mode of interaction of phonology and morphology in order to account for “idiosyncratic,” morphology-dependent phonological behavior. If the two models meet in the lexicon, then one may wonder which type of phenomenon each model targets and, moreover, on which grounds the division of labor is decided. Do their rules apply simultaneously or in an ordered fashion? Even though the model of Lexical Phonology has gradually lost headway, the distinction between lexical and post-lexical PW has endured over time (Booij and Lieber 1993; see also Booij 1999 for Dutch; Nespor 1990, Vogel 1991, Peperkamp 1997 for Italian; amongst others). The lasting nature of this division emphasizes all the more the truly interface nature of PW and further establishes it as the prosodic constituent that intersects the interface of phonology with morphology and syntax. However, more work needs to be done in order to acquire a better understanding of the factors that dictate the formation of PWs, and the workspace where this is done.

4

Diagnostics for the Phonological Word as a prosodic constituent

In the previous sections, we examined ways and places within which PWs are built. Here, the focus will be on what type of phonological evidence has been put forward to substantiate the PW as an integral part of the Prosodic Hierarchy. Nespor and Vogel (1986: 58ff.) identify certain diagnostics for establishing the concept of constituent in phonology. Phonological rules must refer to a string of elements in their formulation and have this string as their domain of application. The same piece of structure may also serve as the domain of phonotactic restrictions and stress prominence relations. Vogel (2009) aptly remarks that in establishing a prosodic constituent a number of phenomena must cluster together in using a particular string of elements as their domain. In the almost thirty years of research

The Phonological Word

1217

on Prosodic Phonology, various diagnostic criteria have been put forward to identify the domain of the PW. In the ensuing paragraphs, we will closely examine the most important ones, at the same time drawing attention to the existence of conflicting evidence and its repercussions for the status of PW as part of Universal Grammar.

4.1 Segmental rules In previous sections, we have seen that it has been proposed that both vowel harmony in Hungarian and intervocalic s-voicing in Italian apply within the domain of the PW. The Prosodic Phonology literature is replete with examples of segmental rules used as diagnostics for the definition of the PW domain in a variety of languages (see e.g. Hannahs 1991, 1995a, 1995b; Kang 1992; Peperkamp 1997; Kleinhenz 1998; Hall and Kleinhenz 1999; Vigário 1999, 2003). To illustrate with an example from French, Hannahs (1995a, 1995b) argues that glide formation and vowel nasalization can be safely used as diagnostics of the PW domain. For instance, underlying high vowels such as /i y/ semi-vocalize intervocalically to [j] and [8], respectively. Crucially, this rule applies only in stem + suffix strings (20a) but never when the vowel in question occurs at the end of a prefix (20b) or the first element of a compound (20c) (Hannahs 1995b: 1131). The blocking of the rule in the latter environments is taken as evidence that the stem plus suffix string constitutes a different PW from the remainder of the word. This is a typical example of non-isomorphism between prosodic and morphological structure. (20)

b.

colonie colonial anti-alcoolique

c.

tissue-éponge

a.

[kDlDni] [kDlDnjal] [ftialkDlik] *[ftjalkDlik] [tisyep6Ú] *[tis8ep6Ú]

‘colony’ ‘colonial’ ‘anti-alcohol’ ‘terry-cloth’

Similarly, Vigário (2003) employs a great variety of both lexical and post-lexical rules to substantiate the existence of the PW in European Portuguese. One such rule is vowel reduction, which affects all stressless vowels of a word except for the word-initial one, e.g. promo’ver → promu’ver ‘(to) promote’. A more careful examination of the data nevertheless reveals that the vowel is protected even when it is not in absolute word-initial position (21b). For Vigário (1999: 272–273), this constitutes evidence for the presence of a PW boundary at the left of the base ocu’par. The exact prosodic structure of such prefixed formations will be discussed in §5. (21)

a. ocu’par b. desocu’par

‘(to) occupy’ ‘(to) not occupy’

4.2 Stress and tone Stress is prototypically considered to be an infallible diagnostic for the PW domain: a PW must bear only one primary word stress (see chapter 41: the representation

Anthi Revithiadou

1218

of word stress). In Greek, for example, the phonological wordhood of lexical words and stem–word compounds is signaled by the presence of one primary stress: (22)

a.

/an.rop-os/ man-nom.sg b. /pali-o-ma:az-o/ bad-linkV-shop-nom.sg

[’an.ropos]

‘man’

[paljo’ma:azo]

‘lousy shop’

Dixon (2002) reports that in Jarawara, a Madi dialect of the Arawá family of southern Amazonian, primary stress is on the penultimate syllable, and rhythmic stress occurs on every other syllable to the left of the main-stressed foot (23a). Curiously, compounds and reduplicative formations display two stress peaks, which, crucially, are not on the second and the fourth mora from the end of the complex word but rather on the penultimate syllable of each of their constituents; (23b) and (23c). The attested stress patterns, therefore, suggest that the relevant formations form two PWs, e.g. [’bani]PW [ka’sako]PW (Dixon 2002: 128). (23)

a. b. c.

to-’wa-ka-’tima-’maro away-applic-in.motion-upstream-FPef ’bani-ka’sako / *ba’nika’sako ’kete-ke’tebe / *ke’teke’tebe < ke’tebe ‘run, follow’

‘took upstream’ ‘wild dog species’ ‘run a lot’

Tonal information may also serve as a criterion for delimiting the PW boundaries (see chapter 42: pitch accent systems). Leben and Ahoua (1997) provide an instructive example from Baule, a Bia language of the central Tano group. The language has both a High and a Low tone. Interestingly, a sequence of High tones shows an upstepping pattern, which involves a gradual rise in pitch from a level phonetically close to Low to a Super-High level. The domain of the upsweep, as this rule is commonly referred to, is the PW. The examples in (24) demonstrate that the rule operates in monomorphemic words (24a) and noun–noun compounds (24b), but it is blocked in possessor–possessed (24c) and subject–predicate phrases (24d) (Leben and Ahoua 1997: 117–118). The difference between these two sets of formations is attributed to a difference in their respective prosodic constituencies. The former constitute a single PW, e.g. [bólí nFnnFn]PW (24b), whereas the latter are organized into two separate PWs, e.g. [bólí]PW [mángún]PW (24c). The proposed prosodic structures are enhanced with additional evidence from segmental processes (Leben and Ahoua 1997: 122ff.). (24)

a.

Ákísí [– – – ] b. b ó l í n F n n F n – ] [ – – – c. b ó l í m á n g ú n – ] [ – – ][ – d. Á y á b ó l í [– – ][ – –]

‘Akisi’ ‘goat milk’ ‘goat’s friend’ ‘Aya is a goat’

The Phonological Word

1219

4.3 Phonotactics and syllabification The PW has been considered an important domain for phonotactic constraints, together with the syllable and the foot. In Dutch, for instance, the schwa as a featureless vowel cannot be preceded and followed by the same consonant; /a(lHl/, for example, is an impossible morpheme. Across PW boundaries, however, as in compounds, such sequences are tolerated, e.g. formu/ lH-l/ijst ‘formula list’ suggesting that the relevant Obligatory Contour Principle restriction is confined by the PW (Booij 1999: 56–57). The boundaries of PWs can also be subject to certain restrictions. In German, a lax vowel constraint (i.e. *[–tense, –low, –long]) limits the distributional freedom of vowels that occur at the right edge of the PW (Hall 1999). In European Portuguese (Vigário 2003), a PW cannot begin with a flap [7]. Furthermore, Sanskrit shrinks its consonant clusters at the right edge of the PW (Selkirk 1980), sharply contrasting in this respect with Dutch, which tolerates the occurrence of extra consonants and heavier rhymes in the same position (Booij 1995, 1999). In several, but not all, languages syllabification is taken to be a reliable diagnostic for establishing the PW domain (e.g. Kang 1992 for Korean; Booij 1995, 1999 for Dutch; Hall 1999 for German; Peperkamp 1997 for Italian; Raffelsiefen 1999 for English). Kang (1992) argues that in Korean, aspirated consonants, which normally neutralize to stops in coda position, syllabify together with derivational suffixes (25a). This option, however, is not available when the relevant segment is in between the two stems of a compound (25b), suggesting that resyllabification is blocked by an intervening PW-boundary. (25)

a. [[kiph]V i]N [ki.ph i] *[kip.i] deep-nml b. [[[aph]N [aph]N i]Adv [ap.a.ph i] *[a.pha.ph i] front front advs

[kiph i]PW ‘depth’ [ap]PW [aph i]PW ‘to each person’

4.4 Other diagnostics Given that PWs dominate feet and feet are usually branching constituents, PWs should be minimally either bimoraic or disyllabic, depending on whether they are composed of moraic or syllabic feet. Thus, foot binarity together with Headedness (17b) derive the notion minimal word (McCarthy and Prince 1986, 1993a, 1993b, 1998). Prosodic minimality is also enforced in morphological processes such as nickname truncations and various types of clippings (see McCarthy and Prince 1998 and references cited therein). In Greek, for instance, name truncations are minimal words (Topintzi 2003): (26)

a. Ni’kolaos b. Poli’kseni c. Aspa’sia

’nikos ’poli, ’kseni ’aspa

An often cited example in the literature for the identification of the PW is reduction under coordination (e.g. Booij 1985b for Dutch; Kleinhenz 1998 for German). For instance, Booij (1985b) demonstrates, on the basis of the examples in (27) that parts of complex words – i.e. compounds (27a) and complex formations with the so-called non-cohering affixes (27b) – can be cut off in coordinate constructions:

Anthi Revithiadou

1220

natuurkunde en scheikunde ‘nature knowledge and analysis knowledge’ b. zichtbaar en tastbaar ‘visible and tangible’

(27) a.

The condition is that this rule applies to trim off only parts of grammatical words which constitute separate PWs. That is, it can never eliminate an affix that is included in the PW of the stem it combines with: *rodig of groenig ‘reddish or greenish’. All that the rule “sees” is phonological structure, i.e. PW boundaries, and not the internal morphonological structure of words. Language games constitute a resourceful supplier of evidence for the PW. Henderson (2002) reports on a play language called “Rabbit Talk” in Arrernte, a central Australian language. In Rabbit Talk, the first syllable of the word is removed from its original position, and transposed to the end of the PW (28a). Monosyllabic words skip the transposition rule, and the syllable /ej/ is prefixed instead (28b). For the purposes of this discussion, I follow Henderson in assuming that the syllable structure is VC(C). Rabbit Talk suggests that disyllabic case clitics constitute a separate PW. In (28c), the two elements of the construction count as separate domains: the first element is treated as a monosyllabic word and hence receives the /ej/ prefix. The clitic /-akerte/, on the other hand, constitutes a PW by itself and, as such, it is subjected to the transposition rule. (28) a.

ampangk+eme moan+pres b. artwe man c. irlpe-akerte ear-comit

ordinary speech

Rabbit Talk

/amp.angk.em/

/angk.em.amp/

/artw/

/ej.artw/ /ej.irlp.ert.ak/

4.5 Conflicting evidence and methodological issues in defining the Phonological Word domain Within the Prosodic Phonology framework, the methodology applied in establishing prosodic constituents in general and the PW in particular involves four basic steps: first, discovering the domain within which a given phonological process applies or is blocked; second, establishing that more rules have the same domain as their locus of application; third, matching the string of elements with a particular unit of the Prosodic Hierarchy; and fourth, giving the particulars of the mapping mechanism that determines which chunk of morphosyntactic structure is organized into that particular unit. A good theory should have a mapping mechanism with a certain degree of descriptive power. In practice, however, the weight of investigation falls primarily on substantiating a specific prosodic constituent on the basis of the phonological rules alone, with much less attention paid to the specifics of the morphosyntax. This proves to be quite problematic when the phonological component sends conflicting signals. Cetnarowska (2000) reports that the distribution of secondary (rhythmic) stress in proclitic + host strings in Polish presumes the existence of a foot that straddles

The Phonological Word

1221

a PW boundary which is established on the basis of segmental evidence (Rubach and Booij 1990). In this case, the stress facts of the language conflict head on with the evidence provided by segmental rules. Likewise, Raffelsiefen (1999) argues that assimilation rules in English do not constitute a reliable criterion for establishing PW structure in English. This lack of agreement among different types of diagnostic criteria has led some researchers to propose units smaller or even larger than the PW in order to accommodate the “problematic” or “misbehaving” data. For instance, Rice (1993) acknowledges the existence of a small word in Slave, which is subject to certain rhymal constraints, as opposed to the PW, which is the domain of foot-based processes and various segmental rules. What is clear from the discussion so far is that the proponents of Prosodic Phonology have been confronted with challenging issues, and have opted for different solutions to deal with them. The first solution is to acknowledge that different types of criteria can serve as diagnostics of phonological wordhood in different languages and that each language decides on how to prioritize these criteria or even discard some of them on the basis of some sort of “scale” of relative importance. The question that naturally arises in this case is the way and the context in which this decision is made. The second solution resorts to the proliferation of prosodic domains by inserting pieces of structure with analogous prosodic behavior to distinct slots below the PW. The problem in this case is whether there is an upper bound to the proliferation of domains and, more importantly, whether these domains are universal. There is a third approach to (partially) tackling the problem described above. Recall from §3.2 that the Weak Layering Hypothesis offers the option of constructing a recursive PW. Such an extended version of the PW has been employed to capture non-isomorphic aspects of the interface from where some (but not all) of the challenging data stem, as we will see in the following section.

5

Extending the Phonological Word

The relaxation of the SLH allowed the emergence of recursive structures. It is doubtful whether constituents lower than the PW can be recursive (Itô and Mester 2009; Kabak and Revithiadou 2009). However, recursion is commonly assumed for higher levels of the Prosodic Hierarchy, such as the PW and the PPh. Selkirk (1995), in a study on the typology of clitics, motivates PW recursion as resulting from morphosyntactic recursion. In (29), phonology “mimics” the nested structure of the morphosyntactic representation by assigning PW boundaries to the edges of constituent X (see also Kabak and Revithiadou 2009 for more examples and argumentation). X

(29)

PW

X [[a

PW

→ b]

c]

[[a

b]

c]

A recursive PW (PW-Rec) comes in two shapes: as a result of adjunction (30a) and as a prosodic compound (30b). The latter is typically associated with word–word

Anthi Revithiadou

1222

compounds and sequences of function words, the first of which may be inherently stressed or stressed due to binarity. (30)

a.

adjunction

b.

prosodic compound PW

PW

[a fnc prf stem

b] word word word

PW

PW

PW c]]

[[a b] ’fnc fnc word

[c

d]] word word

In the Prosodic Phonology literature, PW recursion does not always have a morphosyntactic motivation. It is often assumed to arise either as a parsing choice of a particular language or as the result of the subcategorization requirements of individual elements (e.g. Booij and Lieber 1993 on certain Dutch clitics and prefixes). Evidence in support of recursion is primarily drawn from phonology and, specifically, the fashion in which phonological rules apply within the lower and upper PW. The main motivation for the existence of a PW-Rec is the blocking8 or the optional application of a PW-level phonological process (e.g. Booij 1995, 1996; Peperkamp 1997; Vigário 1999, 2003). In Greek, for example, proclitics and certain prefixes are subject to the same segmental rules that typically apply within the PW (31a), such as s-voicing before a nasal or a voiced fricative: (31)

a.

/:eras-’menos/ old-part b. /mas ’Ïinis/ clt.1pl.gen give-2sg c. /Ïis-’mirii/ twice-ten thousand-pl

:era’zmenos ‘aged’ maz.’Ïinis ‘you give us’ Ïiz.’mirii ‘twenty thousand ones’

The fact that such sub-minimal elements are part of the extended PW and not of the PW is evidenced by the blocking of resyllabification, e.g. :era.’zme.nos vs. maz.’Ïi.nis, which indicates the existence of a boundary at the left edge of the word. The PW boundary prevents the proclitic/prefix from fully incorporating into the PW of its host/base, suggesting that the sub-minimal element adjoins recursively to the PW of the word: [cl/prf [X0]PW]PW. The recursive PW has proved to be extremely useful in accounting for attested asymmetries in the degree of cohesion that clitics, affixes, and other dependent elements show in relation to their host. For instance, in many languages enclitics incorporate to their host, whereas proclitics adjoin recursively to it (e.g. Booij 1996 for Dutch; Peperkamp 1997 for Italian; Vigário 2003 for Portuguese; Revithiadou 8

The blocking of rule application is considered as an immediate result of adjunction. Elements of the outer layer of the PW inherit the properties of the mother constituent, but, because they are not dominated by all of its segments (Chomsky 1986), they can escape (some of) the rules (cf. Booij 1996).

The Phonological Word

1223

and Spyropoulos 2008 for Greek). Similarly, languages may choose to incorporate suffixes but not prefixes into the PW of their base. Despite its broad use, however, the recursive PW has been called into question as a legitimate prosodic constituent mainly because it is inherently incompatible with the non-recursive nature of phonology (see e.g. Neeleman and Koot 2006; Scheer 2008; Vogel 2009).

6

Conclusions

It is clear from the discussion so far that the PW, as a constituent that lies at the heart of the interface of phonology with morphology and syntax, cannot fully escape the problems naturally associated with the complex nature of the mapping. Four key aspects of PW have been the focus of attention in this chapter: (a) the mechanism that maps a string of elements into a PW and the principles that govern it, (b) the distinction between lexical and post-lexical PW, which essentially reflects the split nature of the mapping itself, (c) the type of criteria used to motivate the PW domain, and (d) the solutions proposed to account for conflicting evidence. The discussion has revealed that all of these issues are surrounded by a number of sometimes thorny problems, and has left numerous questions open for further research. For instance, it is still undecided whether the distinction between lexical and post-lexical PW can be dispensed with or not, or whether certain types of processes are universally associated with the PW. On the other hand, PW, as a theoretical construct, has been shown to play an important role in language acquisition (e.g. Fikkert 1994; Gerken 1994) and in language change (e.g. Lahiri 2000). Furthermore, psycholinguistic research has established a strong relation between the PW and units of production and perception (Wheeldon and Lahiri 1997, 2002), thus lending further support to the PW as a functionally useful constituent of the Prosodic Hierarchy. Future research will hopefully shed light on less clear aspects of the properties of the PW and the mode in which it is constructed and hence advance our understanding of this pivotal constituent of the Prosodic Hierarchy and, by extension, of the prosodic organization of grammatical elements.

ACKNOWLEDGMENTS I wish to thank two anonymous reviewers, as well as the editors of the Companion to Phonology, especially Marc van Oostendorp and Keren Rice, for their insightful comments. All errors are of course my own.

REFERENCES Andersen, Henning (ed.) 1986. Sandhi phenomena in the languages of Europe. Berlin: Mouton de Gruyter. Booij, Geert. 1983. Principles and parameters in prosodic phonology. Linguistics 21. 249–280. Booij, Geert. 1985a. The interaction of phonology and morphology in prosodic phonology. In Edmund Gussmann (ed.) Phono-morphology: Studies in the interaction of phonology and morphology, 23 –34. Lublin: Katolicki Uniwersytet Lubelski.

1224

Anthi Revithiadou


52 Ternary Rhythm

Curt Rice

1 The facts

Most languages with iterative stress patterns show a simple rhythmic alternation between stressed and unstressed syllables (chapter 39: stress: phonotactic and phonetic evidence; chapter 41: the representation of word stress). But in a few cases, stress appears not on every second syllable, but rather on every third one. Patterns of this nature reveal the phenomenon of ternary rhythm. Ternary rhythm is most easily seen in a language with a stress system that ignores the internal structure of syllables, i.e. a quantity-insensitive system (see chapter 57: quantity-sensitivity). Cayuvava, now extinct, but formerly spoken in parts of Bolivia, is a language well documented in the work of Key (e.g. Key 1961, 1967). It is classified as an isolate, with no established genetic relationship to other languages. Key’s fieldwork documents ternary rhythm in Cayuvava and no relevant syllable quantity. Stress in this language appears on every third syllable counting from the right edge of the word. To see the pattern schematically, consider the representations in (1). Each number represents a syllable: “0” represents a syllable with no stress, “1” represents a syllable with primary stress, and “2” represents secondary stress. The pattern is claimed to emanate from the right edge of the word and the representations here are therefore right-justified. The pattern clearly emerges from this schematic representation.

(1) Ternary alternation patterns of Cayuvava
a. 10
b. 100
c. 0100
d. 00100
e. 200100
f. 0200100
g. 00200100
h. 200200100
i. 0200200100



The transcribed data from the literature on Cayuvava flesh out the schematic patterns. We can see forms from Key’s work in (2), which correspond to the patterns already sketched in (1).

(2) Cayuvava
a. ˈda.pa ‘canoe’
b. ˈto.mo.ho ‘small water container’
c. a.ˈri.po.ro ‘he already turned around’
d. a.ri.ˈpi.ri.to ‘already planted’
e. ˌa.ri.hi.ˈhi.be.e ‘I have already put the top on’
f. ma.ˌra.ha.ha.ˈe.i.ki ‘their blankets’
g. i.ki.ˌta.pa.ra.ˈre.pe.ha ‘the water is clean’
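The regularity in (1) and (2) can also be stated procedurally: stress every third syllable counting in from the right edge, never the final syllable itself, and realize the rightmost stress as primary. The short Python sketch below is my own illustration of that statement, not anything drawn from Key’s description; the function name and the digit notation simply follow (1).

def cayuvava_pattern(n):
    """Schematic stress string for a word of n light syllables.

    0 = unstressed, 1 = primary stress, 2 = secondary stress.
    Stress falls on every third syllable counting from the right
    edge; the rightmost stress is primary. Disyllables receive
    initial primary stress, matching (1a).
    """
    pattern = [0] * n
    if n == 2:                        # minimal word: '10'
        pattern[0] = 1
    else:
        # r counts syllables in from the right edge: r = 3, 6, 9, ...
        for r in range(3, n + 1, 3):
            pattern[n - r] = 1 if r == 3 else 2
    return "".join(map(str, pattern))

# Reproduces (1a)-(1i):
assert [cayuvava_pattern(n) for n in range(2, 11)] == [
    "10", "100", "0100", "00100", "200100", "0200100",
    "00200100", "200200100", "0200200100"]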

To see a more complex instance of ternary rhythm, we turn to Tripura Bangla. Das (2001) describes Tripura Bangla as a dialect of Bangla, resulting from a complicated sociolinguistic situation in the small Indian state of Tripura, where it is a commonly used lingua franca. One complication in the pattern of Tripura Bangla when compared with Cayuvava is the relevance of syllable structure for stress assignment. Before illustrating this, we can discern the default pattern through a consideration of words consisting only of light syllables. In such strings, we find main stress on the initial syllable and secondary stress emanating rightward in a ternary rhythm. However, a final light syllable cannot bear stress. When the pattern would place stress on a final syllable – e.g. in strings of four or seven syllables – that stress is not realized. This means, for example, that a word consisting of exactly four light syllables will have only one stress, namely the main stress on the word-initial syllable.

(3) Tripura Bangla default pattern
a. ˈra.za ‘king’
b. ˈgo.ra.li ‘ankle’
c. ˈne.ta ‘leader’
d. ˈboi.ra.gi ‘mendicant’
e. ˈbe.na.ro.œi ‘Benaras silk’
f. ˈbi.ße.sD.na ‘consideration’
g. ˈœD.ma.lD.ˌsD.na ‘criticism’
h. ˈo.nu.kD.ˌro.ni.jD ‘imitable’
i. ˈD.no.nu.ˌda.ßo.ni.jD ‘unintelligible’
j. ˈD.no.nu.ˌkD.ro.ni.ˌjD.ta ‘inimitability’

These patterns can be perturbed by closed syllables. Closed syllables can under certain circumstances tolerate stress in word-final position, and they can also draw stress away from a word-initial open syllable. A simple generalization is that a stressed light syllable cannot be immediately followed by a closed syllable. When this would happen, the closed syllable bears stress instead. Further complications assign stress to the third syllable when it is heavy and to a word-final closed syllable, unless immediately preceded by a stressed syllable. Providing analyses at this level of detail is not the aspiration here, but both Das (2001) and Houghton (2006) discuss these patterns in detail. The data in (4) have closed syllables in various positions. In positions where we expect stress anyway, the patterns are as in (3). In other cases, the heavy syllable interrupts the default pattern.

(4) Tripura Bangla quantity effects
a. ˈmal.œa ‘big metal bowl’
b. ˈza.til ‘earthen pot’
c. ˈœDr.kar ‘government’
d. D.ˈhDI.kar ‘pride’
e. ˈœDI.rDk.ˌkDn ‘reservation’
f. o.ˈbig.ga.ˌzDn ‘intimation’
g. ˈo.nu.ˌbik.kDn ‘microscope’
h. ˈbaz.zaiÍ.Ía.mi ‘adamancy’
i. ˈD.no.ˌnu.kD.ˌrDn ‘non-imitation’
j. ˈzD–.–a.lD.ˌsD.na ‘deliberation’
k. ˈœDI.rDk.ko.ˌni.jD.ta ‘preservability’
l. ˈza.rD.ˌdoœ.œi.kD.ta ‘expertness’

Ternary rhythm is also visible in Chugach Alutiiq, a Yup’ik language spoken by a small number of individuals in Alaska. This language is most extensively documented in a series of important works by Leer (e.g. 1985a, 1985b). These data and Leer’s discussion figure prominently in the literature on ternary rhythm, including many of the theoretical works cited in the present chapter. Quantity is also relevant for the placement of stress in Chugach Alutiiq, but unlike in Tripura Bangla, a syllable must have a long vowel to perturb the pattern. Closed syllables with short vowels – except when in word-initial position – do not attract stress. Assuming that the default stress pattern is revealed in strings with no relevant quantity distinctions, stress in Chugach appears on the second syllable and then every third syllable thereafter. A word in Chugach Alutiiq with five or six light syllables will have stress on the second and fifth. A word with four, however, will have stress on the second and fourth. Syllables with long vowels always attract stress.

(5) Chugach Alutiiq
a. mu.ˈlu.kan ‘if she takes a long time’
b. a.ˈku.ta.ˈmek ‘akutaq (a food) (abl sg)’
c. ta.ˈqa.ma.lu.ˈni ‘apparently getting done’
d. a.ˈku.tar.tu.ˈnir.tuq ‘he stopped eating akutaq’
e. ma.ˈIar.su.qu.ˈta.qu.ˈni ‘if he (refl) is going to hunt a porpoise’
f. ˈtaa.ˈtaa ‘her father’
g. ˈtaa.ta.ˈqa ‘my father’
h. ˈnaa.ma.ci.ˈquq ‘it will suffice’
i. ˈnaa.qu.ma.ˈlu.ku ‘apparently reading it’
j. ˈnaa.ma.ˈci.ˈqua ‘I will suffice’
k. mu.ˈu.ˈkuut ‘if you take a long time’
l. u.ˈlu.te.ku.ˈta.ˈraa ‘he’s going to watch her’

The three cases presented above are the clearest examples of ternary rhythm that have been uncovered to this point. The languages include some very long words, and even in the quantity-sensitive languages, there are words consisting of long strings of light syllables. Leer’s transcriptions of those strings indicate stressed syllables that are separated by two unstressed syllables. This is the empirical basis for the claim that ternary rhythm is a real phenomenon and that a metrical theory of stress assignment must have formal tools that can generate such patterns. While the clearest cases are presented above, there are other languages which have been analyzed as having ternary rhythm, at least in some subset of the data. Most familiar among these are Ho-Chunk, Sentani, and Munster Irish. Having established the basis for the claim that ternary rhythm is an empirical fact, we turn now to metrical theory and the major strategies that the literature offers for the analysis of these data.

2 Theory and analysis

The preceding section has established that a plausible theory of metrical structure must offer a strategy for modeling ternary rhythm. We turn now to a brief review of the emergence of this issue in the literature and the general tendencies that can be identified. Hints about the treatment of ternary rhythm can be found very early in the development of a generative theory of stress assignment. As chapter 40: the foot discusses in detail, early work in generative phonology treated stress as the realization of a phonological feature [stress]. In this way, stress was analyzed with tools parallel to those used in the analysis of place of articulation – e.g. [coronal] or [dorsal] – or manner of articulation – e.g. [voice] or [continuant]; cf. Chomsky and Halle (1968) (see chapter 17: distinctive features). A breakthrough in the study of stress systems came with Liberman’s (1975) proposal that stress should be characterized not as a feature with absolute values, but rather as a relation in which two elements differ in their relative prominence. Along with this proposal came hierarchical representations and the introduction of the metrical foot into the generative literature, further developed in Liberman and Prince (1977). The foot naturally invited a more extensive theory of prosodic structure, incorporating segments into syllables, syllables into feet, and so on up the prosodic tree to the phrase or utterance; cf. Nespor and Vogel (2008) (see also chapter 40: the foot; chapter 51: the phonological word; chapter 84: clitics; chapter 50: tonal alignment). This is the context in which any proposed modifications of metrical theory find themselves today. The first extensive typological work on stress systems is found in Hayes (1980). Hayes studies the stress systems of many languages, and identifies a number of parameters that can be used to characterize the variation shown in these languages. Parameters specify points of variation, such as the direction of foot construction, sensitivity to syllable-internal quantitative structure, trochaic or iambic headedness of the feet, whether feet are binary or unbounded, the edge of the word that hosts main stress, and whether or not peripheral material can be excluded from the initial parse through extrametricality. And, indeed, it is precisely the discussion leading up to the proposal of extrametricality that includes the earliest considerations of ternary rhythm. Before turning to the treatment of iterative ternary rhythm in metrical theory and Optimality Theory (OT), the relevance of extrametricality and its competitor are discussed. For a more thorough overview of metrical theory, see van der Hulst (1999) or Hammond (1995).

3 Extrametricality vs. ternary feet

Extrametricality as a theoretical tool arose in response to apparent ternary rhythm at the edges of words. The stress pattern of English nouns offers a relevant illustration. In sufficiently long words, we can see that English displays alternating, binary stress assignment, in words such as Apalachicola, Minnesota, candelabra. But when we examine the right edges of words more closely, we quickly find that stress is sometimes found not on one of the final two syllables, but rather on the antepenultimate syllable, as in America, cinema, analysis. In this way, we identify a fundamentally binary system that has a ternary component, namely a three-syllable window at the right edge of the word. A model that only constructs binary feet over an entire word would not be able to generate this pattern. Specifically, the construction of feet from right to left in English nouns would always result in penultimate stress. How can a binary foot “reach in” far enough to position primary stress on the antepenult? To model antepenultimate stress, two possible enhancements of the theory were entertained early on. One of these is extrametricality. Extrametricality is a theoretical tool that does not explicitly entail enhancement of the inventory of feet. Instead, it provides a particular strategy for foot construction, or parsing a string of syllables. In particular, extrametricality excludes a peripheral syllable from the string to be parsed into feet. In the case of English nouns, exclusion of the final syllable, followed by construction of a binary left-headed foot, will place stress on the antepenultimate syllable. Extrametricality is also illustrated in chapter 43: extrametricality and non-finality. As we will see below, some later work on iterative ternary rhythm relativizes the peripherality requirement, such that syllables can be excluded from the string not only when they are word-peripheral but also, for example, when they are foot-peripheral. A conceptually different approach from extrametricality would be to enhance the model such that it also includes ternary feet. Data of the type described for English nouns would then be modeled by building a ternary foot at the edge, followed by the construction of binary feet iterating leftward. Since the stress that is found in the three-syllable window is the primary stress, this amounts to a proposal that primary stress can be modeled through the use of one kind of foot while secondary stress requires another. Such proposals can be found for other points of parametric variation for foot construction, as well. For example, primary stress may require the use of a quantity-sensitive foot, while iterative secondary stress seems to be quantity-insensitive (van der Hulst 1984, 1999). One strategy for modeling an edgemost ternary domain – extrametricality – enhances the parsing strategies available in the theory, while the other strategy – a ternary foot – enhances the inventory of feet available in the theory.
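The two-step recipe just described – exclude the final syllable, then build one binary left-headed foot at the right edge of what remains – can be made concrete in a few lines of Python. This is a toy illustration under the simplifying assumption that all syllables are light; real English stress assignment is also quantity-sensitive, which is ignored here.

def antepenultimate_stress(syllables):
    """Locate main stress by (i) marking the final syllable
    extrametrical and (ii) building one binary left-headed
    (trochaic) foot at the right edge of the remaining string;
    the foot's head carries the stress."""
    parsable = syllables[:-1]          # final syllable is extrametrical
    head = max(len(parsable) - 2, 0)   # left head of the rightmost trochee
    return syllables[head]

assert antepenultimate_stress(["A", "me", "ri", "ca"]) == "me"
assert antepenultimate_stress(["ci", "ne", "ma"]) == "ci"
assert antepenultimate_stress(["a", "na", "ly", "sis"]) == "na"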

4 Modeling iterative ternary rhythm

The parameters of metrical theory specify the nature of feet and control their construction across words in languages. The feet that are constructed are constituents that create a domain for the assignment of relative prominence. Prince (1983) offers an alternative approach without internal constituency, representing relative prominence instead only with a grid, and reconstruing some of Hayes’s parameters such that their effects can be replicated without binary constituents (see also chapter 41: the representation of word stress). The debate about constituency includes argumentation based on sensitivity of non-stress phenomena to feet, as reviewed in Kenstowicz (1993). This debate is present in much of the subsequent literature, finding one of its most extensive and significant considerations in Halle and Vergnaud (1987). Halle and Vergnaud’s constituentized grid representation integrates grids and feet. Grids are built but the gridmarks are grouped and these groupings represent constituents. The construction process implements parameter settings here, too, also in pursuit of a typology of stress systems. And it is here, in Halle and Vergnaud’s opus, that we find the first discussion of iterative ternary stress presented in a major work on stress system typology. Halle and Vergnaud of course draw on papers and presentations regarding ternarity that were floating about in the immediately preceding years, with some issues already nascent in McCarthy (1979). The discussion of iterative ternary rhythm and its implications for the typologies under consideration in the relevant literature was initiated by Levin (1985), which was ultimately published in a significantly modified form as Levin (1988). Levin’s work drew on the data from Cayuvava in (2). Halle and Vergnaud (1987) discuss neither Tripura Bangla nor Chugach Alutiiq. Regarding the latter, Leer’s (1985a, 1985b) careful and important results would soon influence the details of the constituentized grid theory. Leer’s work was picked up in Rice (1988), where an analysis in the spirit of Halle and Vergnaud (1987) is advanced. This, in turn, influenced subsequent revisions of the theory, as presented in Halle (1990). The theory developed by Halle and Vergnaud models ternary rhythm through the construction of ternary feet, extending to the problem of iterative ternary rhythm the spirit of the approach discussed above in the context of word-final three-syllable stress windows. A competing approach also reflects that earlier debate. This competitor maintains a size limit such that feet are maximally binary. Ternary rhythm is achieved with a parsing strategy that leaves occasional syllables unincorporated into feet, extending the basic notion of extrametricality; cf. Hammond (1990) and Hayes (1995). These two general approaches, to be illustrated presently, form the heart of the theoretical debate occasioned by ternary stress patterns. As we will see in the discussion of ternary rhythm and OT below, the debate persists there, too. We turn now to the chronologically first approach, namely an analysis of ternary rhythm using ternary feet.

5 Ternary feet

5.1 Amphibrachs

At first glance, the Cayuvava stress patterns in (1) and (2) suggest an analysis with dactylic feet (strong–weak–weak), built from right to left. If we maintain a parametric strategy for constructing feet, then the independently established presence in the theory of a parameter placing heads at the left or right edge of the foot means that the admission of dactyls to the inventory of derivable feet would imply the introduction of anapests (weak–weak–strong) as well. Allowing a ternary foot with its head at the left edge implies via the relevant parameter the possibility of constructing a ternary foot with its head at the right edge. With Cayuvava as the only known case of iterative ternary rhythm at the time of this theoretical work, generating dactyls would lead to the phenomenon of overgeneration, i.e. being able with the tools of the theory to generate patterns not known to exist. In pursuit of a restrictive theory, Levin (1988) therefore takes a different tack, relaxing metrical theory just enough to allow for exactly one type of ternary foot, instead of two; dactyls and anapests are disallowed, but the theory now permits amphibrachs, i.e. ternary feet with prominence on the middle syllable, employing a strategy described below. When combined with final extrametricality – which can be overridden when necessary to build at least one foot on the (minimal) disyllabic words – the construction of amphibrachs will yield a footing of the schematic patterns in (1) that correctly locates stress, as seen in (6). Parentheses indicate feet and angled brackets mark extrametricality. In longer words, initial lone syllables are left unfooted, by stipulation.

(6) Ternary alternations parsed into amphibrachs
a. (10)
b. (10)〈0〉
c. (010)〈0〉
d. 0(010)〈0〉
e. (20)(010)〈0〉
f. (020)(010)〈0〉
g. 0(020)(010)〈0〉
h. (20)(020)(010)〈0〉
i. (020)(020)(010)〈0〉
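The footing in (6) is mechanical enough to generate automatically: set aside the final syllable, build amphibrachs from right to left, foot a pair of leftover initial syllables as a binary foot, and leave a single leftover syllable unfooted. The sketch below is my own rendering of this Levin/Halle–Vergnaud strategy, not code from those works; ASCII angled brackets stand in for 〈 〉.

def amphibrach_parse(n):
    """Amphibrach footing with final extrametricality for n light
    syllables, reproducing (6). The rightmost foot carries primary
    stress (1); all other foot heads carry secondary stress (2)."""
    if n == 2:                          # extrametricality overridden: (10)
        return "(10)"
    out, body = ["<0>"], n - 1          # final syllable is extrametrical
    first = True                        # rightmost foot bears main stress
    while body >= 3:
        out.append("(010)" if first else "(020)")
        first = False
        body -= 3
    if body == 2:                       # two leftovers form a binary foot
        out.append("(10)" if first else "(20)")
    elif body == 1:                     # a lone initial syllable stays unfooted
        out.append("0")
    return "".join(reversed(out))

# Reproduces (6a)-(6i):
assert [amphibrach_parse(n) for n in range(2, 11)] == [
    "(10)", "(10)<0>", "(010)<0>", "0(010)<0>", "(20)(010)<0>",
    "(020)(010)<0>", "0(020)(010)<0>", "(20)(020)(010)<0>",
    "(020)(020)(010)<0>"]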

Halle and Vergnaud (1987) adopt Levin’s strategy, and also limit Universal Grammar (UG) to this one type of ternary foot. They parameterize the requirement that the head of a constituent be at its edge. When this parameter is set such that the head is not required to be at the edge of a constituent, the only ternary foot that can emerge is an amphibrach; cf. Rice (1988, 1990) for related discussion. The approach developed by Levin and widely discussed in publications by Halle and Vergnaud effectively views iterative ternary rhythm as evidence for expanding the inventory of feet. Constituents may have one head, but as many as two non-heads. For them, there is no hierarchical structure within the foot, so that this approach generates flat ternary feet. Another main thrust of the literature also sees a proposal with ternary feet, but now with internal hierarchical structure. Early proponents of this include Dresher and Lahiri (1991) and Rice (1992), building on Rice (1990). Leer’s (1985b) article offers the leading idea, namely identifying the quantitative equivalence of two light syllables with a single heavy syllable, allowing either of those configurations to be the head of a foot. Taking into account a non-head consisting of a light syllable, a foot might consist of three light syllables, two of which are themselves a subconstituent. Hence, ternary feet become an option. In the foot typology of Hayes (1980), some languages were identified in which the heads of feet must be heavy, a foot type dubbed the obligatory branching foot.


Rice (1992) in particular draws a parallel between Hayes’s obligatory branching feet and the analysis of ternarity under consideration, since the head consisting of one heavy syllable or two light ones could be construed as obligatory branching. That analysis is also relevant to the contrast between (6e) and (6d): in the former case, two word-initial syllables concluding the right-to-left parse are sufficient for a foot, while the single syllable in the latter case is not. The analysis in Rice (1992) suggests that degenerate feet must have a head, and in the case of the ternary feet constructed for Cayuvava, two lights are required to constitute a head, hence the minimal foot (and word) is binary. In the approach with flat ternary feet, it is unclear why a minimum of two syllables is necessary for a degenerate foot. Additional discussion related to this approach can be found in Everett (1988), Hewitt (1992), Rice (1993), Blevins and Harrison (1999), van der Hulst (1999), Rifkin (2003), and other references mentioned below.

5.2 Weak local parsing

The appearance of iterative ternary stress patterns in the literature on metrical phonology triggered, as noted above, a second strategy. Instead of increasing the set of possible feet, this second strategy increased the set of possible parsing strategies. This approach is developed in Hayes (1995), drawing on earlier work by Hammond (1990). In Hayes’s approach, universal grammar allows only three kinds of feet, as in (7) (see also chapter 44: the iambic–trochaic law).

(7) The Hayesian foot typology
a. Syllabic trochee: (x .) over σ σ
b. Moraic trochee: (x .) over L L, or (x) over H
c. Iamb: (. x) over L σ, or (x) over H

No exhaustive parsing of a string with any of these feet will give an iterative ternary pattern. But non-exhaustive parsing can do that. Hayes proposes that UG include a weak local parsing parameter that creates the possibility of leaving an unparsed syllable between each foot. The unparsed syllable can by stipulation only be a light one. Having an unparsed light syllable between each foot yields a ternary pattern using only binary feet, as in (8).

(8) Ternary alternations parsed into non-exhaustive binary feet
a. (10)
b. (10)〈0〉
c. 0(10)〈0〉
d. 00(10)〈0〉
e. (20)0(10)〈0〉
f. 0(20)0(10)〈0〉
g. 00(20)0(10)〈0〉
h. (20)0(20)0(10)〈0〉
i. 0(20)0(20)0(10)〈0〉


For the Cayuvava pattern, trochees are constructed from right to left; whether they are moraic or syllabic trochees is irrelevant, since there is no quantity distinction in this language. The parsing also uses final extrametricality and, of course, weak local parsing. The final syllable in (8c) is extrametrical and the initial syllable is unfooted, because it is too little to be a foot, since no degenerate feet are allowed. In (8d) the influence of weak local parsing is seen; in this form, there is in fact sufficient material at the left edge of the word to form a foot. Since doing so would result in adjacent feet – which is not allowed with weak local parsing – no foot can be formed. Not until we have six syllables, as in (8e), is there sufficient space to build two non-adjacent – i.e. weakly local parsed – feet. A detail beyond the scope of this chapter is that adjacency can be tolerated in the case of adjacent heavy syllables – as in Chugach – suggesting that the requirement to incorporate heavy syllables into feet has priority over the prohibition on adjacent feet under weak local parsing.
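The same procedural restatement is possible here; the sketch below generates the parses in (8) by constructing binary trochees from right to left, skipping one light syllable after each foot, with final extrametricality and no degenerate feet (again with ASCII brackets for 〈 〉). This is my own illustration of the weak local parsing strategy, not Hayes’s code.

def weak_local_parse(n):
    """Right-to-left trochaic parse of n light syllables with final
    extrametricality and weak local parsing, reproducing (8)."""
    if n == 2:                          # minimal word: extrametricality overridden
        return "(10)"
    out, r = ["<0>"], n - 1             # final syllable is extrametrical
    first = True                        # rightmost foot bears main stress
    while r >= 2:
        out.append("(10)" if first else "(20)")
        first = False
        r -= 2
        if r >= 1:                      # weak local parsing: skip one light syllable
            out.append("0")
            r -= 1
    out.append("0" * r)                 # any single leftover stays unfooted
    return "".join(reversed(out))

# Reproduces (8a)-(8i):
assert [weak_local_parse(n) for n in range(2, 11)] == [
    "(10)", "(10)<0>", "0(10)<0>", "00(10)<0>", "(20)0(10)<0>",
    "0(20)0(10)<0>", "00(20)0(10)<0>", "(20)0(20)0(10)<0>",
    "0(20)0(20)0(10)<0>"]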

5.3 Summary

At the level of analysis, we have seen that there are two primary strategies for constructing constituents across strings when the goal is to achieve iterative ternary alternations. One strategy is to expand the inventory of constituents, and here there are also two approaches. In the approach developed by Levin (1988) and Halle and Vergnaud (1987), flat ternary feet are allowed. Any non-head in a foot must be adjacent to its head. This allows exactly one kind of ternary foot, namely an amphibrach, where the head is not found at the edge of the foot, but rather is flanked by two non-heads. The second inventory-expanding approach, as seen primarily in Dresher and Lahiri (1991) and Rice (1992), relaxes the requirement that the head of a foot can span only one syllable. Feet that require heavy heads can draw their material either from one heavy syllable or from two light ones. The alternative to expanding the foot inventory is expanding the strategies available for constructing binary feet, and the primary representative of this approach is Hayes (1995). In this approach, binary feet are constructed in a new way. There are two necessary properties to the weak local parsing of a string: feet must be non-adjacent and they must be minimally non-adjacent. These requirements lead to iterative construction of feet that are separated by one light syllable. There was, as noted, an early debate in the metrical literature regarding the need for constituency; perhaps stress systems can be modeled simply with a theory of prominence as represented with a grid, and feet are superfluous. This debate has not shown itself in the context of ternary rhythm, insofar as the literature lacks a grid-only analysis of iterative ternary rhythm.

6 Ternary rhythm in Optimality Theory

The typological enterprise in generative grammar has enjoyed enhanced prominence in the era of OT (McCarthy and Prince 1993; Prince and Smolensky 1993). One of the core foci in the OT literature is typology (Archangeli and Langendoen 1997; Roca 1997; Kager 1999). Classic OT achieves typological insights by having a universally fixed set of constraints. Variation is modeled with constraint reranking. The factorial typology of constraint rankings defines the range of possible grammars. Stress patterns and metrical theory have played an important role in the construction and exploration of optimality-theoretic approaches to modeling grammar. Indeed, one of the important early discussions of the power of violable constraints was built around the pursuit of a parallelist strategy for achieving the effects of directionality. The insight in this discussion is that minimal violation of a requirement that all feet be at one edge of the word (AllFt-L or AllFt-R), when combined with the force of a requirement that all syllables be parsed into feet (Parse-σ), will yield as optimal a parse identical with serial foot construction from one edge of a string to the other. Less present in the OT literature, however, has been discussion of ternary rhythm. There has been almost no debate about the contrast between analyses using ternary feet and those using non-exhaustive parsing with binary feet. Indeed, Rice (2007) is to the best of my knowledge the only publication in which the issue is even mentioned, although Hyde (2002) also offers relevant perspectives on the nature of ternary parsing. The most prominent discussions of ternary rhythm in OT mimic the weak local parsing approach, as in Ishii (1996) and Elenbaas and Kager (1999). Elenbaas and Kager take an important principled position on methodology. In particular, they articulate and adopt the goal of deriving iterative ternary rhythm with tools that are already necessary to account for other phenomena. This laudable position of theirs contrasts with the too frequent practice in OT analyses of positing new constraints to give new analyses. That practice has substantial implications, in light of the methodology of the factorial typology noted above; introducing a new constraint introduces many new grammars, and the restricted typological enterprise as construed in classic OT is substantially challenged with every new constraint that is introduced. The analysis of ternary rhythm in OT based on underparsing has two crucial components. First, Parse-σ – which requires that syllables be incorporated into feet – must be relatively low-ranked. This will be important in allowing optimization of an incomplete parse along the lines seen in (8). But simply ranking Parse below a requirement that all feet align with the right edge of the word will yield a parse with only one foot. Note that AllFt-R awards a violation for every syllable intervening between the right edge of a foot and the right edge of the word, for each foot.

(9) Underparsing with low-ranked Parse

    σσσσσσ                  AllFt-R    Parse
    a.   (ˈσσ)(ˈσσ)(ˈσσ)     *!*****
    b.   σ(ˈσσ)σ(ˈσσ)        *!**       **
  ☞ c.   σσσσ(ˈσσ)                      ****

This brings us to the second crucial component. To counter the pressure of AllFt-R, parsing of at least some of the other syllables must be rewarded. The solution offered builds on well-established insights that lapses in long parses should be avoided; cf. Selkirk (1984). There are various *Lapse constraints in the literature (e.g. Kager 1994; Green 1995; Gordon 2002), where the leading idea is that a string of more than two unstressed syllables is disfavored. With *Lapse ranked above AllFt-R, the optimal parse will show only as much parsing as is necessary to minimize violations of *Lapse, and will favor options in which the feet are relatively toward the right.

(10) Ternary rhythm with *Lapse

    σσσσσσ                  *Lapse    AllFt-R    Parse
    a.   (ˈσσ)(σˈσ)(σˈσ)               ****!**
    b.   (σˈσ)σ(σˈσ)σ                  ****!*     **
  ☞ c.   σ(σˈσ)σ(σˈσ)                  ***        **
    d.   σσ(σˈσ)(σˈσ)        *!        **         **
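To make the evaluation concrete, the following sketch mechanizes tableau (10): it counts one *Lapse violation per window of three consecutive unstressed syllables, one AllFt-R violation per syllable standing between a foot’s right edge and the right edge of the word, and one Parse-σ violation per unfooted syllable, and then compares candidates by strict domination, i.e. lexicographic comparison of violation vectors in ranking order. The constraint construals follow the prose above; the string encoding and function names are my own assumptions for illustration.

RANKING = ("Lapse", "AllFtR", "Parse")

def evaluate(cand):
    """Violation profile for a candidate given as a bracketed string,
    e.g. "σ(σˈσ)σ(σˈσ)", where ˈ marks the head of a foot."""
    stressed, parsed, foot_ends = [], [], []
    in_foot, head, i = False, False, 0    # i counts syllables left to right
    for ch in cand:
        if ch == "(":
            in_foot = True
        elif ch == ")":
            in_foot = False
            foot_ends.append(i)           # right edge falls after i syllables
        elif ch == "ˈ":
            head = True
        elif ch == "σ":
            stressed.append(head)
            parsed.append(in_foot)
            head = False
            i += 1
    n = len(stressed)
    return {"Lapse": sum(1 for j in range(n - 2) if not
                         (stressed[j] or stressed[j + 1] or stressed[j + 2])),
            "AllFtR": sum(n - e for e in foot_ends),
            "Parse": parsed.count(False)}

def optimal(candidates):
    """Strict domination: the winner has the lexicographically
    smallest violation vector under the ranking in RANKING."""
    return min(candidates,
               key=lambda c: tuple(evaluate(c)[k] for k in RANKING))

cands = ["(ˈσσ)(σˈσ)(σˈσ)", "(σˈσ)σ(σˈσ)σ",
         "σ(σˈσ)σ(σˈσ)", "σσ(σˈσ)(σˈσ)"]
assert optimal(cands) == "σ(σˈσ)σ(σˈσ)"   # candidate (c) in (10)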

The OT analysis of ternary stress requires consideration of many more details and much more discussion, which is to be found in the cited works. For present purposes, it is sufficient to note that an analysis akin to Hayes’s weak local parsing strategy is achieved through the interaction of *Lapse and AllFt-R. Although the methodology of pursuing an analysis built simply on the reranking of independently motivated constraints is commendable, that goal has not yet been achieved. For example, careful study reveals that multiple versions of *Lapse will be necessary, one of which is specifically designed for the ternary cases; this is made laudably explicit in Houghton (2006). While the use of ternary-specific tools is not an a priori flaw of these analyses, it nonetheless keeps them from clearing the high bar set in pursuit of an analysis by pure reranking of constraints that are not ternary-specific. In addition to facilitating an illustration of the strategy that has been most thoroughly pursued in providing an analysis of ternary rhythm within OT, the patterns under consideration here raise another important methodological point. Future work in OT that considers the relative merits of the two main types of analyses illustrated above – ternary feet or underparsing – must consider related issues about the division of labor among the modules of the theory. Consider, for example, the possibility that a particular analysis intends to optimize a parse that does not use flat ternary feet, or amphibrachs. How will such feet and their optimization be avoided? If it is possible that prosodic structure is present in inputs – a possibility required by the richness of the base methodology – then amphibrachs are possibly present in inputs. One way in which these can be prevented from being selected as optimal is with a constraint that rules them out. Introducing an anti-amphibrach constraint, however, implicitly raises the possibility that it could be ranked relatively low, which in turn could open the door to the optimization of such structures. If one’s position is that amphibrachs are never optimal, then it is unfortunate to achieve this universal exclusion with a constraint. The alternative is to provide structure to Gen, such that the output of Gen cannot include amphibrachs. The tension between these possibilities is the focus of Rice (2007), and it is one of the general theoretical issues raised in OT by the study of ternary rhythm.

7 Implications and directions for future research

As noted earlier, one strategy for modeling an edgemost ternary domain – extrametricality – enhances the parsing strategies available in the theory, while the other strategy – a ternary foot – enhances the inventory of feet available in the theory. On what basis can a theoretician select the model to be pursued? Are these two options really different from one another in some meaningful sense? Does one approach allow for the description of some kind of situation that the other one does not? If the approaches do not make different predictions, are there other strategies available for selecting among them? One possibility would be to appeal to general principles of theory construction or to findings in other realms of cognitive science. Such principles or findings may have implications for selecting among competing theories of phonology. To see one example of argumentation for selecting among competing theories, we can turn to Hayes’s (1980) argumentation for extrametricality over ternary feet. This argument is based primarily on identifying differences in the types of systems the competing theories predict. In the cited work, Hayes compares his foot inventory at that point with a proposal made by Morris Halle, attributed to “class notes” (Hayes 1980: 114ff.). Halle had introduced ternary feet into his version of the theory, while Hayes had developed the approach using extrametricality. Hayes begins his argument against Halle’s inventory by stating the following: “I know of no languages whose stress patterns could simply be described using feet of the [ternary] form” (Hayes 1980: 115). This quotation suggests that a gap in the typology – namely the absence of languages with iterative ternary rhythm – can be used as an argument against a theory that could in fact model that. At the conclusion of the section, we are again encouraged to adopt an inventory with binary feet and peripheral extrametricality, in part because it provides “an explanation for why feet which have [ternary] surface forms . . . are never assigned iteratively” (Hayes 1980: 122). This example from the early literature on ternary rhythm illustrates an argument based on overgeneration. The theories are evaluated, and the one that generates a pattern not known to exist is dispreferred on those grounds. Gaps in the typology become a criterion for theory selection. However, as we now know from the sections above, this gap was soon revealed to be accidental. This is not the only example in the generative literature of argumentation based on gaps in the empirical record. We might use this occasion to ask whether such experiences are relevant as we hone our methodology for identifying the properties of UG. The following paragraphs present some of the broader implications that may be explored on the basis of our discussion of ternary rhythm. The typological enterprise as widely practiced in generative phonology aspires to model grammatical variation through simple formal manipulations of various components of the theory: reranking of constraints, different rules or rule orders, or the setting of parameters to different values. Generative linguists often see themselves as doing work grounded in a reliable typology of the structures found in natural languages. Our goal – cf. Odden (forthcoming) – is to identify the limits of the human linguistic capacity and to model that knowledge. What is a possible grammar, and what is not? What

cognitive structures must be posited to restrict the outcome of the language acquisition process to possible grammars, thereby rendering unattainable the impossible ones? And regarding the case at hand, what does the fact of ternary rhythm force us to posit in our theory? These questions are sometimes studied from “above.” Researchers could take as their starting point a theory of cognitive capacity and build a model of linguistic knowledge within the context of that theory. Demonstrated incompatibility of a conceivable linguistic structure with a known fact about our cognitive system would be an argument for genuine universal ungrammaticality: cf. Odden (2008) and Reiss (2008). Alternatively, one could develop a theory from “below.” In this case, one would approach the matter through a deep study of one language or one family of languages, or through a carefully selected set of unrelated languages. Regardless of the starting point one adopts, any predictive model of linguistic knowledge will be held accountable to facts about natural languages. We work to enhance the empirical foundation for developing theories of grammars by studying individual languages. The very act of documenting, describing, and analyzing the multifarious properties and subsystems that are found in natural language entails engagement in a typological enterprise; cf. Newmeyer (2005) and Hyman (2009). A model of linguistic knowledge that allows the construction of a particular grammar gains credibility vis-à-vis its competitors when a specific language is identified that requires precisely that grammar. We might call this the positive typological enterprise: certain structures are attested in the set of well-studied natural languages, and any theory of grammar must sanction the generation of such structures. But there is also a negative typological enterprise. Work on this side of the program aspires to model the absence of unattested structures. If we imagine competing models of linguistic knowledge, all of which satisfy the positive side of the enterprise insofar as all of them generate those structures that are known to exist, then we need some criterion for choosing among them. Proposed models of linguistic knowledge are therefore routinely criticized from the negative side, i.e. for allowing the construction of a grammar that is not known to be instantiated by any natural language; this is the state of overgeneration, of which we saw an example above. A model that overgenerates is in principle inferior to one that does not. A model that completely fails to overgenerate matches the systems that it cannot generate with those that are unattested. While this seems at first glance to be an important goal, the danger we must guard against when attempting to eliminate aspects of a theory that overgenerates is the equation of unattested with impossible. How can we know whether a structure that is absent from the empirical database is merely unattested – as was the case with iterative ternary rhythm – or genuinely beyond the grasp of UG? Finding an answer to this question seems insurmountably forbidding when we realize that distinguishing the merely unattested from the cognitively impossible would be no easier if all languages were thoroughly documented, studied, and analyzed. Those activities, of course, may fill gaps in our knowledge, and they will certainly generate a richer base for our research enterprise, thereby contributing to a deeper understanding of our cognitive capacity.
But even if all languages were deeply understood, any linguist would still be able to posit conceivable albeit unattested structures, and all theories that overgenerate would make this

challenge easy. Such conceivable but unattested structures can be assumed to be universally ungrammatical only if we assume that all possible structures in fact do appear somewhere in the set of human languages. This assumption, alas, is no more plausible than an assumption that all possible structures for eyes, for example, are attested somewhere in the animal kingdom. This realization poses a significant challenge to the bottom-up approach to linguistic theory construction. Building theories on the basis of what is attested and unattested tends to confuse “does not exist” with “cannot exist”. The work of linguists is not to explain linguistic structures that do exist, but rather to explain linguistic structures that can exist. While the starting point may be the empirical record, that cannot be the ending point. The empirical record does not and cannot show us the limits of human linguistic capacity. The empirical record cannot reveal what is necessarily beyond the realm of our grammatical competence; cf. Isac and Reiss (2008).

8 Conclusions

Although it is not the purpose of this chapter to go deeply into methodological issues, it is essential nonetheless to highlight the importance of doing so. The study of ternary rhythm and the history of analyses of this phenomenon in the generative literature raise issues of the kind presented here. The remaining challenge is to take these discussions further, not only in this context, but whenever we enter into discussions arguing for the selection of one theoretical model over another. There is more work to be done in metrical theory and on ternary rhythm. This must address several issues, both in the context of a specific analysis and with respect to the theory with which that analysis is built. Naturally, any analysis has to satisfy the positive typological enterprise by allowing the generation of those patterns that are attested. An analysis must take a position on the kind of constituents that are constructed. Are they binary or ternary? If there are ternary feet, i.e. feet with three terminals, is there any internal structure to those constituents, or are they flat ternary structures? If there are not ternary feet, how are binary feet constructed with iterative non-exhaustivity, i.e. periodic non-parsing as with the earlier theory of weak local parsing? What is the fate of those syllables that are left unparsed? How are they incorporated into the prosodic structure of the word, and what implications does this have for a theory of layering in prosodic structure (Selkirk 1984)? Beyond the basic matter of developing an analysis, future work on this topic should also address the role of typological evidence in the theory being offered, specifically the extent to which patterns absent from the empirical record are considered to be not merely unattested but unattestable. In this way, the typological enterprise will remain an important topic of discussion, and the languages showing ternary rhythm will play their significant role in future refinements of metrical theory and the methodologies of research in generative linguistics.1

1 Works relevant to the study of ternary rhythm that have not been mentioned in this article, but which students of this topic should consult, include the following: Crowhurst (1992); Idsardi (1992); Kager (1993); Green and Kenstowicz (1995); Rowicka (1996); van de Vijver (1998); Elenbaas (1999); Hyde (2001); Gordon (2002); McCartney (2003); Karttunen (2006); Buckley (2009).


ACKNOWLEDGMENTS

For feedback on an early draft of this chapter, I thank Sylvia Blaho, Peter Jurgec, Bruce Morén-Duolljá, Dave Odden, Charles Reiss, Bridget Samuels, and two anonymous reviewers. I also thank Sylvia Blaho for extensive support during the editing process. I am indebted to phonologists too many to name, with whom I have had fruitful discussions about metrical theory and ternarity over the years. Some of the perspectives advanced here owe an intellectual debt to Hale and Reiss (2008), which is hereby gratefully acknowledged.

REFERENCES Archangeli, Diana & D. Terence Langendoen. 1997. Optimality Theory: An overview. Cambridge, MA & Oxford: Blackwell. Blevins, Juliette & Sheldon P. Harrison. 1999. Trimoraic feet in Gilbertese. Oceanic Linguistics 38. 203–230. Buckley, Eugene. 2009. Locality in metrical typology. Phonology 26. 389–435. Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row. Crowhurst, Megan J. 1992. Minimality and foot structure in metrical phonology and prosodic morphology. Ph.D. dissertation, University of Arizona. Das, Shyamal. 2001. Some aspects of the phonology of Tripura Bangla and Tripura Bangla English. Ph.D. dissertation, Central Institute of English and Foreign Languages, Hyderabad (ROA-493). Dresher, B. Elan & Aditi Lahiri. 1991. The Germanic foot: Metrical coherence in Old English. Linguistic Inquiry 22. 251–286. Elenbaas, Nine. 1999. A unified account of binary and ternary stress: Considerations from Sentani and Finnish. Ph.D. dissertation, Utrecht University. Elenbaas, Nine & René Kager. 1999. Ternary rhythm and the lapse constraint. Phonology 16. 273–329. Everett, Daniel L. 1988. On metrical constituent structure in Pirahã phonology. Natural Language and Linguistic Theory 6. 207–246. Gordon, Matthew. 2002. A factorial typology of quantity-insensitive stress. Natural Language and Linguistic Theory 20. 491–552. Green, Thomas. 1995. The stress window in Pirahã: A reanalysis of rhythm in Optimality Theory. Unpublished ms., MIT (ROA-45). Green, Thomas & Michael Kenstowicz. 1995. The lapse constraint. Proceedings of the 6th Annual Meeting of the Formal Linguistics Society of the Midwest. 1–14 (ROA-101). Hale, Mark & Charles Reiss. 2008. The phonological enterprise. Oxford: Oxford University Press. Halle, Morris. 1990. Respecting metrical structure. Natural Language and Linguistic Theory 8. 149–176. Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA: MIT Press. Hammond, Michael. 1990. Deriving ternarity. Unpublished ms., University of Arizona, Tucson. Hammond, Michael. 1995. Metrical phonology. Annual Review of Anthropology 24. 313– 342. Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT. Published 1985, New York: Garland. Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.


Hewitt, Mark. 1992. Vertical maximization and metrical theory. Ph.D. dissertation, Brandeis University. Houghton, Paula. 2006. Ternary stress. Unpublished ms., University of Massachusetts, Amherst (ROA-836). Hulst, Harry van der. 1984. Syllable structure and stress in Dutch. Dordrecht: Foris. Hulst, Harry van der. 1999. Word accent. In Harry van der Hulst (ed.) Word prosodic systems in the languages of Europe, 3–115. Berlin & New York: Mouton de Gruyter. Hyde, Brett. 2001. Metrical and prosodic structure in Optimality Theory. Ph.D. dissertation, Rutgers University (ROA 476). Hyde, Brett. 2002. A restrictive theory of metrical stress. Phonology 19. 313–359. Hyman, Larry M. 2009. How (not) to do phonological typology: The case of pitch-accent. Language Sciences 31. 213–238. Idsardi, William J. 1992. The computation of prosody. Ph.D. dissertation, MIT. Isac, Daniela & Charles Reiss. 2008. I-language: An introduction to linguistics as cognitive science. Oxford: Oxford University Press. Ishii, Toru. 1996. An optimality theoretic approach to ternary stress systems. UCI Working Papers in Linguistics 2. 95–111. Kager, René. 1993. Alternatives to the iambic-trochaic law. Natural Language and Linguistic Theory 11. 381–432. Kager, René. 1994. Ternary rhythm in alignment theory. Unpublished ms., Utrecht University (ROA-35). Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press. Karttunen, Lauri. 2006. The insufficiency of paper-and-pencil linguistics: The case of Finnish prosody. Unpublished ms., Stanford University (ROA-818). Kenstowicz, Michael. 1993. Evidence for metrical constituency. In Kenneth Hale & Samuel J. Keyser (eds.) The view from Building 20: Essays in linguistics in honor of Sylvain Bromberger, 257–273. Cambridge, MA: MIT Press. Key, Harold H. 1961. Phonotactics of Cayuvava. International Journal of American Linguistics 27. 143–150. Key, Harold H. 1967. Morphology of Cayuvava. The Hague: Mouton. Krauss, Michael (ed.) 1985. Yupik Eskimo prosodic systems: Descriptive and comparative studies. Fairbanks: Alaska Native Language Center. Leer, Jeff. 1985a. Prosody in Alutiiq (the Koniag and Chugach dialects of Alaskan Yupik). In Krauss (1985), 77–133. Leer, Jeff. 1985b. Toward a metrical interpretation of Yupik prosody. In Krauss (1985), 159–172. Levin, Juliette. 1985. Evidence for ternary feet and implications for a metrical theory of stress rules. Unpublished ms., University of Texas, Austin. Levin, Juliette. 1988. Generating ternary feet. Texas Linguistic Forum 29. 97–113. Liberman, Mark. 1975. The intonational system of English. Ph.D. dissertation, MIT. Liberman, Mark & Alan Prince. 1977. On stress and linguistic rhythm. Linguistic Inquiry 8. 249–336. McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT. McCarthy, John J. & Alan Prince. 1993. Prosodic morphology I: Constraint interaction and satisfaction. Unpublished ms., University of Massachusetts, Amherst & Rutgers University. McCartney, Steven J. 2003. Ternarity through binarity. Ph.D. dissertation, University of Texas, Austin. Nespor, Marina & Irene Vogel. 2008. Prosodic phonology. Berlin & New York: Mouton de Gruyter. 1st edn. 1986, Dordrecht: Foris. Newmeyer, Frederick J. 2005. Possible and probable languages: A generative perspective on linguistic typology. Oxford: Oxford University Press. Odden, David. 2008. Ordering. In Vaux & Nevins (2008), 61–120.


Odden, David. Forthcoming. Rules v. constraints. In John A. Goldsmith, Jason Riggle & Alan C. L. Yu (eds.) The handbook of phonological theory. 2nd edn. Malden, MA & Oxford: Wiley-Blackwell. Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19–100. Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell. Reiss, Charles. 2008. Constraining the learning path without constraints, or the OCP and NoBanana. In Vaux & Nevins (2008), 252–301. Rice, Curt. 1988. Stress assignment in the Chugach dialect of Alutiiq. Papers from the Annual Regional Meeting, Chicago Linguistic Society 24. 304–315. Rice, Curt. 1990. Pacific Yupik: Implications for metrical theory. Coyote Papers 3. Tucson: Department of Linguistics, University of Arizona. Rice, Curt. 1992. Binarity and ternarity in metrical theory: Parametric extensions. Ph.D. dissertation, University of Texas, Austin. Rice, Curt. 1993. A note on ternary stress in Sentani. University of Trondheim Working Papers in Linguistics 17. 67–71. Rice, Curt. 2007. The roles of Gen and Con in modeling ternary rhythm. In Sylvia Blaho, Patrik Bye & Martin Krämer (eds.) Freedom of analysis?, 233–255. Berlin & New York: Mouton de Gruyter. Rifkin, Jay I. 2003. Ternarity is prosodic word binarity. In Jeroen van de Weijer, Vincent J. van Heuven & Harry van der Hulst (eds.) The phonological spectrum, vol. 2: Suprasegmental structure, 127–150. Amsterdam & Philadelphia: John Benjamins. Roca, Iggy (ed.) 1997. Derivations and constraints in phonology. Oxford: Clarendon Press. Rowicka, Grażyna. 1996. 2+2=3: Stress in Munster Irish. In Henryk Kardela & Bogdan Szymanek (eds.) A Festschrift for Edmund Gussmann, 217–238. Lublin: University Press of the Catholic University of Lublin. Selkirk, Elisabeth. 1984. Phonology and syntax: The relation between sound and structure. Cambridge, MA: MIT Press. Vaux, Bert & Andrew Nevins (eds.) 2008. Rules, constraints, and phonological phenomena. Oxford: Oxford University Press. Vijver, Ruben van de. 1998. The iambic issue: Iambs as a result of constraint interaction. Ph.D. dissertation, University of Leiden.

53 Syllable Contact

Misun Seo

1 Introduction

Syllable contact is a notion introduced to describe the sonority relation between adjacent segments across a syllable boundary, that is, between heterosyllabic coda and onset segments. According to Hooper (1976), Murray and Vennemann (1983), and Vennemann (1988), among others, there is a cross-linguistic preference to avoid rising sonority across a syllable boundary; this tendency is formulated as the Syllable Contact Law (SCL). This law states, for example, that al.ta, with falling sonority, is preferred to at.la, with rising sonority. This law has been adduced to account for both diachronic and synchronic sound alternations in coda–onset clusters. In this chapter, I will review the notion of syllable contact by referring to sound alternations involving coda–onset clusters observed in different languages. The chapter is organized as follows. In §2, I offer an overview of sonority and sonority scales on which the SCL is crucially based. In addition, I review previous proposals on syllable contact and different types of diachronic and synchronic phonological changes analyzed as repair strategies for bad syllable contact. In §3, I present the issues and debates on syllable contact such as the necessity of syllable contact, the categorical vs. gradient nature of syllable contact, language-specific variation, and problems of syllable contact (see also chapter 49: sonority).

2 Syllable contact

The notion of syllable contact is essentially based on the relative sonority of segments. Thus, in §2.1, I first review sonority and then the SCL. In §2.2, I provide examples of different types of diachronic and synchronic phonological changes which have been analyzed using the notion of syllable contact.

2.1 Sonority and the Syllable Contact Law

The SCL is based on the concept that speech sounds can be classified into different categories according to their relative sonority. There have been many proposals for the definition of sonority. For example, Ladefoged (1982) defines sonority as the loudness of a sound relative to that of other sounds with the same length, stress, and pitch, and Clements (1990) defines it in terms of a set of major class features. In Vennemann (1988), sonority is described as an inverse restatement of strength, which is based on "degree of deviation from unimpeded (voiced) air flow." In addition, Parker (2002, 2008) claims that sonority has a phonetic basis in intensity, measured as "sound level differences in decibels between a target segment and a constant reference segment in the environment." There exist many competing sonority scales, obtained on the basis of different definitions of sonority. Different sonority scales result from the controversial relative sonority of laterals and rhotics, of voiced and voiceless obstruents, of stops, fricatives, and affricates, and of high, mid, and low vowels. However, it is generally agreed among most researchers that the uncontroversial sonority hierarchy is as follows (Bell and Hooper 1978; Harris 1983; van der Hulst 1984; Clements 1987, 1990; Kenstowicz 1994; Smolensky 1995; Holt 1997; van Oostendorp 1999):

(1) Sonority hierarchy (most to least sonorous)
    vowels > glides > liquids > nasals > obstruents

For extensive discussion of sonority and sonority scales, see chapter 49: sonority. With reference to the relative sonority of segments, Hooper (1976) proposes a constraint on heterosyllabic coda–onset clusters. According to this principle, the sonority of a coda consonant must exceed that of a following onset consonant. Murray and Vennemann (1983) and Vennemann (1988) extend the principle as follows:

(2) Syllable Contact Law (Vennemann 1988: 40)
    A syllable contact A.B is the more preferred, the less the consonantal strength of the offset A and the greater the consonantal strength of the onset B.

The SCL can be rephrased as in (3), using the concept of sonority, which is the reverse of strength and more commonly used in current phonology.

(3) Syllable Contact Law (sonority version) (Davis and Shin 1999: 286)
    A syllable contact A.B is the more preferred, the greater the sonority of the offset A and the less the sonority of the onset B.

The SCL has also been invoked as a family of related OT constraints (Bat-El 1996; Davis 1998; Ham 1998; Davis and Shin 1999; Rose 2000; Baertsch 2002; Gouskova 2004; Holt 2004; Zec 2007).
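For concreteness, the comparison that the sonority version in (3) makes can be sketched as follows. This is a minimal illustration, not part of the chapter; the numeric values are arbitrary placeholders for the hierarchy in (1), and only their ordering matters.

```python
# Illustrative sketch of the SCL in its sonority version (3): a contact A.B
# improves as sonority(A) - sonority(B) grows. The numeric values are
# arbitrary; only their ordering (cf. the hierarchy in (1)) matters.

SONORITY = {"obstruent": 1, "nasal": 2, "liquid": 3, "glide": 4, "vowel": 5}

def contact_score(coda_class, onset_class):
    """Higher score = better syllable contact (greater sonority drop)."""
    return SONORITY[coda_class] - SONORITY[onset_class]

# al.ta: liquid coda + obstruent onset vs. at.la: obstruent coda + liquid onset
assert contact_score("liquid", "obstruent") > contact_score("obstruent", "liquid")
print(contact_score("liquid", "obstruent"))   # 2  (falling sonority, preferred)
print(contact_score("obstruent", "liquid"))   # -2 (rising sonority, dispreferred)
```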

2.2 Phonological change as optimization of syllable contact

2.2.1 Diachronic change

The notion of syllable contact has been employed in motivating various diachronic changes attested in coda–onset sequences (see also chapter 93: sound change). For example, Hooper (1976) claims that various phonological processes have applied to the /nr/ sequence diachronically in different Spanish dialects to remedy the sequence's violation of the SCL, as illustrated below:

(4) venirá > venrá > vendrá   (by stop insertion)    '(it) will come'
                    > verná   (by metathesis)
                    > verrá   (by assimilation)

In the example above, after the occurrence of syncope, the resulting form venrá is unacceptable, since the onset /r/ in the /nr/ sequence is more sonorous than the coda /n/, thus violating the SCL. Therefore, phonological processes such as stop insertion, metathesis, and assimilation apply in different Spanish dialects to improve the less preferred heterosyllabic /nr/ sequence. Stop insertion also applied to various heterosyllabic consonant–liquid clusters in Old Spanish and Old French, as shown below.

(5) a. Stop insertion in Old Spanish (Martínez-Gil 2003)
       Latin           Old Spanish
       fem(i)na        (f)embra       'female'
       hum(e)ru        hombro         'shoulder'
       trem(u)lare     tremblar       'to shake, shiver'
       ingen(e)rare    engendrar      'to engender, beget'
       mel(io)rare     me(l)drar      'to grow'

    b. Stop insertion in Old French (Walker 1978; Morin 1980; Wetzels 1985; Picard 1987; Martínez-Gil 2003)
       Latin                            Old French
       cam(e)ra     (> chamre)    >     chambre      'room'
       sim(u)lare   (> semler)    >     sembler      'to resemble'
       ten(e)ru     (> tenre)     >     tendre       'tender'
       mol(e)re     (> molre)     >     moldre       'to grind'
       laz(a)ru     (> lazre)     >     la(z)dre     'beggar'
       ess(e)re     (> esre)      >     estre        'to be'
       spin(u)la    (> espinle)   >     espingle     'pin'

According to Martínez-Gil (2003), all the heterosyllabic consonant–liquid clusters in the examples above violate the SCL by showing more sonorous onsets than codas, and thus a stop is inserted within the clusters as a strategy to improve bad syllable contact. Holt (2004) analyzes the metathesis observed in the heterosyllabic /dn/ and /dl/ sequences of Old Spanish as a repair strategy to optimize syllable contact. In Old Spanish, the heterosyllabic /dn/ and /dl/ sequences brought about by syncope in Late Spoken Latin or by morpheme concatenation underwent many repair strategies, such as dissimilation, palatalization, stop insertion, deletion, and strengthening, e.g. antenatu (Latin) > adnado ~ andado ~ andrado ~ alnado ~ anado ~ annado (Old Spanish) 'stepchild'. The examples in (6) illustrate that metathesis is optional in Old Spanish forms, while only metathesized forms occur in Modern Spanish.

(6) a. /dn/
       Latin         Old Spanish           Modern Spanish
       cat(e)natu    cadnado ~ candado     candado     'padlock'
       antenatu      adnado ~ andado       andado      'stepchild'
       legitimu      lidmo ~ lindo         lindo       'pretty'
       retina        riedna ~ rienda       rienda      'rein'

    b. /dl/
       Latin         Old Spanish           Modern Spanish
       spatula       espadla ~ espalda     espalda     'back'
       capitulu      cabidlo ~ cabildo     cabildo     'town council'
       foliatile     hojalde ~ hojaldre    hojaldre    'puff pastry'
       titulo        tidle ~ tilde         tilde       'written accent'

According to Holt, metathesis applied to /dn/ and /dl/ in Old Spanish, since the sequences have rising sonority over a syllable boundary and thus show bad syllable contact. (See also chapter 59: metathesis for a discussion of synchronic metathesis processes in response to phonotactic requirements more generally.) In addition, West Germanic word-medial gemination after a short vowel and before *j, *r, and *l has been analyzed by Murray and Vennemann (1983) as being driven by the need to avoid an onset which is more sonorous than the coda.

(7) Gemination in West Germanic (Murray and Vennemann 1983; Braune and Eggers 1987; Ham 1998)
       East Germanic    West Germanic
    a. Go. skapjan      OS skeppian, OE scieppan
       Go. bidjan       OS biddian, OE biddan
       Go. hafjan       OHG heffan, OE hebban
       ON framja        OHG fremmen, OE fremman
       Go. halja        OHG hella, OS hellia
    b. ON bitr          OHG bittar
       Go. akrs         […]
       VL facla         […]
       ON epli          […]

[…]

(12) Vowel epenthesis in loanwords (fragment)
     …rut 'fruit', pəfaizər 'Pfizer', Bengali gelaʃ 'glass', Central Pahari silet 'slate', Sinhalese tijage (Sanskrit tjage) 'gift', Wolof kalas 'class', Uyghur kulub (Russian klub) 'club'

As shown in (12a), a vowel is inserted before onset consonants with falling or flat sonority, since the resulting coda–onset clusters do not violate the SCL. However, as illustrated in (12b), a vowel is inserted between the two consonants of an onset with rising sonority, to avoid coda–onset clusters violating the SCL. According to Rose (2000), syllable contact plays a role in determining the position of an epenthetic vowel in Chaha (see chapter 67: vowel epenthesis). For example, when [ɨ] is epenthesized in /VCCCV/ for structural reasons, either [VCCɨCV] or [VCɨCCV] can surface, with the aim of achieving good syllable contact. Thus, /t-n-k'rət'm-nə/ 'while we are cutting' surfaces as [tɨnk'ɨrət'ɨmnə], since the alternative form *[tɨnɨk'rət'ɨmnə] has the coda–onset cluster [k'.r], with bad syllable contact. On the other hand, /t-n-msəkr-nə/ 'while we are testifying' is realized as [tɨnɨmsəkɨnnə] rather than *[tɨnmɨsəkɨnnə], with a coda–onset cluster with flat sonority, which is avoided in the language. In addition, epenthesis in Picard clitics (Auger 2003) is said to be influenced by the preference for achieving good syllable contact, and Pons (2005) argues that regressive manner assimilation, rhotacism, gliding, onset strengthening, epenthesis, and deletion attested in Romance languages are strategies triggered to avoid bad syllable contact.

3 Issues and debates on syllable contact

3.1 Sonority Dispersion Principle and the Syllable Contact Law

As seen in §2.2, the notion of syllable contact has been employed in motivating different types of diachronic and synchronic sound changes. However, there have been debates regarding whether or not it is necessary to posit a separate law just for coda–onset sequences. Clements (1990) claims that the Sonority Dispersion Principle makes the SCL dispensable. According to the Sonority Dispersion Principle, sonority rise is required to be maximal from the onset to the nucleus and sonority drop is required to be minimal from the nucleus to the coda. Thus, for example, the principle says that [ta] is more preferred than [la] as the onset and [al] is more preferred than [at] as the coda. From the Sonority Dispersion Principle, it can be predicted that maximal sonority drop is preferred across syllable boundaries, as stated in the SCL. Thus, Clements proposes that the SCL follows from the more general Sonority Dispersion Principle. However, based on the data from Kazakh in (13), Davis (1998) and Gouskova (2004) argue that the SCL cannot be reduced to the Sonority Dispersion Principle.

(13) Syllable contact in Kazakh (Davis 1998)
     a. No onset desonorization
        /alma-lar/    [al.ma.lar]    'apples'
        /mandaj-lar/  [man.daj.lar]  'foreheads'
        /kijar-lar/   [ki.jar.lar]   'cucumbers'
        /kol-ma/      [kol.ma]       'hand-interrog'
        /kijar-ma/    [ki.jar.ma]    'cucumber-interrog'

     b. Onset desonorization
        /kol-lar/     [kol.dar]      'hands'
        /murin-lar/   [mu.rin.dar]   'noses'
        /koŋɯz-lar/   [ko.ŋɯz.dar]   'bugs'
        /murin-ma/    [mu.rin.ba]    'nose-interrog'
        /koŋɯz-ma/    [ko.ŋɯz.ba]    'bug-interrog'

In Kazakh, consonants of any sonority can be onsets if they are preceded by vowels or by consonants of higher sonority. Thus, no onset desonorization is attested in the examples of (13a). However, when the onset is preceded by a consonant with lower or the same sonority, it desonorizes, as shown in (13b). Therefore, to explain the sound alternations of onsets in the Kazakh examples, it is essential to refer to the sonority relation of the coda and the following onset. For this reason, both Davis and Gouskova claim that the Sonority Dispersion Principle cannot completely replace the SCL.

3.2 Nature of syllable contact: Categorical vs. gradient

Since Hooper (1976), the SCL has been extended to account for more cross-linguistic data, and the nature of syllable contact is now viewed as gradient rather than categorical (see chapter 89: gradience and categoricality in phonological theory). Hooper (1976) originally proposed the SCL for Spanish, where the sonority of a syllable-final consonant must exceed that of a following syllable-initial consonant. On this view, syllable contact is categorical in nature: different types of coda–onset clusters are equally fine as long as the sonority of the coda exceeds that of the onset. Thus, although the sonority distance between the coda and the onset is greater in al.ta than in al.na, both are equally fine clusters with respect to syllable contact, since the coda is more sonorous than the onset. Likewise, both at.la and an.la are equally non-optimal, since the coda is less sonorous than the onset. Murray and Vennemann (1983) and Vennemann (1988) propose an extended version of the SCL whose nature is gradient: two adjacent heterosyllabic segments are the more preferred, the greater the sonority of the first segment and the less the sonority of the second segment. Clements (1990) paraphrases the extended SCL as follows:

(14) The extended Syllable Contact Law (Clements 1990: 319)
     The preference for a syllabic structure A.B, where A and B are segments and a and b are the sonority values of A and B respectively, increases with the value of a minus b.


Davis and Shin's (1999) version of the SCL in (3) is also a gradient one. In the gradient view of the syllable contact phenomenon, coda–onset clusters with falling sonority are not considered to exhibit the same degree of optimality. In addition, not all violations of the SCL by clusters with rising sonority are considered equally serious. Depending on the relative sonority distance between the coda and the onset, one cluster is more optimal, or less non-optimal, than the others. For example, al.ta is more optimal than al.na, since the sonority distance between the coda and onset is greater in al.ta than in al.na. In addition, a sequence such as at.la constitutes a more severe violation of the law than a sequence such as an.la. In line with this, Clements (1990) provides the following aggregate complexity score for each of the coda–onset sequences by summing the complexity values of each of the demisyllables (nucleus–coda and onset–nucleus sequences) that constitute it.

(15) Aggregate complexity scores (C1 = coda, C2 = onset)

                 C2: obstruent   nasal   liquid   glide
     C1
     obstruent          5          6        7       8
     nasal              4          5        6       7
     liquid             3          4        5       6
     glide              2          3        4       5
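The table in (15) can be reproduced by summing one complexity value per demisyllable, as just described. The following sketch is an illustrative reconstruction, not Clements' own formulation; the particular numeric encoding is mine.

```python
# Reproducing the aggregate complexity scores in (15) by summing demisyllable
# complexities (an illustrative reconstruction of Clements' 1990 calculation).

CLASSES = ["obstruent", "nasal", "liquid", "glide"]
SON = {c: i + 1 for i, c in enumerate(CLASSES)}   # obstruent=1 ... glide=4

def aggregate(c1, c2):
    coda_complexity = 5 - SON[c1]    # a less sonorous coda is a worse coda
    onset_complexity = SON[c2]       # a more sonorous onset is a worse onset
    return coda_complexity + onset_complexity

for c1 in CLASSES:
    print([aggregate(c1, c2) for c2 in CLASSES])
# rows (C1) x columns (C2) match (15):
# obstruent: [5, 6, 7, 8] ... glide: [2, 3, 4, 5]
```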

According to Clements (1990), obstruent–glide sequences such as at.wa violate the SCL most seriously and glide–obstruent sequences such as aw.ta satisfy the law most faithfully. A more fine-grained syllable contact scale is proposed by Gouskova (2004), based on Jespersen's (1904) sonority scale.

(16) a. Sonority scale (Jespersen 1904)
        glides > rhotics > laterals > nasals > voiced fricatives > voiced stops > voiceless fricatives > voiceless stops
        (abbreviated as: w > r > l > n > z > d > s > t)

     b. Syllable contact scale (Gouskova 2004), from most to least harmonic

        stratum   distance   coda–onset pairs
        1         −7         w.t
        2         −6         w.s, r.t
        3         −5         w.d, r.s, l.t
        4         −4         w.z, r.d, l.s, n.t
        5         −3         w.n, r.z, l.d, n.s, z.t
        6         −2         w.l, r.n, l.z, n.d, z.s, d.t
        7         −1         w.r, r.l, l.n, n.z, z.d, d.s, s.t
        8          0         w.w, r.r, l.l, n.n, z.z, d.d, s.s, t.t
        9         +1         r.w, l.r, n.l, z.n, d.z, s.d, t.s
        10        +2         l.w, n.r, z.l, d.n, s.z, t.d
        11        +3         n.w, z.r, d.l, s.n, t.z
        12        +4         z.w, d.r, s.l, t.n
        13        +5         d.w, s.r, t.l
        14        +6         s.w, t.r
        15        +7         t.w

        (strata 1–7: sonority drop; stratum 8: flat; strata 9–15: sonority rise)
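The scale in (16b) is fully determined by (16a): the stratum of a contact is fixed by the sonority distance across the boundary. The following sketch, an illustrative reconstruction with Jespersen's classes encoded as in the abbreviations above, derives the fifteen strata mechanically.

```python
# Deriving Gouskova's (2004) syllable contact scale (16b) from Jespersen's
# sonority scale (16a): the stratum of a contact C1.C2 is fixed by the
# sonority distance son(C2) - son(C1). (Illustrative reconstruction.)

from collections import defaultdict

SONORITY = {"t": 1, "s": 2, "d": 3, "z": 4, "n": 5, "l": 6, "r": 7, "w": 8}

def stratum(coda, onset):
    """Stratum 1 (w.t, distance -7) is most harmonic; 15 (t.w, +7) least."""
    return SONORITY[onset] - SONORITY[coda] + 8

strata = defaultdict(list)
for c1 in SONORITY:
    for c2 in SONORITY:
        strata[stratum(c1, c2)].append(f"{c1}.{c2}")

for k in sorted(strata):          # prints the 15 strata of (16b)
    print(k, sorted(strata[k]))
```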

In Gouskova's syllable contact scale in (16b), glide + voiceless stop clusters (represented as w.t), for example, are characterized as sequences with a sonority drop of −7, since the sonority of the voiceless stop is lower than that of the glide by seven steps according to Jespersen's sonority scale in (16a). As can be seen in (16b), Gouskova proposes that the syllable contact scale has 15 strata, with glide + voiceless stop sequences (w.t) on stratum 1 the most harmonic, and voiceless stop + glide sequences (t.w) on stratum 15 the least harmonic. Clements and Gouskova employ different sonority hierarchies, and thus propose different gradient versions of the syllable contact scale. Gouskova's more fine-grained syllable contact scale can account for the different patterning of /ll/ and /rl/ in Kazakh, while this is not possible within Clements' syllable contact scale. According to Davis (1998), coda–onset sequences with flat or rising sonority undergo desonorization in Kazakh in order to improve syllable contact, as in /murin-lar/ → [murindar] 'noses'. On the other hand, no desonorization applies to coda–onset sequences with falling sonority, as can be seen from /mandaj-ga/ → [mandajga] 'forehead + direct'. In the case of /ll/, with flat sonority, desonorization applies, as in /kol-lar/ → [kol.dar] 'hands', while /rl/ surfaces as [rl], as in /kijar-lar/ → [kijarlar] 'cucumbers'. The different phonological patterning of /ll/ and /rl/ in Kazakh suggests that /r/ is more sonorous than /l/, and that /rl/ is a sequence with falling sonority, as in Gouskova's syllable contact scale. Note that Clements' syllable contact scale cannot account for the different patterning of /ll/ and /rl/ in Kazakh, since /r/ is assumed there to be as sonorous as /l/.

The gradient nature of syllable contact makes it possible to explain language-specific patterns of the syllable contact phenomenon. According to Gouskova (2004), languages differ in the level of complexity they tolerate, and thus language-specific patterns of syllable contact are attested. For example, Kirghiz and Kazakh both require sonority to drop from the coda to the onset, but the two languages differ in their thresholds of acceptable sonority drop. In Kazakh, sonority is merely required to drop from the coda to the onset and need not drop maximally. Thus, as can be seen in (13b), onset desonorization applies to coda–onset sequences with flat or rising sonority, but not to coda–onset sequences with falling sonority, as in (13a). On the other hand, as can be seen from the examples in (17), sonority drop alone is not sufficient in Kirghiz, which requires maximal sonority drop.

(17) Syllable contact in Kirghiz (Hebert and Poppe 1964; Kasymova et al. 1991; Gouskova 2004)
     (underlying form, input distance, surface form, output distance)
     /konok-lar/   +5    konok.tar   0     'guest (pl)'
     /taʃ-lar/     +4    taʃ.tar     −1    'stone (pl)'
     /konok-nu/    +4    konok.tu    0     'guest (obj)'
     /taʃ-nu/      +3    taʃ.tɯ      −1    'stone (obj)'
     /atan-lar/    +1    atan.dar    −2    'gelded camel (pl)'
     /rol-lar/     0     rol.dar     −3    'role (pl)'
     /atan-nu/     0     atan.dɯ     −2    'gelded camel (obj)'
     /kar-lar/     −1    kar.dar     −4    'snow (pl)'
     /rol-nu/      −1    rol.du      −3    'role (obj)'
     /aj-lar/      −2    aj.dar      −5    'moon (pl)'
     /aj-nu/       −3    aj.dɯ       −5    'moon (obj)'
     /kar-du/      −4    kar.dɯ      −4    'snow (obj)'
     /too-lar/     —     too.lar     —     'mountain (pl)'
     /too-nu/      —     too.nu      —     'mountain (obj)'


As can be seen from the examples above, desonorization in Kirghiz applies to suffix-initial sonorants after any consonant, in order for sonority to drop maximally from the coda to the onset. Note that the Kazakh and Kirghiz data above cannot be explained properly within the categorical version of the SCL, which predicts that all coda–onset clusters with falling sonority are equally acceptable.

3.3 Syllable contact as a language-specific constraint on minimal sonority distance

With respect to syllable contact phenomena, language-specific variation can be observed. First, some languages respecting the SCL allow coda–onset clusters with flat sonority, while others disallow them. For example, as shown in (8), Korean prohibits coda–onset clusters with rising sonority. However, a coda can be as sonorous as a following onset, as can be seen from /sallim/ → [sallim] 'housekeeping', /simmun/ → [simmun] 'interrogation', etc. On the other hand, as shown in (13), coda–onset clusters with flat as well as rising sonority are not permitted in Kazakh. Thus, /kol-lar/ is realized as [kol.dar] 'hands', to improve syllable contact. Such variation in the treatment of coda–onset sequences with equal sonority has been accounted for by positing two versions of the SCL.

(18) a. Syllable Contact Law (strict) (Rose 2000: 401)
        The first segment of the onset of a syllable must be lower in sonority than the last segment in the immediately preceding syllable.

     b. Syllable Contact Law (loose) (Bat-El 1996; Davis and Shin 1999)
        The first segment of the onset of a syllable must not be of greater sonority than the last segment of the immediately preceding syllable.

Coda–onset sequences of equal sonority are not allowed under the strict SCL in (18a), while they are permitted under the loose version in (18b). Syllable contact variation that cannot be explained by these two versions of the SCL is also found. The SCL has generally been employed to constrain coda–onset clusters with rising sonority. However, languages which allow only falling sonority from a coda to a following onset can vary with respect to minimal sonority distance. According to Gouskova (2004), both Kirghiz and Sidamo allow coda–onset sequences with falling sonority, forbidding sequences with flat or rising sonority. However, the thresholds of acceptable sonority drop are different in the two languages. As can be seen from the examples in (17), the minimal sonority drop required in Kirghiz is −4, and coda–onset sequences with a sonority drop of less than −4 undergo desonorization to remedy bad syllable contact. Thus, desonorization applies to /aj-nu/ 'moon (obj)', whose sonority drop of −3 is insufficient; it is realized as [aj.dɯ]. On the other hand, /kar-du/ 'snow (obj)', with a sonority drop of −4, is not targeted by any phonological process in order to remedy bad syllable contact, and surfaces as [kar.dɯ]. In Sidamo, unlike Kirghiz, the acceptable minimal sonority drop is −2.

(19) Syllable contact in Sidamo (Moreno 1940; Gouskova 2004)
     a. Sonority rises: Metathesis
        /duk-nanni/    +4    duŋ.kanni     −4    'they carry'
        /hutʃ-nanni/   +4    hun.tʃanni    −4    'they pray/beg/request'
        /has-nemmo/    +3    han.seemo     −3    'we look for'
        /hab-nemmo/    +2    ham.bemmo     −2    'we forget'

     b. Sonority drops less than −2 or is flat: Gemination
        /af-tinonni/   −1    affinonni     —     'you (pl) have seen'
        /lelliʃ-toti/  −1    lelliʃʃoti    —     'don't show!'
        /ful-nemmo/    −1    fullemmo      —     'we go out'
        /um-nommo/     0     ummommo       —     'we have dug'

     c. Sonority drops more than −2: Place assimilation only
        /maʔ-toti/     −5    maʔ.toti      −5    'don't go'
        /ful-te/       −5    ful.te        −5    'your having gone out'
        /qaram-tino/   −4    qaran.tino    −4    'she worried'

In Sidamo, coda–onset clusters with rising sonority are repaired by metathesis, as in (19a), and clusters with flat sonority or a sonority drop of less than −2 are modified by gemination, as in (19b). On the other hand, when coda–onset sequences have a sonority drop of −2 or more, no modification applies to improve syllable contact, although the sequences undergo place assimilation, as in (19c). In addition, different languages can permit different degrees of minimal sonority rise, which cannot be explained within the SCL in (18). For example, Faroese and Icelandic set different acceptable sonority distances between a coda and a following onset. According to Gouskova (2004), Faroese permits a sonority rise of +4 or below, as illustrated in (20).

(20) Syllable contact in Faroese (Gouskova 2004)
     a. Sonority rise of 5 points or more: Complex onsets
        a(.hkvamar>n    +7    'beryl'
        vea(.hkr>r      +6    'beautiful (masc pl)'
        ai(.htrant>     +6    'poisonous'
        -ea(.hpr>r      +6    'sad'
        mi(.hkl>r       +5    'great (masc pl)'
        e(.hpl>         +5    'potato'

     b. Sonority rise of fewer than 5 points: Heterosyllabic coda–onset clusters
        s>—.r>     +4    'further south'
        h+at.na    +4    'to improve'
        >-.la      +3    'or'
        ves.na     +3    'to worsen'
        Áar.na     +2    'gladly'
        rDhk.t>    0     'smoked (sg)'
        ves.tÁr    −1    'west'
        hen.-Ár    −2    'hands'
        Áœr.->     −4    'did (sg)'
        no-.->     —     'approached (sg)'


In Faroese, where initial syllables are always stressed and heavy, syllable contact plays a crucial role in syllabification. As can be seen in (20a), when two intervocalic consonants show a sonority rise of +5 or more, the two consonants are syllabified into a complex onset and the preceding vowel is long. On the other hand, when two intervocalic consonants exhibit a sonority rise of +4 or less, the two consonants are syllabified as heterosyllabic and the preceding vowel is short, as in (20b). Thus, the examples in (20) show that the maximal sonority rise from a coda to a following onset permitted in Faroese is +4. In Icelandic, unlike Faroese, the acceptable sonority distance between a coda and a following onset is +5 or less.

(21) Syllable contact in Icelandic (Gouskova 2004)
     a. Sonority rise of 6 points or more: Complex onsets
        v>:.t(h)ja     +7    'to visit'
        vœ:.k(h)va     +7    'to water'
        a:.k(h)rar     +6    'fields'
        th>:.t(h)ra    +6    'to vibrate'
        skD:.p(h)ra    +6    'to roll'
        tv>:.svar      +6    'twice'
        e:.sja         +6    'the mountain Esja'

     b. Sonority rise of fewer than 6 points: Heterosyllabic coda–onset clusters
        ehp.l>      +5    'apple'
        hek.la      +5    'lack'
        h†Œt.la     +5    'to intend'
        +>Ï.ra      +4    'to ask'
        stœÏ.va     +4    'to stop'
        h†Œ:.r>     +3    'right'
        +laÏ.ra     +3    'balloon'
        s>—.la      +3    'to sail'
        v>s.na      +3    'to wither'
        them.ja     +3    'to domesticate'
        vel.ja      +2    'to choose'
        ver.ja      +1    'to defend'
        thev.ja     0     'to delay'
        hes.tõr     −1    'horse'
        ev.r>       −1    'upper'
        av.la:a     −2    'to bend out of shape'
        -ver.—õr    −4    'dwarf'

As can be seen from the examples in (21a), two consonants with a sonority rise of +6 or more are syllabified as a complex onset and the preceding vowel is lengthened to make the first stressed syllable heavy. On the other hand, when two consonants show a sonority rise of +5 or less, as in (21b), they are syllabified as heterosyllabic, and the preceding vowel is realized as a short vowel since the first stressed syllable is heavy, due to the coda. According to Gouskova (2004), the language-specific patterns of syllable contact discussed above suggest that languages differ in the thresholds of acceptable sonority distance between a coda and a following onset and that the SCL is not a single constraint forbidding rising sonority. To account for such language variations within Optimality Theory, Gouskova reformulates the SCL as a hierarchy of constraints derived from sonority scales, which target negative, flat, or positive sonority distance across a syllable boundary.
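The language-specific thresholds surveyed in this section can be summarized in a small sketch. This is my own illustrative encoding, not Gouskova's formalization; it uses Jespersen's scale from (16a), with the glide [j] encoded as the glide class w, and the threshold values follow the discussion above.

```python
# Sketch of the language-specific thresholds discussed above: each language
# tolerates a coda-onset contact only up to some maximal sonority distance.
# (Jespersen's scale from (16a); values follow the text, encoding is mine.)

JESPERSEN = {"t": 1, "s": 2, "d": 3, "z": 4, "n": 5, "l": 6, "r": 7, "w": 8}

MAX_DISTANCE = {
    "Kazakh": -1,     # sonority must merely drop (flat/rising contacts repaired)
    "Kirghiz": -4,    # sonority must drop maximally
    "Sidamo": -2,     # drops smaller than -2 (and flat contacts) are repaired
    "Faroese": 4,     # rises up to +4 still syllabified as coda-onset
    "Icelandic": 5,   # rises up to +5 still syllabified as coda-onset
}

def good_contact(language, coda, onset):
    """True if C1.C2 needs no repair or resyllabification in this language."""
    return JESPERSEN[onset] - JESPERSEN[coda] <= MAX_DISTANCE[language]

print(good_contact("Kazakh", "r", "l"))    # True:  /kijar-lar/ -> [ki.jar.lar]
print(good_contact("Kazakh", "l", "l"))    # False: /kol-lar/   -> [kol.dar]
print(good_contact("Kirghiz", "w", "n"))   # False: /aj-nu/ (drop -3) -> [aj.dɯ]
print(good_contact("Kirghiz", "r", "d"))   # True:  /kar-du/ (drop -4)
```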

3.4 Problems with the Syllable Contact Law

Seo (2003) points out that the SCL faces two problems. First, the SCL cannot explain why a consonant + liquid cluster is targeted by a phonological process when a syllable boundary cannot be assumed between the two segments in the cluster, for example, in word-initial position. Recall that the SCL crucially refers to a syllable boundary in explaining phonological change in a consonant + liquid cluster, by assuming that the change is motivated to avoid cases where the onset B has higher sonority than the coda A in a heterosyllabic A.B sequence. As illustrated below, in Leti an /nl/ sequence is realized as [ll] whether it occurs word-initially or intervocalically. Note that vowel deletion occurs in the examples given.

(22) Leti (van Engelenhoven 1995; Hume et al. 1997)
     /na+losir/    →    [llosir]     '3sg-to follow'
     /aːna+leti/   →    [aːlleti]    'Alety (clan)' (child + Leti)

In line with the view that the change in a consonant + liquid cluster is motivated by the SCL, the word-initial geminate [ll] in [llosir] might be considered to be bisyllabic, as represented below (see also chapter 37: geminates):

(23)    σ   σ
        |   |
        μ   μ
        |   |
        l   o

However, as discussed in Hume et al. (1997), there is a problem with the representation in (23). Leti has a minimal word requirement that lexical words must be minimally bimoraic. With the representation of the geminate in (23), the minimal word requirement is satisfied in words consisting of an initial geminate and a vowel, for example [p.pe]. Thus, it is expected that there will exist words with such a structure in Leti. However, none are attested. On the other hand, if word-initial geminates are assumed to be part of the onset of a single syllable, words containing initial geminates such as [ppu.na] 'nest' are bisyllabic, conforming to the dominant tendency for lexical words to be made up of two syllables in Leti. Thus, Hume et al. (1997) propose that geminates in Leti have a phonological structure with a root node multiply linked to two timing slots, as in (24).

(24) Geminates and long vowels in Leti (Hume et al. 1997)

              σ                 σ           σ
              |                 |           |
              μ                 μ           μ
              |                 |           |
      x   x   x             x   x   x   x
       \ /    |             |    \ /    |
        l     o             a     l     e
         [llo]                 [alle]

In view of the minimal word requirement, it cannot be assumed that an initial geminate is bisyllabic in Leti. Therefore, the modification of the word-initial tautosyllabic /nl/ sequence in Leti cannot be motivated by the SCL, since a syllable boundary cannot be referred to. On the other hand, phonological change of the intervocalic /nl/ sequence in Leti could be accounted for by relying on the avoidance of rising sonority over a syllable boundary. Thus, even though both heterosyllabic and tautosyllabic /nl/ sequences show the same pattern in Leti, the syllable contact account cannot provide a unified account of the modification. The SCL is also argued to be problematic in Seo (2003), since it cannot provide a unified account of the same types of phonological changes found in nasal + liquid and liquid + nasal sequences in Korean.

(25) Modifications of nasal/liquid sequences in Korean (Davis and Shin 1999)
     /nl/      →  [ll]    /non-li/     →  [nolli]     'logic'
     /ln/      →  [ll]    /səl+nal/    →  [səllal]    'New Year's Day'
     cf. /lm/  →  [lm]    /pul+mjən/   →  [pulmjən]   'insomnia'

As illustrated in (25), in Korean /nl/ surfaces as [ll]. According to the SCL, this change is expected, since the sequence violates the law. However, the same change applies to /ln/, while /lm/ (with the same sonority distance as /ln/) does not undergo any change. Thus it must be assumed within the syllable contact account that the modification of a nasal + liquid sequence and that of a liquid + nasal sequence result from different factors, although the same types of phonological changes occur in both types of sequence. Korean is not the only language that shows the same types of alternation in nasal/liquid sequences. As shown in (26), the same type of phonological change applies to both /nl/ and /ln/ sequences in Leti, Toba Batak, and Boraana Oromo.

(26) Modifications of nasal/liquid sequences
     a. Leti (van Engelenhoven 1995; Hume et al. 1997)
        /nl/ → [ll]    /aːna+leti/    →  [aːlleti]    'Alety (clan)'
        /ln/ → [ll]    /vulan/        →  [vulla]      'moon'

     b. Toba Batak (Nababan 1981)
        /nl/ → [ll]    /laɔn+laɔn/    →  [laɔllaɔn]   'eventually'
        /ln/ → [ll]    /bal+na/       →  [balla]      'his ball'

     c. Boraana Oromo (Stroomer 1995)
        /nl/ → [ll]    /hin+lool+a/   →  [hilloola]   'I/he will fight'
        /ln/ → [ll]    /kofl+na/      →  [kofalla]    'we smile'

Within the syllable contact account, the modification of /ln/ would have to be motivated by a factor other than syllable contact, although both /ln/ and /nl/ undergo the same type of phonological change. Seo (2003) proposes that the phonological processes found in modifications of heterosyllabic coda–onset sequences could be viewed as resulting from a segment contact phenomenon closely related to speech perception, rather than from the SCL (see chapter 98: speech perception and phonology). According to Seo, contrasts of weak perceptibility triggered by phonetic similarity between the two members of a cluster are a key factor in motivating alternations in the cluster. Thus it is expected that phonological modifications will apply when two consonants are perceptually similar to each other and occur in a sequence, regardless of the presence or absence of a syllable boundary and of the order of the two consonants.

4 Conclusion

In this chapter, I have reviewed four major issues concerning the SCL. First, differing opinions regarding the necessity of the SCL were discussed: while Clements (1990) claims that the SCL is rendered unnecessary by the Sonority Dispersion Principle, Davis (1998) and Gouskova (2004) argue, on the basis of the Kazakh data, that the SCL cannot be reduced to that principle. Second, the change in viewpoint on the nature of syllable contact was reviewed, and it was shown that a gradient view of syllable contact makes it possible to explain language-specific syllable contact patterns, while a categorical view does not. Third, after providing examples illustrating cross-linguistic variation in syllable contact, I introduced Gouskova's (2004) proposal of the SCL as a language-specific constraint on minimal sonority distance. Finally, following Seo (2003) and after pointing out the problems associated with the SCL, I argued that phonological modifications of heterosyllabic coda–onset sequences could be viewed as a segment contact phenomenon, closely related to speech perception.

REFERENCES

Auger, Julie. 2003. Pronominal clitics in Picard revisited. In Núñez-Cedeño et al. (2003), 3–20.
Baertsch, Karen. 2002. An optimality theoretic approach to syllable structure: The split margin hierarchy. Ph.D. dissertation, Indiana University.
Baruch, Kalmi. 1930. El judeo-español de Bosnia. Revista de Filología Española 17. 113–154.
Bat-El, Outi. 1996. Selecting the best of the worst: The grammar of Hebrew blends. Phonology 13. 283–328.
Bell, Alan & Joan B. Hooper. 1978. Issues and evidence in syllabic phonology. In Alan Bell & Joan B. Hooper (eds.) Syllables and segments, 3–22. Amsterdam: North-Holland.
Bradley, Travis G. 2007. Constraints on the metathesis of sonorant consonants in Judeo-Spanish. Probus 19. 171–207.
Braune, Wilhelm & Hans Eggers. 1987. Althochdeutsche Grammatik. 14th edn. Tübingen: Niemeyer.
Clements, G. N. 1987. Phonological feature representation and the description of intrusive stops. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23(2). 29–50.
Clements, G. N. 1990. The role of the sonority cycle in core syllabification. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 283–333. Cambridge: Cambridge University Press.
Davis, Stuart. 1998. Syllable contact in Optimality Theory. Korean Journal of Linguistics 23. 181–211.
Davis, Stuart & Seung-Hoon Shin. 1999. The syllable contact constraint in Korean: An optimality-theoretic analysis. Journal of East Asian Linguistics 8. 285–312.
Engelenhoven, Aone van. 1995. A description of the Leti language as spoken in Tutukei. Ph.D. dissertation, University of Leiden.
Gouskova, Maria. 2001. Falling sonority onsets, loanwords and syllable contact. Papers from the Annual Regional Meeting, Chicago Linguistic Society 37. 175–186.
Gouskova, Maria. 2004. Relational hierarchies in Optimality Theory: The case of syllable contact. Phonology 21. 201–250.
Hall, T. A. 2007. German glide formation and its theoretical consequences. The Linguistic Review 24. 1–31.
Ham, William. 1998. A new approach to an old problem: Gemination and constraint reranking in West Germanic. Journal of Comparative Germanic Linguistics 1. 225–262.
Harris, James W. 1983. Syllable structure and stress in Spanish: A nonlinear analysis. Cambridge, MA: MIT Press.
Hebert, Raymond & Nicholas Poppe. 1964. Kirghiz manual. The Hague: Mouton.
Holt, D. Eric. 1997. The role of the listener in the historical phonology of Spanish and Portuguese: An optimality-theoretic account. Ph.D. dissertation, Georgetown University (ROA-278).
Holt, D. Eric. 2004. Optimization of syllable contact in Old Spanish via the sporadic sound change metathesis. Probus 16. 43–61.
Hooper, Joan B. 1976. An introduction to natural generative phonology. New York: Academic Press.
Hudson, Grover. 1975. Suppletion in the representation of alternations. Ph.D. dissertation, University of California, Los Angeles.
Hudson, Grover. 1995. Phonology of Ethiopian languages. In John A. Goldsmith (ed.) The handbook of phonological theory, 782–797. Cambridge, MA & Oxford: Blackwell.
Hulst, Harry van der. 1984. Syllable structure and stress in Dutch. Dordrecht: Foris.
Hume, Elizabeth. 1999. The role of perceptibility in consonant/consonant metathesis. Proceedings of the West Coast Conference on Formal Linguistics 17. 293–307.
Hume, Elizabeth, Jennifer Muller & Aone van Engelenhoven. 1997. Non-moraic geminates in Leti. Phonology 14. 371–402.
Iverson, Gregory K. & Hyang-Sook Sohn. 1994. Liquid representation in Korean. In Young-Key Kim-Renaud (ed.) Theoretical issues in Korean linguistics, 1–19. Stanford: CSLI.
Jespersen, Otto. 1904. Lehrbuch der Phonetik. Leipzig & Berlin: Teubner.
Kasymova, Bella, Kurmanbek Toktonalijev & Asan Karybajev. 1991. Izucajem Kyrgyzskii jazyk. Frunze: Mektep.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Ladefoged, Peter. 1982. A course in phonetics. 2nd edn. New York: Harcourt Brace Jovanovich.
Lamouche, L. 1907. Quelques mots sur le dialecte espagnol parlé par les Israélites de Salonique. Romanische Forschungen 23. 969–991.
Martínez-Gil, Fernando. 2003. Consonant intrusion in heterosyllabic consonant–liquid clusters in Old Spanish and Old French: An optimality theoretical account. In Núñez-Cedeño et al. (2003), 39–58.
Moreno, Martino Mario. 1940. Manuale di Sidamo. Milan: Mondadori.
Morin, Yves-Charles. 1980. Morphologisation de l'épenthèse en ancien français. Canadian Journal of Linguistics 32. 365–375.
Murray, Robert W. & Theo Vennemann. 1983. Sound change and syllable structure in Germanic phonology. Language 59. 514–528.
Nababan, P. W. J. 1981. A grammar of Toba Batak. (Pacific Linguistics D37.) Canberra: Australian National University.
Núñez-Cedeño, Rafael, Luis López & Richard Cameron (eds.) 2003. A Romance perspective on language knowledge and use. Amsterdam & Philadelphia: John Benjamins.
Oostendorp, Marc van. 1999. Syllable structure in Esperanto as an instantiation of universal phonology. Esperantologio/Esperanto Studies 1. 52–80.
Parker, Steve. 2002. Quantifying the sonority hierarchy. Ph.D. dissertation, University of Massachusetts, Amherst.
Parker, Steve. 2008. Sound level protrusions as physical correlates of sonority. Journal of Phonetics 36. 55–90.
Picard, Marc. 1987. On the general properties of consonant epenthesis. Canadian Journal of Linguistics 32. 133–142.
Pons, Claudia. 2005. It is all downhill from here: The role of syllable contact in Romance languages. Paper presented at the 13th Manchester Phonology Meeting (ROA-802).
Rice, Keren. 1992. On deriving sonority: A structural account of sonority relationships. Phonology 9. 61–99.
Rice, Keren & Peter Avery. 1991. On the relationship between laterality and coronality. In Carole Paradis & Jean-François Prunet (eds.) The special status of coronals: Internal and external evidence, 101–124. San Diego: Academic Press.
Rose, Sharon. 2000. Epenthesis positioning and syllable contact in Chaha. Phonology 17. 397–425.
Seo, Misun. 2003. A segment contact account of the patterning of sonorants in consonant clusters. Ph.D. dissertation, Ohio State University.
Smolensky, Paul. 1995. On the structure of the constraint component Con of UG. (ROA-86.)
Stroomer, Harry. 1995. A grammar of Boraana Oromo (Kenya): Phonology, morphology, vocabularies. Cologne: Rüdiger Köppe Verlag.
Vennemann, Theo. 1988. Preference laws for syllable structure and the explanation of sound change: With special reference to German, Germanic, Italian, and Latin. Berlin: Mouton de Gruyter.
Walker, Douglas C. 1978. Epenthesis in Old French. Canadian Journal of Linguistics 23. 66–83.
Wetzels, W. Leo. 1985. The historical phonology of intrusive stops: A non-linear description. Canadian Journal of Linguistics 30. 285–333.
Zec, Draga. 2007. The syllable. In Paul de Lacy (ed.) The Cambridge handbook of phonology, 161–194. Cambridge: Cambridge University Press.

54 The Skeleton Péter Szigetvári

Only theories of phonology that attach significance to representations of phonological objects and, in addition, subscribe to an autosegmental version of these representations face the question of what the phonological skeleton looks like. Therefore, this chapter presupposes an autosegmental view of phonological representations. The motivation for the autosegmental model is the fact that the segmentation of the speech signal can never result in absolutely discrete segments. Here segmentation is taken to mean, practically, the conversion of the continuous speech signal into the alphabetical symbols of the IPA. Some of these symbols pertain to more than one segment: the stress mark, for example, to the syllable after it, and tones potentially to even longer stretches. Take the question you live by the ↑sea? Its last word, carrying the most prominent stress in the sentence, the tonic, might be transcribed as [↑siː]. In this transcription, the tone mark has a scope lasting all through the word (basically its only vowel): the pitch rises steadily until the end of the utterance. The same holds if the string after the tone mark is longer, as in you live by the | ↑seaside, Martin? It would take a very complicated mechanism to maintain that pitch was a property of individual segments, such that in some cases this rising pitch was realized on a single vowel, while in others it was split into low, higher, even higher, and highest pitch added to the several vowels following. Tone is clearly not an immanent property of a vowel; it is an ephemeral phenomenon (from the point of view of a vowel) controlled by syntactic and pragmatic factors. If so, it is useful to represent it separately from the rest of the properties of the sound string. Such autonomous sound properties came to be known as autosegments. If the phonological shape of an utterance is represented as a string of discrete feature bundles, the only option for representing the rising pitch in ↑sea is to include a feature [rising tone] (here R) in the set of features corresponding to the vowel, as in (1a). In ↑seaside, Martin, on the other hand, a set of features [low tone], [higher tone], [even higher tone], etc. (here 0H, 1H, etc.) has to be assigned to the vowels following the tonic, as in (1b).

(1) a. siːᴿ
    b. siː⁰ᴴ saɪ¹ᴴd mɑː²ᴴtɪ³ᴴn


One flaw of such representations is obvious: there appears to be nothing in common between the two rising tones, i.e. nothing to indicate their relationship. It is clear that the same tone is spread over the available vowels, but this is not shown in (1b). Not only tone but many other sound properties turn out to be similarly promiscuous, with the potential of simultaneously belonging to several segments, and being manipulable independently of the segment(s) they belong to. (For further discussion, see chapter 45: the representation of tone and chapter 14: autosegments.) The more sound properties extracted from their feature bundles, the fewer there remain. There are two widespread views on how many: according to one – historically the earlier – one feature, [syllabic], remains in the “bundle” (e.g. McCarthy 1979; Halle and Vergnaud 1980; Clements and Keyser 1983); according to the other, no feature remains (e.g. Levin 1985; Lowenstamm and Kaye 1986). The string of segmental positions thus vacated is called the phonological skeleton (a name suggested by Halle and Vergnaud 1980: 83) – or, alternatively, the timing tier or skeletal tier. The former type, in which the skeletal positions hold the feature [syllabic], is the CV skeleton (discussed in §2); the latter one, which is completely empty, is the X skeleton (discussed in §3). A non-segment-based framework involving only syllables and moras is introduced in §4. I will then argue that there is a way of incorporating moras in the old CV skeleton with clear advantages over the moraic framework (§5). To begin with, let us examine the types of relation that may exist between skeletal positions and phonetic features associable to them.

1 Melody–skeleton relations

Skeletal positions represent the presence of a segment, and serve as an anchoring site for the phonetic properties associated with that portion of the speech signal. If the relationship of feature bundles – referred to as melody, again following Halle and Vergnaud (1980), among many others – and skeletal positions were always one to one, the latter would be superfluous. But, as we have already seen in the case of tone, this is not so. Let us take the non-trivial options one by one.

1.1 One-to-many relations

The standard textbook examples for this type of skeleton–melody relationship are affricates and prenasalized plosives (see chapter 29: secondary and double articulation). With respect to the former, there has been much debate among phonologists about the feature [±delayed release] (and the marginal oppositions it creates), which Chomsky and Halle (1968) introduced to distinguish affricates from plosives. An alternative approach, that affricates are bisegmental (discussed by Gimson 1989: 172f. and Roca 1994: 3, among others), as suggested by the IPA symbols used to represent them, is undermined by many facts. In most cases, the distribution of affricates shows that they are not clusters, but single segments. It is even possible that an affricate in a system does not contrast with a homorganic fricative (e.g. Castilian Spanish has [tʃ], but not [ʃ]), rendering a cluster analysis improbable.


The separation of quantity (skeleton) and quality (melody) offers an opportunity for handling the quantitatively simplex, but qualitatively complex, affricates in an intuitive way, as so-called contour segments. (2) depicts the view of the affricate [ts] along these lines. (The skeletal slot is represented as "x", but this is not meant to indicate a standpoint in the CV vs. X skeleton debate.)

(2) An affricate as a contour segment

        x
       / \
      t   s

The representation in (2), however, incorporates a misconception, namely, that the melody of segments, without the slot they attach to, forms some kind of unit, two of which are here associated with a single skeletal slot. In reality, the symbols "t" and "s" above have no theoretical status. What exist in an autosegmental framework (or, for that matter, in any other phonological theory since the middle of the twentieth century) are features, many of which occur in both parts of the affricate (e.g. place of articulation, laryngeal properties). Another difficulty with the contour model of affricates lies in the interpretation of autosegmental representations. Any melody linked to a slot of the skeleton – also known as the timing tier – is interpreted simultaneously. Temporal sequencing is managed by the skeleton, i.e. what is linked to an earlier slot is interpreted earlier than what is linked to a later slot. Associating the stop part of the affricate to the left leg of the contour segment and the fricative part to the right is then no more than a graphical trick, which cannot have any realizational consequences. The standard solution to this problem, involving root nodes, is discussed in §4. As Clements (1999) and Lin (chapter 16: affricates) argue, affricates are best thought of as non-contour segments (strident stops), as Jakobson et al. (1952) proposed. It seems then that we are left without one-to-many relations between the skeleton and melodic material. In fact, such relations are the most common occurrence in representations, since it is not segments but features that are associated with the slots of the skeleton. Thus most segments embody the one-to-many relation, as the partial representations of two very common segments, [d] and [ɑ], show in (3).

(3) Partial autosegmental representations

    a. [d]           x
                   / | \  …
          [+voiced] [+coronal] [−continuant]

    b. [ɑ]           x
                   / | \  …
            [+back] [+low] [−round]


1.2 Many-to-one relations

The long–short contrast of a vowel could be encoded in a feature [long], so that the long vowel is [+long] and the short one [−long]. It is evident, however, that this is not an adequate way of modeling length contrasts. Vowel length (or consonant length, for that matter, on which see chapter 37: geminates) is not a property like vowel height (or the voicing of obstruents): it does not harmonize or trigger or undergo assimilation of any type (see chapter 20: the representation of vowel length). Furthermore, changes in segmental length are usually unlike common assimilatory changes. Take, for example, the Rhythmic Law of Slovak, which shortens a suffixal long vowel after a long vowel in the stem. The agentive -ník (the acute accent marks length) inherently contains a long vowel (e.g. rol-ník [rolɲiːk] 'farmer'), which shortens when added to a long-vowelled stem (e.g. stráž-nik [straːʒɲik] 'guard'; Kenstowicz and Rubach 1987). The rule could be categorized as a dissimilatory process. What is conspicuously missing in languages is any assimilation of this type: i.e. changes where a short vowel would lengthen in the vicinity of a long vowel, and, crucially, because of that long vowel, or where a long vowel would shorten purely because of the shortness of a neighboring vowel. An even more telling phenomenon is compensatory lengthening (see also chapter 64: compensatory lengthening).1 A synchronic comparison of the forms of the 1st singular copula in two varieties of Ancient Greek, Attic [eːmi] and Aeolic [emːi] 'I am', suggests a simple shift in the host of the alleged feature [+long]. In light of the reconstructed Proto-Greek etymon *[esmi], however, a different analysis is called for. The loss of the [s] triggers the lengthening of one of the neighboring segments, the preceding vowel in Attic and the following consonant in Aeolic. If length were encoded by a feature, the change could only be described by a pair of rules applying simultaneously, one deleting the coda consonant, the other lengthening the segment next to the deletion site. It is clear that the two rules are interrelated: spontaneous open syllable lengthening is not attested in Attic, nor is intervocalic gemination in Aeolic – these changes only occur in tandem with the loss of the coda consonant. It is therefore difficult to understand why these two rules so commonly co-occur. If the quantity of segments is stored separately from their quality, this process, and any similar one, has a very neat explanation: it is only the quality (melody) of the coda [s] that is lost (more precisely, only its association with the skeleton); its place, that is, the time it occupied in the string of sounds, is retained (cf. e.g. Ingria 1980; Steriade 1982; Hock 1986; Hayes 1989). It is this empty place that one of the neighboring segments fills in, as shown in (4).

(4) Compensatory lengthening: The stability of the skeleton

    a. Attic                     b. Aeolic

       x   x   x   x                x   x   x   x
       |  /    |   |                |    \  |   |
       e   s   m   i                e   s   m   i

    (The [s] is delinked from its slot; the preceding vowel (Attic) or the following consonant (Aeolic) spreads into the vacated position.)

1 Much of the literature limits the term compensatory lengthening to cases involving the lengthening of a vowel. The lengthening of a consonant is called inverse compensatory lengthening by Hayes (1989: 280–281). Here I will refer to both processes by the same name.
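The slot-preserving analysis in (4) can also be given a minimal computational sketch. This is my own illustration, assuming only what the text states: that skeletal slots survive the delinking of their melody and can be filled by a neighbor.

```python
# Sketch of compensatory lengthening over a bare skeleton: deleting the
# melody of /s/ vacates its slot, which a neighbour's melody then spreads into.
# (Illustrative only; the "spread" direction distinguishes Attic from Aeolic.)

def delink_and_spread(slots, target_index, direction):
    """slots: list of melodies, one per skeletal slot (order = timing)."""
    slots = list(slots)
    slots[target_index] = None                    # /s/ loses its melody...
    donor = target_index - 1 if direction == "left" else target_index + 1
    slots[target_index] = slots[donor]            # ...and a neighbour fills in
    return slots

esmi = ["e", "s", "m", "i"]
print(delink_and_spread(esmi, 1, "left"))    # ['e', 'e', 'm', 'i'] = Attic e:mi
print(delink_and_spread(esmi, 1, "right"))   # ['e', 'm', 'm', 'i'] = Aeolic em:i
```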


While cases like the above could also be analyzed as the total assimilation of the [s] to the preceding vowel or the following consonant, there are more complicated types of compensatory lengthening for which such an analysis is not at all viable. Cases in point include Middle English tale [talə] > [taːl] (Minkova 1982), Old Church Slavonic bogъ > pre-Serbo-Croatian bôg [bóog] 'god', bobъ > bób [boób] 'bean' (Hock 1986: 435), and Old Hungarian [hida] > híd [hiːd] 'bridge', [levele] > [leveːl] (Modern Hungarian levél [leveːl] 'leaf', with subsequent closing of the second vowel; E. Abaffy 2003: 331). The English case is controversial (see Lahiri and Dresher 1999 for a different analysis), but for the others there is evidence that they are not cases of open syllable lengthening followed by apocope. In Slavonic the original bisyllabic stress pattern is preserved on the long vowel of the monosyllabic forms. In Hungarian no lengthening takes place before suffixes that retain the stem-final vowel: Modern Hungarian hidam [hidɒm] 'my bridge', levelek [levelek] 'leaves'. Lengthening due to a minimal word constraint is also excluded by the last example: the process takes place in monosyllabic and polysyllabic words alike. The bipositional analysis of long vowels is also made likely by the fact that they behave similarly to "vowel clusters," i.e. diphthongs. In English, for example, neither category occurs before non-coronal consonant clusters, and both occur word-finally, unlike short monophthongs (Fudge 1969: 272f.; Harris 1994: 37; Gussmann 2002: 20–23; see Prince 1984 for the same conclusion for both vowels and consonants in Finnish). Accordingly, there is a general consensus that long vowels ought to be represented as in (5a), and long (i.e. geminate) consonants as in (5c). The representation of diphthongs and other consonant clusters is given in (5b) and (5d), for comparison.

(5) The autosegmental representation of vowel and consonant clusters

    a.  x   x      b.  x   x      c.  x   x      d.  x   x
         \ /           |   |           \ /           |   |
          ɑ            ɑ   u            t            n   t

It is not only complete segments that may be linked to more than one skeletal position. The standard situation, in fact, is that features (autosegments) are multiply linked. Take, for example, the Hungarian word különbség [ˈkylømpʃeːg] 'difference', depicted in (6). (The features only serve illustrative purposes; their exact identity and location is irrelevant here.)

(6) Multiply linked features in the representation of [ˈkylømpʃeːg]

    [rounded]         [front]         [mid, unrounded]
      x   x   x   x   x   x   x   x   x   x
    [nasal] [voiceless] [voiceless] [voiced] [velar]
    [coronal, lateral] [palatal] [velar] [labial]

Chaotic as it seems, the diagram in (6) does not contain all the relevant features specifying the segmental content of the string [ˈkylømpʃeːg] – manner of articulation features, for example, are lacking. Nevertheless, it can clearly be seen that it is more common for a feature to be associated with several skeletal slots than to be associated exclusively with one. In (6) this is because of voicing and place of articulation assimilations, vowel nasalization, and consonant fronting, as well as vowel harmony. Even when feature sharing is not a result of such phonological processes, i.e. in monomorphemic items, the multiple association of a single feature is dictated by the Obligatory Contour Principle (Leben 1973; McCarthy 1986; Odden 1986).

1.3 One-to-zero and zero-to-one relations

As we have seen, many-to-one and one-to-many relations between the skeleton and melody are very common. Two further options are discussed in this section. It is possible that a skeletal position is not associated with any melodic material. The opposite case may also occur: features unlinked to any point on the skeleton. French liaison exemplifies both of these possibilities (see chapter 112: french liaison). In this phenomenon, a word-final consonant is pronounced when the next word begins with a vowel, but not when it begins with a consonant. (The syntactic conditions on liaison need not concern us here.) Thus in the phrase petit garçon 'little boy' the first element ends in a vowel ([pəti garsõ]), while in petit enfant 'little child' a [t] is pronounced ([pətit ɑ̃fɑ̃]). According to one analysis (Prunet 1987: 226), petit comes with only four skeletal slots, but five segments; enfant, on the other hand, has an extra skeletal slot – it begins with an initial consonantal slot which is empty. The situation is shown in (7).

(7) Liaison

    [x   x   x   x]          [x   x   x   x]
     |   |   |   |                |   |   |
     p   ə   t   i     t          ɑ̃   f   ɑ̃

The [t] at the end of petit is not associated to the skeleton; it is said to be floating. Floating melody fails to be pronounced unless it is able to associate to the skeleton. Vowel-initial words supply an empty skeletal position that the floating melody can associate to. The floating [t] at the end of petit must be lexically determined: there are other liaison consonants besides [t], whose identity is unpredictable (e.g. gros enfant [groz ɑ̃fɑ̃] 'fat child', mon enfant [mõn ɑ̃fɑ̃] 'my child', gentil enfant [ʒɑ̃tij ɑ̃fɑ̃] 'nice child', long article [lõg artikl] 'long article', etc., where the consonant before the space appears only if the next word begins with a vowel). Therefore this consonant must be included in the lexical representation. It is also not unjustified to suppose that vowel-initial words carry an empty skeletal slot at their left edge. It is true of all languages that at least some words (and syllables) begin with a consonant (chapter 55: onsets). For some languages this is not optional, but obligatory; crucially, though, there are no languages where an onset is not possible. One may argue that a syllable-initial consonantal position is in fact obligatory in all languages, the optionality lying in whether this position may or may not be left empty (see e.g. Kaye 1989: 134). Thus consonant-initial words do not carry an empty skeletal slot to their left, while vowel-initial words do, and as a result, the latter can host the floating consonantal melody at the end of the preceding word. Apparently, even languages that allow syllable-initial consonantal positions to be empty prefer them to be filled.
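The docking logic just described can be sketched in a few lines. This is my own illustrative encoding of the analysis in (7), not a formalization from the chapter; None stands for an empty skeletal slot.

```python
# Sketch of the liaison analysis in (7): a floating word-final consonant is
# pronounced only by docking onto an empty initial slot of the next word.
# (Illustrative encoding, not from the chapter; None marks an empty slot.)

def join(word1_slots, floating, word2_slots):
    """Concatenate two words; dock the floating melody if a slot is free."""
    if floating and word2_slots and word2_slots[0] is None:
        word2_slots = [floating] + word2_slots[1:]   # docking to the empty slot
    return [m for m in word1_slots + word2_slots if m is not None]

petit  = ["p", "ə", "t", "i"]              # four slots, floating /t/ besides
enfant = [None, "ɑ̃", "f", "ɑ̃"]            # vowel-initial: empty first slot
garcon = ["g", "a", "r", "s", "õ"]         # consonant-initial: no empty slot

print("".join(join(petit, "t", enfant)))   # pətitɑ̃fɑ̃  (liaison [t] surfaces)
print("".join(join(petit, "t", garcon)))   # pətigarsõ  (floating [t] silent)
```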


Hypothesizing that there is an empty skeletal position between two vowels in hiatus, and that languages make an effort to fill it, also explains the common phenomenon of hiatus filling (chapter 61: hiatus resolution). Unless a language manages to get rid of this consonantal position (often together with one of the neighboring vowels), an intervocalic consonantal position is filled by some melody associating to it from one of the vowels. (8) illustrates this with English skier and Hungarian síel [ʃiːel] 's/he skis'.

(8) Hiatus filling

    a.  x   x   x   x   x   x        b.  x   x   x   x   x   x
        |   |    \ /        |            |    \ /        |   |
        s   k     i         ə            ʃ     i         e   l

The hiatus between [iː] and [ə] or [e] is filled by the melody of the first vowel, resulting in the forms [skiːjə] (Gimson 1989: 215, 2001: 213) and [ʃiːjel] (Siptár and Törkenczy 2000: 283). The possibility of vocalic positions being empty is considerably more controversial; this issue will be taken up in §5.
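As a rough illustration (my own sketch, not the chapter’s machinery), the filling of the empty intervocalic C slot can be modeled as glide insertion driven by the melody of the preceding vowel:

# Toy sketch: the empty C position between two vowels is filled by
# spreading the preceding vowel's melody, surfacing as a homorganic glide.

def fill_hiatus(v1, v2):
    glide = {"i": "j", "u": "w"}.get(v1.rstrip("ː")[-1], "")
    return v1 + glide + v2

print("sk" + fill_hiatus("iː", "ə"))    # skiːjə  (skier)
print("ʃ" + fill_hiatus("iː", "el"))    # ʃiːjel  (síel)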

2

The CV skeleton

The notion of the CV tier was originally developed for the analysis of the nonconcatenative morphology of Classical Arabic by McCarthy (1979, 1981). As in other Semitic languages, a large number of morphological categories are not expressed by linking morphemes after one another, but by fusing individually unpronounceable components into one. A similar, but much less elaborate case is the ablaut found in Germanic languages, e.g. English sing, sang, sung, and song, where the consonants carry the lexical meaning and the vowel the grammatical category. (See also chapter 105: tier segregation and chapter 108: semitic templates.) Paradigms in Arabic are classified into groups traditionally called conjugations – or, as McCarthy refers to them, binyans. The prime phonological property of a binyan is the order in which consonants and vowels are arranged. Roots of three (sometimes two or four) consonants contribute a lexical field to the meaning; the vowels are often responsible for grammatical categories like tense and voice. A portion of McCarthy’s (1979: 244) table depicting the forms for the root /ktb/ ‘to write’ is given in (9).

(9)  Some forms of /ktb/
                 I        II        III      ...   IX
     perf act    katab    kattab    kaatab         ktabab
     perf pass   kutib    kuttib    kuutib

The CV skeletons of the first three binyans are CVCVC, CVCCVC, and CVVCVC, respectively; that of binyan IX is CCVCVC. The root consonants and the vowels


supplied by the grammatical category are mapped onto this skeleton more or less according to the association conventions elaborated by Goldsmith (1976). Three cases are shown in (10).

(10)  a.  C V C V C        b.  C V C C V C        c.  C C V C V C
          k a t a b            k a t t a b            k t a b a b
      (the root consonants and the vowel [a] occupy separate
      autosegmental tiers in the original; in (b) the [t] and in (c) the
      final [b] are doubly linked)

In (10a), the consonants are linked to the C slots of the skeleton, one by one. It is vital that the consonantal and vocalic skeletal slots be distinguished, since the linking of the root consonants and the vowel(s) can be done as required only thus. The case of (10c) shows that association takes place from left to right: with three consonants to four positions, the last consonant is linked to the surplus position ([ktabab]). (10b) poses a problem in this respect: either association is idiosyncratically edge-in in this case, or some extra mechanism is needed. McCarthy (1979: 256) uses brute force here: he assumes the expected *[katbab] in the first round, with a later rule delinking the first linkage of [b] ([katCab], where C represents the slot from which the melody of the [b] is delinked). This is automatically followed by the spreading of the [t] ([kattab]), much like an instance of compensatory lengthening (see §1.2). A slightly less powerful solution is proposed by Lowenstamm and Kaye (1986: 117–118), who claim that association to the first position is inhibited from the start, thus each consonant of the root occupies its final position in the first round, as shown in (11a). The resulting configuration (empty C followed by filled C) is interpreted as a geminate, as in (11b). I have adapted the original to the previous diagrams of this chapter to aid comparison. We will see below (§3) that Lowenstamm and Kaye use a significantly different scheme. (11)

(11)  The mapping of a geminate ([kattab])
      a.  C V C C V C        b.  C V C C V C
          k a · t a b            k a t t a b
      (in (a), association to the first of the two medial C slots is
      inhibited, leaving it empty; in (b), the resulting empty C + filled
      C configuration is interpreted as the geminate [tt])

Note that McCarthy’s second-round-spreading solution cannot apply after Lowenstamm and Kaye’s first-round blocking, since this would yield the unattested form *[kaktab].2

2 Neither analysis gives a reason for delinking or inhibiting the association of the consonant encircled in (11a), so that the unattested form *[katbab] is avoided. Following Hoberman (1988), we may assume that long-distance geminates (those separated by a vowel) are more marked (their inhibition is ranked higher) than local ones and that word-initial geminates are even more marked. This explains why [kattab] is preferred to *[katbab], but [ktabab] to *[kkatab].
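How much work plain left-to-right association does – and exactly where it fails – can be seen in a toy sketch (my own simplification of Goldsmith-style association, not McCarthy’s formal system):

# Toy left-to-right association of a consonantal root and a vowel melody
# to a CV template; a leftover C slot is filled by spreading the last
# root consonant, as in binyan IX.

def associate(template, root, vowel):
    out, c = [], 0
    for slot in template:
        if slot == "C":
            out.append(root[min(c, len(root) - 1)])  # spread final C if root exhausted
            c += 1
        else:
            out.append(vowel)
    return "".join(out)

print(associate("CVCVC", "ktb", "a"))    # katab   (binyan I)
print(associate("CVVCVC", "ktb", "a"))   # kaatab  (binyan III)
print(associate("CCVCVC", "ktb", "a"))   # ktabab  (binyan IX)
print(associate("CVCCVC", "ktb", "a"))   # katbab  (binyan II, but unattested!)

The last line reproduces exactly the overgeneration discussed above: for binyan II, plain left-to-right mapping yields *[katbab], the form that McCarthy’s delinking rule and Lowenstamm and Kaye’s blocking are designed to avoid.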


In McCarthy’s analysis, the CV skeleton of Arabic words is a morpheme (a prosodic template, in his words), identifying the binyan of the word form, contributing to the semantic elements of the specific binyan (as if the Attic–Aeolic difference between [eːmi] and [emːi] represented a difference in morphological categories). Clements and Keyser (1983: 11) apply the CV skeleton as a universal phonological device, the mediator between the syllable and autosegments, its two types of members, C and V representing “the useful but ill-defined notion of ‘phonological segment’.” The C for them is an anchor for anything [−syllabic] and the V for [+syllabic] segments. Prince (1984) shows that such impoverished representations adequately capture the templates of, for example, verbal person endings in Finnish, which are -C in the singular and -CC[e] in the plural, with the melody [m] in the 1st, and [t] in the 2nd person. The surface forms are thus 1st singular [-n] (by an independently motivated rule turning [m] to [n] word-finally), 2nd singular [-t], 1st plural [-mme] and 2nd plural [-tte].

3

The X skeleton

Simultaneously with the development of theories of the CV skeleton there evolved an alternative view that considered the distinction of C and V slots redundant, and argued that skeletal slots are uniform, usually marked with dots or x’s (e.g. Lowenstamm and Kaye 1986; Levin 1985). Proponents of the X skeleton have put forward a number of arguments against skeletal positions predestined for syllabicity.

3.1

Reduplication in Mokilese

Levin (1985: 35–41) shows some unusual cases of reduplication from Mokilese, which, she believes, are analyzable only with an X skeleton. The point is that the reduplicant is a copy of the first three segments of the first syllable of the stem, irrespective of their being consonants or vowels. Levin argues that the template of the reduplicant must therefore also lack this information. The relevant data are given in (12).

(12)  Mokilese reduplication
          stem     progressive
      a.  pɔdok    pɔdpɔdok     ‘plant’
      b.  kasɔ     kaskasɔ      ‘throw’
      c.  pa       paːpa        ‘weave’
      d.  wia      wiːwia       ‘do’
      e.  caːk     caːcaːk      ‘bend’
      f.  onop     onnonop      ‘prepare’
      g.  andip    andandip     ‘spit’

Levin contends that the reduplicant must be a totally specificationless skeleton, σ[xxx], to which the copy of the melody of the stem is associated following universal conventions. The case of (12a), (12b), and (12g) is now straightforward. When the stem is too short, as in [pa], (12c), the last melody is multiply linked. The fact that the reduplicant is a single syllable inhibits the second vowels of [wia] and [onop] from associating to the skeleton in (12d) and (12f). As a result, the preceding


vowel or consonant is lengthened again. There is a problem with the stem [caːk] though, (12e). The melody of the stem comprises three segments, |c|, |a|, and |k|, and the expected reduplicated form is therefore *[cakcaːk] rather than the attested [caːcaːk]. Levin has to stipulate that multiple melodic associations like that of the long [aː] are transferred in reduplication. A further problem of this analysis lies in the interpretation of the reduplicant: it is specified as a syllable, but it is not one in [on.n-onop] or [an.d-andip] (where the hyphen indicates the boundary between the reduplicant and the stem), since a word-internal pre-vocalic consonant forms a syllable with the following vowel, as the universal onset maximalization principle requires. Yet the constraint on the reduplicant being a syllable cannot be relaxed, because if the first three segments were copied without reference to a syllable, undesirable results like *[wi.a-wia] or *[o.no-onop] would emerge. In fact, Moravcsik says that in her survey of reduplication types she has never come across formulations like “reduplicate the first two [or, in our case, three – szp] segments (regardless of whether they are consonants or vowels)” (1978: 307–308). If in a language reduplication copies the first CVC part of the stem for consonant-initial stems, it will copy VC (not VCV) of vowel-initial stems.

Actually, a simpler account is available for the data in (12). Theoretically it is no more plausible than Levin’s, but needs fewer stipulations, and thus invalidates her analysis as an argument for the X skeleton. Suppose, as in §1.3 above, that syllable onsets are always represented on the skeleton, either as a filled or as an empty C position. (This immediately explains Moravcsik’s observation.) The reduplicant then is a copy of the first CVC part of the stem, melody and skeleton included. The cases of (12a–c) are obvious. The third slot for (12c) is automatically filled by the vowel of the reduplicant, just as for Levin. The objection (also put forward by Broselow 1995: 184) that vowels cannot spread onto a consonantal slot is mistaken: a C slot is not meant to host consonants exclusively, but non-syllabic segments. If a syllable has one syllabic segment, then a long vowel is hosted by a VC sequence on the skeleton, as Clements and Keyser (1983: 12) argue. In (12d), the empty intervocalic C position is involved in the copying, but, as it is preconsonantal in the reduplicant, it serves as an anchor for the preceding vowel, unlike in the stem, where it is pre-vocalic. This is shown in (13a). Vowel-initial stems blindly copy the initial empty C position, and so only the first two “real” segments form the reduplicant. (12g) seems to cause a problem now: here the reduplicant appears to be [and-], i.e. VCC, instead of the expected VC. Raimy (1999) suggests an obvious solution: if [nd] is analyzed as [ⁿdd], a geminate pre-nasalized stop, then the situation is identical to that in (12f). The stem-initial empty C must be filled to satisfy onset maximalization: it is impossible to have a coda consonant followed by an empty onset. This is illustrated in (13b). (The reduplicant and the stem are enclosed in brackets for easier identification.)

(13)  Reduplication and empty onsets
      a.  [C V C] [C V C V]        b.  [C V C] [C V C V C]
           w i ·    w i · a             · o n    · o n o p
      (· = an empty position; in (a) the copied empty C is preconsonantal
      and anchors the preceding vowel, [wiː]; in (b) the stem-initial
      empty C is filled by the preceding [n], [onn])

In fact, Levin herself suggests the empty-C-slot analysis as an escape hatch for the CV skeleton, but rejects the idea on the grounds that the vowel of the causative prefix [ka-] does not lengthen when prefixed to vowel-initial stems (e.g. [ka+adanki] > [kaːdanki] ‘to name’, [ka+uruːr] > [kauruːr] ‘to be funny’). Vowels do not usually lengthen by filling a pre-vocalic empty C position (cf. Hayes 1989: 281); what is more, it is hard to expect a long vowel or a diphthong to further lengthen. The conclusive test, the prefix [ak-], which would be expected to geminate its consonant before a vowel-initial stem if there were an empty consonantal slot, “was only found prefixed to C-initial stems” (Levin 1985: 40). We can conclude that the hypothesis that vowel-initial stems carry an empty consonantal position at their left edge is not refuted by Levin’s data.
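The CVC-copy account can be made concrete with a small sketch (my own encoding, not the chapter’s notation: a stem is a list of melodies aligned with a strictly alternating C V C V … skeleton, the empty string marking an empty slot):

# Toy sketch of CVC-copy reduplication with onsets always present on the
# skeleton. The reduplicant copies the first C V C of the stem; an empty
# or missing third slot is filled by the preceding vowel, and a
# stem-initial empty onset after a coda is filled by that coda.

def reduplicate(stem):
    red = (stem + [""] * 3)[:3]        # copy the first C V C of the stem
    if red[2] == "":                   # empty/unfilled third slot:
        red[2] = red[1]                # the preceding vowel spreads (paːpa, wiːwia)
    out = red + list(stem)
    if out[3] == "":                   # stem-initial empty onset after a coda:
        out[3] = red[2]                # filled by the preceding consonant (onnonop)
    return "".join(out)

print(reduplicate(["p", "ɔ", "d", "o", "k"]))    # pɔdpɔdok
print(reduplicate(["p", "a"]))                   # paapa
print(reduplicate(["w", "i", "", "a"]))          # wiiwia
print(reduplicate(["", "o", "n", "o", "p"]))     # onnonop

Doubled vowels in the output stand for length; [andip] would work like [onop] once [nd] is encoded as a single geminate prenasalized melody, as in Raimy’s proposal.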

3.2

Redundancy of C and V

A better argument against CV skeletons is that specifying syllabicity on the skeleton is redundant if the same information can be read off higher prosodic structures, like syllabic constituents, especially the nucleus. Lowenstamm and Kaye (1986) argue that simple syllable trees, like those in (14), adequately define the slots of the skeleton.

(14)  Syllable trees
      a. a CV syllable   b. a CVC syllable   c. a CVV syllable
      (bare trees, unlabeled except for the nucleus node N, which
      distinguishes (b) from (c))

They suggest that labeling the trees is unnecessary since this information also follows from the configuration. Nevertheless, some minimal labeling (N, i.e. “nucleus”) is necessary to distinguish CVC, (14b), and CVV, (14c), syllables – consider, for example, the Arabic templates of binyans II ([kattab]) and III ([kaatab]) (see (9)). Lowenstamm and Kaye (1986) raise the issue of whether the skeleton is an independent level in phonological representations, or merely a projection of higher prosodic structure, in particular syllable structure. A consequence of this assumption is that the nodes representing syllabic constituents (like onset or nucleus) cannot be distinguished from the skeletal position(s) that they dominate. That is, it is impossible to conceive of skeletal positions not dominated by higher prosodic structure, or of a syllabic constituent that does not dominate a skeletal position.

Charette utilizes “pointless onsets” in an analysis of h-aspiré words in French (1991: 90f). She claims that “normal” vowel-initial words begin with an onset that does not dominate any skeletal position, while those which contain h-aspiré – words that phonetically begin with a vowel, but phonologically behave as consonant-initial – begin with a regular “pointful” onset, dominating a skeletal position which is not associated with any melody. The vowel of the definite article is unpronounced before vowel-initial words, but it is pronounced before consonant- and h-aspiré-initial words. (15) illustrates the first part of the two cases using Charette’s examples: l’amie [lami] ‘the girlfriend’ and la hache [la aʃ] ‘the axe’.

(15)  Two types of empty onset
      a.  O N O N        b.  O N O N
          x x   x            x x x x
          l a   a            l a   a
      (in (a) the second onset dominates no skeletal point; in (b) it
      dominates a point that has no melody)


According to Charette’s analysis, the vowel of the article is deleted before a pointless onset as a result of the Obligatory Contour Principle, since the two nuclei are “adjacent” if the onset between them lacks a skeletal slot, as in (15a). When such an onset is linked to a skeletal slot, it inhibits the deletion process, as in (15b).

This analysis faces difficulties on several counts. On the one hand, the Obligatory Contour Principle controls the appearance of identical melodic elements linked to adjacent skeletal positions. The nodes labeled “nucleus” do not qualify as such. On the other hand, liaison calls for the opposite representation of the two types of vowel-initial words. As mentioned in §1.3, some morphemes that are vowel-final preconsonantally exhibit a consonant when followed by a vowel-initial word. The plural of the definite article is an example: les amies [lez ami] ‘the girlfriends’ vs. les haches [le aʃ] ‘the axes’ (recall that h-aspiré-initial words behave as if they were consonant-initial). The final [z] of the cliticized article is pronounced when there is no skeletal position for it to anchor to, and it is not pronounced when there is one, i.e. without further stipulations, Charette’s analysis predicts just the opposite of the attested liaison facts. The impoverished structures of (14) are also impossible if labels like “onset” and “nucleus” are treated separately from what they label: the skeletal slots.

To summarize, there is no compelling reason to distinguish skeletal points and the syllabic constituents containing them. Allowing pointless constituents or constituentless skeletal points makes unnecessary contrasts possible. But then, if prosodic nodes like onset and nucleus are not distinct from skeletal slots, then skeletal slots do carry the basic information of syllabic status: such a skeleton does contain Cs and Vs, irrespective of whether this is penciled on paper as Cs and Vs, Os and Ns, or something else. The two levels must, nevertheless, be kept distinct if more than one skeletal slot can be associated with a single syllabic constituent, i.e. if branching onsets and nuclei are posited. §5 discusses a model where even these are claimed not to exist.

4

Moras

As we have seen in the case of Mokilese reduplication (§3.1), preconsonantal empty C positions are available as targets for the spreading of a preceding vowel; intervocalic ones are not. In many languages, a similar asymmetry characterizes these two consonantal positions. Stress processes, for example, may treat a preconsonantal consonant as being on a par with vowels, but not pre-vocalic consonants.3 Hock (1986) argues that the notion of mora must be (re)introduced into phonological theory. The mora has been around in linguistic discussions for at least two centuries (see Allen 1974: 100); it is its theoretical status that is at issue here (see also chapter 33: syllable-internal structure). Hock’s proposal is to introduce

3 It is common at this point to offer a disclaimer with respect to Everett and Everett (1984) (who claim that Pirahã is different in this respect) or to Davis (1988) (who collects cases where the quality of the onset seems to play a role in stress assignment). However, as Hayes states: “I believe that the ability of Moraic Theory to account for wide-spread patterns of markedness should be given more weight in assessing the evidence than any particular awkwardness in the analysis of individual languages” (1989: 303). This is probably true for any theory. Furthermore, some of the very few onset-sensitive systems have been shown to be re-analyzable so that they are not onset-sensitive (Goedemans 1996; Takahashi 1999).


the mora as an autosegment, rather similar to tones: in his proposal tones are indeed linked to moras. If compensatory lengthening could only lengthen a vowel in compensation for the loss of a tautosyllabic consonant, the “standard” CV or X skeleton would be fully capable of dealing with the process. We have seen, however, that compensatory lengthening also occurs at a distance: the loss of a vowel in the following syllable may lead to lengthening across an intervening onset consonant. Relevant cases are Greek glide loss (e.g. Proto-Greek [odwos] > Ionic [oːdos] ‘threshold’; Steriade 1982: 118) and Middle English schwa apocope (e.g. [talə] > [taːl] ‘tale’; Minkova 1982). In both cases the delinking melody and the spreading vowel are separated by a consonant that apparently remains linked to the skeleton.

(16)  Problematic cases of compensatory lengthening
      a.  V C C V C        b.  C V C V        c.  C V C V
          o d w o s            t a l ə            t a l ə
      ((a) [odwos] > [oːdos]; (b) and (c) show two conceivable routes for
      [talə] > [taːl])

Actually, as (16a, b) show, the consonant standing in the way of compensatory lengthening is shifted to the right by one slot in both cases. This process, proposed by Steriade (1982: 126–128), is referred to as “double flop” by Hayes (1989: 265–267). The Greek case in (16a) can be explained by universal principles: the loss of [w] leaves us with an empty onset (provided that the syllabification is [od.wos]). The resulting [od.os] violates the onset maximalization principle, thus resyllabification ensues. But the skeletal position does not resyllabify, since there is an empty onset slot, recently vacated by [w]. It is to this slot that the [d] associates, leaving its original slot empty, triggering the lengthening of the preceding vowel. The lengthening triggered by apocope, exemplified by Middle English [talə] > [taːl] in (16b), is more problematic for a theory which lacks moras. The mechanism appears to be the same as in (16a), but now the consonant before the disappearing word-final vowel is supposed to flop to a vocalic position, to the nuclear slot of the last syllable. In addition, the position it leaves is not one that should cause lengthening of the preceding vowel. The alternative, whereby the vowel spreads out immediately to the vacated vocalic slot, as in (16c), is even worse, as it violates the axiomatic constraint inhibiting the crossing of association lines.4 In fact, with both CV and X skeletons it is hard to explain why the spreading of a vowel to some consonantal slots should cause lengthening, while in other cases an apparently similar vowel spreading does not. For example, the empty onset in Hungarian pia [pijɒ] ‘drink’ is filled by the spreading of the melody of the preceding vowel, as in (8).5 Yet the result is not a long vowel, which it is in film ‘film’, for which the pronunciation [fiːm] is possible (Siptár and Törkenczy 2000: 281) (cf. Hayes 1989: 281–283).

4 This problem could be avoided by placing vowels and consonants on separate autosegmental planes (as in (10) and (11)); however, such a modification would loosen the theory beyond desirable limits: we would now find it hard to explain why so many processes deemed possible by the framework never occur.
5 While it may be argued that pia is underlyingly [pijɒ], the question still holds why the same structure, the melody of [i] doubly linked to a V and a C slot, is [ij] in one case and [iː] in the other.


Hock’s (1986) proposal is to attach a mora (μ) to each weight-bearing position, that is, to each vocalic position, as well as to some consonantal positions, notably codas. The two cases are shown in (17).

(17)  Double flop with moras
      a.  [odwos] > [oːdos]: skeleton V C C V C over o d w o s, with
          moras on the vowels and the coda consonants
      b.  [talə] > [taːl]: skeleton C V C V over t a l ə, with moras on
          the two vowels only

The moraic analysis of [odwos] > [oːdos] in (17a) is not significantly different from the moraless one, shown in (16a). It nevertheless suggests a reason for the asymmetry between onset and coda consonants: the former do not possess a mora, while the latter do. The advantage of the mora analysis becomes clear in the lengthening of a vowel caused by apocope: [talə] > [taːl], (17b). The intervening onset consonant is not affected by the process at all, since it is not associated with a mora. Thus, the mora left floating after the final vowel is lost can associate to the stem internal vowel “above the head” (or rather “below the foot”) of the intervening moraless consonant, much like in a vowel harmony process, where intervening consonants not possessing the relevant vocalic feature are transparent. Hayes (1989) rearranges the relationship of the syllable and the mora by making the latter an integral part of prosodic structure, dominated by the syllable node. In a more radical innovation he also gets rid of the skeleton as previously conceived. In his view, the function of the skeleton is taken over by moras, and moraless consonants are either associated directly with the syllable node or share a mora with the moraic segment. Accordingly, the two processes displayed in (16) and (17) would be represented as in (18).

(18)  Moras as the skeleton
      a.  [odwos]: one syllable node over μ(o) and μ(d); a second over
          the onset w – linked directly to the syllable node – μ(o),
          and s
      b.  [talə]: one syllable node over t and μ(a); a second over l and
          μ(ə)

The simple double-flop case of Greek glide deletion in (18a) does not require much comment, as the mechanism is the same as before. For Middle English apocope, however, Hayes needs an extra stipulation, called parasitic delinking: the loss of an overt nucleus in a syllable entails the dissolution of the whole syllable. What is now left of the last syllable, a mora and an [l], is joined to the first one, yielding the correct result. In Hock’s analysis, on the other hand, the [l] remains in place, and does not have to be delinked and relinked, as can be seen in (17b).
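Hock’s weight criterion is easy to state procedurally. The following toy function (my own sketch; the vowel inventory and the coda test are simplifying assumptions) assigns moras in his spirit: every vowel projects one, a postvocalic consonant projects one as a coda, and onsets never do.

# Toy mora assignment à la Hock: vowels are moraic, postvocalic (coda)
# consonants are moraic, onset consonants are not.

VOWELS = set("aeiouə")

def mora_count(syllable):
    mu = 0
    for i, seg in enumerate(syllable):
        if seg in VOWELS:
            mu += 1
        elif i > 0 and syllable[i - 1] in VOWELS:   # a coda consonant
            mu += 1
    return mu

print(mora_count("ta"))    # 1 – light
print(mora_count("tal"))   # 2 – heavy: the coda is moraic
print(mora_count("taa"))   # 2 – heavy: a long vowel as two moras
print(mora_count("tra"))   # 1 – onsets never add weight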


Despite the complication of parasitic delinking, Hayes’s model has definite advantages over Hock’s use of moras. On the theoretical count, it is simpler in that it lacks the CV or X skeleton. On the empirical count, it predicts that compensatory lengthening of a vowel is only caused by the loss of a moraic segment that follows the vowel, never by the loss of one that precedes it. As (19a) shows, Hock’s representations easily allow the latter case, which is not attested according to Hayes. His hypothetical example is [əla] > [laː].

(19)  Compensatory lengthening triggered by loss of preceding vowel
      a.  V C V   (Hock)        b.  σ   σ   (Hayes)
          ə l a                     μ   μ
          μ   μ                     ə l a
      (in (a) the mora freed by the loss of [ə] can reach [a] across the
      moraless [l]; in (b) the onset consonant blocks the reassociation)

In Hayes’s model, (19b), the freed mora of the first syllable cannot be captured by the second mora, because the onset consonant inhibits this. The price to pay for this solution is the stipulative parasitic delinking mentioned above: if the moraic segment of a syllable is delinked, the onset consonant is also delinked, as in (18b). Without this an onset will always block the linking of a heterosyllabic mora. Note that in Hock’s model not only the loss of a vowel, but also the loss of a moraic consonant, could lead to the lengthening of a following vowel (e.g. Proto-Greek [esmi] > hypothetical *[emiː]). Such changes also seem to be unattested, as predicted by Hayes.

While theoretically attractive, dispensing with the skeleton has serious repercussions. Recall that linking IPA symbols to elements of higher prosodic structure (slots of the skeleton, moras, syllables) is misleading, since segments are not atomic. In partial trees like those in (20), where the Greek letters α–ε stand for (auto)segments, the temporal order of these autosegments is not specified. The string βγ is usually referred to as a branching onset; δ is a moraless coda, which may occur word finally even in languages with moraic codas, like English.

(20)  Autosegmental representations without a timing tier
      a.  β and γ linked directly to the syllable node σ (a branching
          onset)
      b.  a syllable node over two moras, with α and ε linked to them
          and the moraless coda δ sharing a mora with ε

Accordingly, the order of two adjacent tautosyllabic or tautomoraic segments must be given by some stipulation. Kaye, for example, provides such a stipulation: “By universal convention the less sonorous of the two elements associated to the same point is produced first in the speech chain” (1985: 289). It remains to be seen if this can be maintained. For syllable-initial consonants, (20a), this is exactly what the sonority sequencing principle dictates. In the domain of single segments, affricates follow this convention, but the existence of prenasalized stops casts some


doubt on its validity. Apart from light diphthongs (like French [wa] in trois [trwa] ‘three’), monomoraic rhymal sequences, as in (20b), obviously cannot be subject to this generalization, since they are invariably ordered in the opposite way, with the more sonorous (vowel) first and the less sonorous (consonant) second. Be that as it may, without some similar (set of) principle(s) an autosegmental representation without a timing tier is uninterpretable. To overcome this difficulty, one might wish to introduce root nodes, a notion familiar from frameworks organizing features into hierarchical structures, “feature geometries” (Clements 1985; Sagey 1986; McCarthy 1988; chapter 27: the organization of features). The root node is the topmost node of such a hierarchy, containing all of the features making up the given segment, that is, the entirety of the segment. If the graphical order of root nodes specified their temporal order as well – as assumed in the contour-segment model of affricates – then “root node” would be just another name for “skeletal slot,” i.e. one would simply reintroduce the skeleton into the representation. The skeleton is apparently indispensable.

5

A return to the CV skeleton

The modern career of the mora was launched by the need to distinguish onset consonants from coda consonants. Only the latter are capable of contributing to the weight of a syllable, that is, of behaving like a vowel; onsets are not (chapter 55: onsets and chapter 57: quantity-sensitivity). A mora is therefore assigned to consonants in the rhyme, but not to those in the onset. Note, however, that the reasoning is circular: codas are equipped with a mora because we observe that they behave differently, and then refer to these moras to explain their difference. But we could just as well imagine an alternative world in which onsets were moraic and codas were not. There is no inherent property of coda consonants that means they are fated to be moraic, as opposed to onset consonants. To make things worse, we will see that it is not exactly true that onset consonants are never moraic, or at least that their loss never entails compensatory lengthening. It turns out to be an oversimplification to tie diverse phenomena like compensatory lengthening, stress assignment algorithms, the assignment of tone-bearing units, etc. to a single property of the representation, i.e. moras (Hayes 1995: 299; Gordon 2004).

It is a version of the once rejected CV skeleton that might bring us closer to understanding this asymmetry in the behavior of consonants at the two edges of the syllable. To distinguish it from the McCarthy and Clements and Keyser type of CV skeleton, I will refer to it as the “strict CV skeleton.”

In §1.3 and §3.1, we saw why it is useful to suppose that some skeletal positions are empty. So far, we have only seen empty consonantal positions, but there is no particular reason why emptiness, i.e. the state of not being associated to any melodic material, should be limited to consonantal positions. The claim that the host of the vowel (the nucleus) is the head of the syllable, and therefore cannot be missing, is not a very strong one. Syntactic heads, for example the complementizer of a complementizer phrase, may remain empty (e.g. I know [CP [C Ø] she’ll come]).6 But other prosodic units like the foot may also exist without an overt head: in the previous sentence

6 In fact, in English it is by default empty in non-questions, that is, there is an empty complementizer at the beginning of the matrix clause too.


the first headed foot begins with know, and the pronoun I before it forms a headless, degenerate foot. Feet and syllables are similar types of prosodic units, and headless syllables are therefore not inconceivable entities. If nuclei may remain unpronounced, a very restricted syllable structure becomes available.7 Lowenstamm (1996) proposes that underlyingly all languages have the same skeleton, the simplest one available, comprising non-branching onsets and non-branching nuclei in strict alternation. Accordingly, no two consonants and no two vowels are adjacent on the skeleton, as they are always separated by a position of the opposite type. (21) gives the four cluster types of (5) in a strict CV model. (21)

The strict CV representation of vowel and consonant clusters: the four cluster types of (5), each drawn with the two members of the cluster separated by an empty skeletal position of the opposite type – two vowels (a long vowel or a hiatus sequence) are separated by an empty C, and two consonants (a geminate or a cluster like [nt]) by an empty V.

Recall the discussion in §3.2: the CV skeleton contains redundant information that can be read off higher prosodic structures. But this only holds if there is any higher prosodic structure. In fact, strict CV analyses generally do not call for the recognition of such structure, and certainly not of any further syllabic constituency. The two cases of compensatory lengthening in Greek – Proto-Greek [esmi] > Attic [eːmi] and Aeolic [emːi] – are illustrated in (22).

(22)  Compensatory lengthening in a strict CV skeleton
      a.  V C V C V        b.  V C V C V
          e s · m i            e s · m i
      (the [s] delinks in both; in (a) the vacated slots are filled by
      the spreading [e], giving Attic [eːmi]; in (b) by the spreading
      [m], giving Aeolic [emːi])

With the delinking of the [s], two skeletal slots are opened up for association: both the consonantal slot of the delinked coda and the vocalic slot enclosed within the original [sm] cluster. The choice is apparently controlled by a dialect-specific parameter, just as in any other theory of the skeleton.

The mora of moraic theories is an independent entity, which can be assigned to segments as the analyst needs it – it is only empirical considerations that stop them from assigning a mora to onsets. In the strict CV approach, moras are an inevitable consequence of the way the skeleton is built up (Scheer and Szigetvári 2005). A coda consonant is moraic because it is followed by an unpronounced vocalic slot. That is, the moraicness of the coda is only apparent: it is the following vocalic slot that carries weight. In this view, only vocalic slots are moraic. The loss of an intervocalic consonant does not free any “buried” empty vocalic slot, as (23a) shows. The loss of a preconsonantal consonant, on the other hand, makes an otherwise unreachable vocalic slot available as a target for spreading, as in (23b).

7 Note that “empty” and “unpronounced” are not equivalent. In a privative feature framework, empty skeletal positions may be phonetically interpreted, as a sound maximally lacking any contrast, like e.g. [ə] or [ʔ]. Some empty skeletal positions may thus be pronounced; others may remain unpronounced if they satisfy certain conditions. See Kaye et al. (1985, 1990), Charette (1991), and Harris (1994) for details.

(23)  The loss of an intervocalic and a preconsonantal consonant
      a.  V C V          b.  V C V C V
          a t a              a s · t a
      (in (a) the loss of [t] uncovers no vocalic slot; in (b) the loss
      of [s] makes the enclosed empty V available, and [a] spreads:
      [aːta])
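The asymmetry in (23) can be simulated directly. The sketch below uses my own encoding – a word as a list of (slot, melody) pairs over a strict C/V alternation, with “·” for empty positions – and illustrates the logic rather than any piece of the theory’s formal machinery.

# Sketch: consonant loss on a strict CV skeleton. Deleting a consonant
# vacates its C slot; only if the next slot is an enclosed empty V does
# the preceding vowel spread into it (doubled vowels stand for length).

def delete_c(slots, i):
    slots = list(slots)
    slots[i] = ("C", "·")                          # delink the consonant
    nxt = slots[i + 1] if i + 1 < len(slots) else None
    if nxt == ("V", "·"):                          # an uncovered empty V slot
        slots[i + 1] = ("V", slots[i - 1][1])      # the preceding vowel spreads
    return "".join(m for _, m in slots if m != "·")

asta = [("V", "a"), ("C", "s"), ("V", "·"), ("C", "t"), ("V", "a")]
ata  = [("V", "a"), ("C", "t"), ("V", "a")]

print(delete_c(asta, 1))   # aata – [s] lost: compensatory lengthening, (23b)
print(delete_c(ata, 1))    # aa   – [t] lost intervocalically: mere hiatus
                           #        [a.a], no new slot uncovered, (23a)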

The weight of closed syllables containing a short vowel is language-specific. For example, in English and Cairene Arabic such syllables count as heavy, while in Khalkha Mongolian and Yidiny they count as light (Zec 1995: 89). This parametric variation is trivially encoded in moraic frameworks: coda consonants are sometimes assigned a mora, and sometimes not. In the strict CV model, the same fact is encoded by parameterizing whether or not an unpronounced vocalic slot is counted by the relevant process. Crucially, however, since the shape of the skeleton is constant – it is always a strict alternation of vocalic and consonantal positions – the uncounted vocalic slot is there even when it is not counted by a certain process (say, stress assignment). One prediction running counter to those of Moraic Theory follows from this fact: compensatory lengthening of a vowel should be possible even if coda consonants are not moraic in a language. Kavitskaya (2002) claims that at least two languages, Piro and Ngajan, are exactly like this. One could claim that the mora associated with the coda in such languages is one which does not contribute to weight but does allow compensatory lengthening (as an anonymous reviewer points out). This then means that there are two types of mora, a “weight mora” and a “compensatory lengthening mora.” The strict CV model predicts exactly this: there are two types of Vs. Pronounced Vs obligatorily contribute to weight, unpronounced ones are parameterizable.

In the strict CV framework, when an empty vocalic position enclosed between two consonants is “unearthed,” compensatory lengthening may ensue, irrespective of whether this target of spreading is to the left or to the right of the vowel to lengthen. That is, the loss of an onset consonant may result in the lengthening of the vowel that follows it, as (24) shows.

(24)  Onset loss yielding compensatory lengthening
      C V C V
      b · c a
      (the loss of the postconsonantal onset [c] exposes the empty V,
      and [a] spreads leftward into both vacated positions)

The theory dictates that this option is available only for postconsonantal onsets, not for intervocalic ones (see (23a)). Confirmation of this prediction comes from southwestern dialects of Finnish where gradated [k] is lost, with compensatory lengthening. The data in (25) come from Kiparsky (2008); doubled vowels are long, as in standard Finnish orthography. (25)

Compensatory lengthening in southwestern Finnish dialects
          input        SW dialect   standard
          /jalka-t/    jalaat       jalat       ‘legs’
          /nælkæ-n/    nælææn       næljæn      ‘hunger-gen’
          /halko-t/    haloot       halvot      ‘logs’


In the Finnish data, the lost consonant is always preceded by another consonant, and is never intervocalic. This is important, because the empty vocalic slot is available between two consonants, but not after a vowel, as (23a) shows. Samothraki Greek exhibits a similar type of compensatory lengthening. In this dialect, pre-vocalic [r] is lost, and is only retained in preconsonantal position – a mirror image of the distribution in non-rhotic dialects of English. The loss of postconsonantal [r] is illustrated in (26a). Intervocalic [r] is lost without trace, as in (26b), as expected. (The data are from Topintzi 2006, who attributes them to Katsanis 1996.) (26)

Loss of /r/ in Samothraki Greek
      a.  input         output
          /ˈprotos/     [ˈpoːtus]    ‘first’
          /ˈfrena/      [ˈfeːna]     ‘brakes’
          /ˈxroma/      [ˈxoːma]     ‘color’
          /ˈɣrafo/      [ˈɣaːfu]     ‘I write’
      b.  /ˈleftirus/   [ˈleftius]   ‘free’
          /vaˈreʎ/      [vaˈeʎ]      ‘barrel’
          /ˈmera/       [ˈmia]       ‘day’
          /ˈskara/      [ˈskaa]      ‘grill’

To provide the missing mora, Hayes (1989: 283) has to hypothesize an epenthesis stage before the loss of the [r]: [ˈfrena] > [feˈrena] > [feˈena] > [ˈfeːna]. The strict CV analysis is rather similar, though the only difference is a very important one: the slot of the “epenthetic” vowel is lexically available, since any two consonants are always separated by such an empty slot. The relevance of this difference between the two analyses is that there is no empirical evidence for epenthesis in this case. Furthermore, this assumption creates a paradox in the ordering of the historical events (Kavitskaya 2002: 98), and Hayes’s hypothesis is therefore not plausible. The strict CV skeleton, however, has a vocalic position to which the vowel can spread without any extra process. But even the strict CV model seems to be taken by surprise when it comes to the loss of word-initial [r]: this loss also triggers compensatory lengthening, as the words in (27) show.

(27)  Loss of /r/ in Samothraki Greek
          input      output
          /ˈruxa/    [ˈuːxa]    ‘clothes’
          /ˈrema/    [ˈeːma]    ‘stream’

Scheer and Ségéral (2001) introduce the notion of “coda mirror.” The coda is a typical lenition environment, being the position in the word that is not followed by a vowel, i.e. a preconsonantal or word-final position. The coda mirror is the opposite case: it is the position not preceded by a vowel, i.e. a postconsonantal or word-initial position, which is claimed to be the strong position, where lenition is unlikely. Scheer and Ségéral’s theory is built on the strict CV skeleton: for them, “not followed by a vowel” means followed by an unpronounced vowel, and “not preceded by a vowel” means preceded by an unpronounced vowel. It


is this empty vocalic position that causes the lengthening of the vowel in the Finnish and the Greek data discussed here. Not only postconsonantal, but also word-initial consonants are assumed to be preceded by an empty vowel, a proposal first argued for by Lowenstamm (1999). Accordingly, the loss of a word-initial consonant may also cause compensatory lengthening, as shown in (28). (28)

Word-initial consonant loss yielding compensatory lengthening
      (C) V  C V C V
              r u x a
      (the word-initially posited empty CV unit provides the vocalic
      slot into which [u] spreads once [r] is lost: [ˈuːxa])

Since consonant loss is not common in the coda mirror position, compensatory lengthening is also rare in this environment. The peculiarity of Samothraki Greek, then, is that it unexpectedly exhibits [r] loss in the coda mirror position and not in the expected coda position. The ensuing compensatory lengthening is a consequence predicted by the strict CV skeleton.

6

Conclusion

The phonological skeleton evolved as a result of the autosegmental idea taken to its logical conclusion: segments, after the autosegmentalization of all their melodic content, leave behind “traces” that encode their relative temporal order. The debates concerning the phonological skeleton are (i) whether skeletal slots specify any phonetic property (consonantalness vs. vocalicness) or none, i.e. whether the skeleton contains Cs and Vs or uniform Xs; and (ii) whether the mora can replace skeletal slots, with moraless consonants linked directly to the syllable node. This chapter has argued that skeletal slots are Cs and Vs, not merely Xs, but there is no further prosodic constituency (e.g. onsets, nuclei, or syllables). Furthermore, it has been claimed that the mora is not an independent element of the representation, but a consequence of parametrical settings on vocalic skeletal slots: pronounced V slots are universally moraic; unpronounced ones are moraic in some, but not in other languages. Consonants, on the other hand, are never moraic.

ACKNOWLEDGMENTS I acknowledge useful comments and advice from two anonymous reviewers, Marc van Oostendorp, Ádám Nádasdy, Péter Siptár, and László Varga. I thank them all. Remaining errors are mine.

REFERENCES

Allen, W. Sidney. 1974. Vox Graeca: A guide to the pronunciation of Classical Greek. 2nd edn. Cambridge: Cambridge University Press.
Broselow, Ellen. 1995. Skeletal positions and moras. In John A. Goldsmith (ed.) The handbook of phonological theory, 175–205. Cambridge, MA & Oxford: Blackwell.


Charette, Monik. 1991. Conditions on phonological government. Cambridge: Cambridge University Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225–252.
Clements, G. N. 1999. Affricates as noncontoured stops. In Osamu Fujimura, Brian D. Joseph & Bohumil Palek (eds.) Proceedings of LP ’98: Item order in language and speech, 271–299. Prague: Karolinum Press.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Davis, Stuart. 1988. Syllable onsets as a factor in stress rules. Phonology 5. 1–19.
E. Abaffy, Erzsébet. 2003. Az ómagyarkor: Hangtörténet [The Old Hungarian period: Sound changes]. In Jenő Kiss & Ferenc Pusztai (eds.) Magyar nyelvtörténet [A history of Hungarian], 301–351. Budapest: Osiris.
Everett, Daniel L. & Keren Everett. 1984. Syllable onsets and stress placement in Pirahã. Proceedings of the West Coast Conference on Formal Linguistics 3. 105–116.
Fudge, Erik C. 1969. Syllables. Journal of Linguistics 5. 253–286.
Gimson, A. C. 1989. An introduction to the pronunciation of English. 4th edn., revised by Susan Ramsaran. London: Edward Arnold.
Gimson, A. C. 2001. Gimson’s pronunciation of English. 6th edn. London: Edward Arnold.
Goedemans, Rob. 1996. An optimality account of onset-sensitive stress in quantity-insensitive languages. The Linguistic Review 13. 33–47.
Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT.
Gordon, Matthew. 2004. Syllable weight. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 277–312. Cambridge: Cambridge University Press.
Gussmann, Edmund. 2002. Phonology: Analysis and theory. Cambridge: Cambridge University Press.
Halle, Morris & Jean-Roger Vergnaud. 1980. Three-dimensional phonology. Journal of Linguistic Research 1. 83–105.
Harris, John. 1994. English sound structure. Oxford: Blackwell.
Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hoberman, Robert D. 1988. Local and long-distance spreading in Semitic morphology. Natural Language and Linguistic Theory 6. 541–549.
Hock, Hans Henrich. 1986. Compensatory lengthening: In defense of the concept “mora.” Folia Linguistica 20. 431–460.
Ingria, Robert. 1980. Compensatory lengthening as a metrical phenomenon. Linguistic Inquiry 11. 465–495.
Jakobson, Roman, C. Gunnar M. Fant & Morris Halle. 1952. Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.
Katsanis, Nikolaos. 1996. Το γλωσσικό ιδίωμα της Σαμοθράκης [The dialect of Samothraki Greek]. Δήμος Σαμοθράκης [Municipality of Samothraki].
Kavitskaya, Darya. 2002. Compensatory lengthening: Phonetics, phonology, diachrony. London & New York: Routledge.
Kaye, Jonathan. 1985. On the syllable structure of certain West African languages. In Didier L. Goyvaerts (ed.) African linguistics: Essays in honour of M. W. K. Semikenke, 285–308. Amsterdam: John Benjamins.
Kaye, Jonathan. 1989. Phonology: A cognitive view. Hillsdale, NJ: Lawrence Erlbaum.
Kaye, Jonathan, Jean Lowenstamm & Jean-Roger Vergnaud. 1985. The internal structure of phonological elements: A theory of charm and government. Phonology Yearbook 2. 305–328.


Kaye, Jonathan, Jean Lowenstamm & Jean-Roger Vergnaud. 1990. Constituent structure and government in phonology. Phonology 7. 193–231.
Kenstowicz, Michael & Jerzy Rubach. 1987. The phonology of syllabic nuclei in Slovak. Language 63. 463–497.
Kiparsky, Paul. 2008. Compensatory lengthening. Paper presented at the CUNY Conference on the Syllable.
Lahiri, Aditi & B. Elan Dresher. 1999. Open Syllable Lengthening in West Germanic. Language 75. 678–719.
Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT.
Levin, Juliette. 1985. A metrical theory of syllabicity. Ph.D. dissertation, MIT.
Lowenstamm, Jean. 1996. CV as the only syllable type. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, 419–441. Salford: ESRI.
Lowenstamm, Jean. 1999. The beginning of the word. In Klaus Kühnhammer & John Rennison (eds.) Phonologica 1996, 153–166. The Hague: Holland Academic Graphics.
Lowenstamm, Jean & Jonathan Kaye. 1986. Compensatory lengthening in Tiberian Hebrew. In W. Leo Wetzels & Engin Sezer (eds.) Studies in compensatory lengthening, 97–132. Dordrecht: Foris.
McCarthy, John J. 1979. On stress and syllabification. Linguistic Inquiry 10. 443–465.
McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12. 373–418.
McCarthy, John J. 1986. OCP effects: Gemination and antigemination. Linguistic Inquiry 17. 207–263.
McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108.
Minkova, Donka. 1982. The environment for open syllable lengthening in Middle English. Folia Linguistica Historica 3. 29–58.
Moravcsik, Edith A. 1978. Reduplicative constructions. In Joseph H. Greenberg, Charles A. Ferguson & Edith A. Moravcsik (eds.) Universals of human language, vol. 3: Word structure, 297–334. Stanford: Stanford University Press.
Odden, David. 1986. On the role of the Obligatory Contour Principle in phonological theory. Language 62. 353–383.
Prince, Alan. 1984. Phonology with tiers. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure: Studies in phonology presented to Morris Halle by his teacher and students, 234–244. Cambridge, MA: MIT Press.
Prunet, Jean-François. 1987. Liaison and nasalization in French. In Carol J. Neidle & Rafael Núñez Cedeño (eds.) Studies in Romance languages, 225–235. Dordrecht: Foris.
Raimy, Eric. 1999. Strong syllable reduplication in Mokilese. In Rebecca Daly & Anastasia Riehl (eds.) Proceedings from ESCOL ’99, 191–202. Ithaca, NY: CLC Publications.
Roca, Iggy. 1994. Generative phonology. London & New York: Routledge.
Sagey, Elizabeth. 1986. The representation of features and relations in nonlinear phonology. Ph.D. dissertation, MIT.
Scheer, Tobias & Philippe Ségéral. 2001. La coda-miroir. Bulletin de la Société de Linguistique de Paris 96. 107–152.
Scheer, Tobias & Péter Szigetvári. 2005. Unified representations for the syllable and stress. Phonology 22. 37–75.
Siptár, Péter & Miklós Törkenczy. 2000. The phonology of Hungarian. Oxford: Oxford University Press.
Steriade, Donca. 1982. Greek prosodies and the nature of syllabification. Ph.D. dissertation, MIT.
Takahashi, Toyomi. 1999. Constraint interaction in Aranda stress. In S. J. Hannahs & Mike Davenport (eds.) Issues in phonological structure: Papers from an international workshop, 151–181. Amsterdam & Philadelphia: John Benjamins.
Topintzi, Nina. 2006. Moraic onsets. Ph.D. dissertation, University College London.
Zec, Draga. 1995. Sonority constraints on syllable structure. Phonology 12. 85–129.

55

Onsets

Nina Topintzi

Onsets are obligatory in the most typical syllable found cross-linguistically, the consonant–vowel (CV) syllable, and as such, are found ubiquitously across languages. This chapter explores various aspects of onsets, covering much of their structural, segmental, and suprasegmental behavior. Using empirical data as a point of departure, various stances and theoretical views will be addressed on a number of issues. These include the presence of the onset in unmarked CV syllables (§1), onset clusters and the role of sonority in their formation (§2), and the structure and representation of the onset within the syllable (§3). The focus will then shift to the onset’s often disregarded role in suprasegmental phonology with reference to several weight-based phenomena (§4). The chapter closes by briefly reviewing approaches that tackle the onset–coda asymmetry (§5).

1

Onsets in unmarked syllables

Most phonologists agree that the most unmarked syllable universally is a CV syllable (Jakobson 1962: 526; chapter 33: syllable-internal structure), i.e. a syllable that consists of a nucleus and a preceding consonant, the onset. When the onset consists of a single segment then it is simplex; when it contains a consonant cluster then it is complex. The present section deals with the former. Evidence for the unmarkedness of CV syllables comes from a variety of sources. First, CV syllables exist in all languages (unlike other syllable types, which only occur in some) and indeed there may be languages whose sole syllable type is CV, e.g. Hua (Blevins 1995) or Senoufo (Zec 2007). While it is the case that every language will have CV syllables, it is not equally true that every syllable in a language will have an onset. Unlike Totonak and Dakota (and of course Hua and Senoufo), where onsets are obligatory, in many other languages they are optional, e.g. Greek, English, and Fijian (Zec 2007). The naturalness of CV syllables is also indicated by the fact that they are the first syllables produced by children during the initial stages of language acquisition (chapter 101: the interpretation of phonological patterns in first language acquisition).

(1)  CV outputs by a Dutch child at age 1;5,2 (Levelt et al. 2000)
     /pus/    [pu]      ‘cat’
     /klar/   [ka]      ‘finished’
     /oto/    [toto]    ‘car’
     /api/    [tapi]    ‘monkey’

As Buckley (2003) shows, however, children’s initial productions may also involve VC syllables. Importantly though, these never seem to arise independently, i.e. without CV syllables also being present in the language. The dominance of CV syllables is seemingly contradicted by Arrernte (also known as Aranda; Breen and Pensalfini 1999), Barra Gaelic, and Kunjen – especially its dialect Oykangand – whose syllables are claimed to be of the VC type (with extra codas if need be) and not of the CV type (Blevins 1995 and references therein). These cases are rather weak, however, since for the most part alternative explanations that actually make use of the CV syllable type have been proposed. For instance, Blevins (1995: 230–231) observes that in Kunjen, aspiration only appears prevocalically. In principle, this could be understood as occurring either syllable-initially or syllable-finally, but empirical facts suggest that only the former analysis is viable. If aspiration were to apply syllable-finally, then it should also emerge word-finally, something that never occurs. The facts are thus only compatible with syllabification in the onset. Perhaps the strongest argument in favor of the existence of CV syllables, though, comes from a rule of utterance-initial reduction that deletes initial onsetless syllables, presumably as a means to achieve more well-formed onsetful syllables, as in (2). (2)

Oykangand reduction in utterance-initial position (Sommer 1981: 240)
     unreduced   reduced   deleted material
     igigun      gigun     [i]       ‘keeps going’
     amamaŋ      mamaŋ     [a]       ‘mother (voc)’
     uŋgul       gul       [uŋ]      ‘there’

2

Complex onsets

As well as simplex onsets, onsets can also be complex, usually composed of two segments and hence considered maximally binary (Blevins 1995; Morelli 1999; Baertsch 2002; among many others), as in Greek [ˈtre.xo] ‘I run’, [ˈpe.tra] ‘stone’, [ˈvli.ma] ‘missile’, or [ˈtu.vlo] ‘brick’. Longer sequences such as [str] or [spl] are also commonly allowed, as in English [streɪ] stray or [splɪt] split, but usually these are not considered to exceed the binarity maximum, as there is evidence that the [s] here is not part of the onset (see chapter 38: the representation of sc clusters). Yet in some work, the existence of complex clusters is denied altogether. For example, Lowenstamm (1996) and Scheer (2004) claim that all surface syllable types are subsumed under the CV matrix with the addition of empty positions, e.g. English [dØ][ri][mØ] dream. Duanmu (2008) interprets complex onsets such as pl, fr, kl, kr as complex sounds under a single timing slot, on the assumption that such sounds are possible if the articulatory gestures of two sounds can overlap (chapter 54: the skeleton).


Most phonological models, however, allow complex onsets and provide relevant analyses to account for them. In government phonology (van der Hulst and Ritter 1999; Kaye 2000), for example, binarity is explicitly integrated within the model through the Binarity Theorem (Kaye 1990, 2000), which states that constituents cannot dominate more than two positions, so that onsets may either exhibit single association to a skeletal point (3a) or be maximally binary branching (3b). (3)

Onsets within government phonology (van der Hulst and Ritter 1999)
      a.  O          b.   O
          |              / \
          x             x   x

More commonly, the binarity of the onset and the combinatorial possibilities among segments within it are attributed to co-occurrence restrictions between adjacent segments (Clements 1990; Zec 2007: 164). In fact, a number of proposals subscribe to the idea that onset syllabification – like the other components of the syllable – is governed by sonority considerations (e.g. Hooper 1976; Steriade 1982; Selkirk 1984; Clements 1990; among others). Briefly, in this approach, more sonorous segments are preferred toward the center of the syllable, whereas less sonorous ones make better syllable margins, i.e. onsets and codas (Clements 1990).1 Despite certain objections to sonority (see below; and also Parker 2002; chapter 49: sonority), its importance for phonological theory is generally acknowledged (Steriade 1982; Selkirk 1984; Clements 1990; Rice 1992; Kenstowicz 1994; Zec 1995). One fairly standard version of the sonority hierarchy is shown below (after Clements 1990). (4)

Sonority scale (> = more sonorous than)
vowels > glides > liquids > nasals > obstruents2

One principle that makes use of this scale is the Sonority Sequencing Principle (SSP; Clements 1990), which states that the sonority profile of a syllable must be such that sonority rises sharply toward the peak and gradually lowers after it. Evidence for the SSP comes from various sources. One example is Imdlawn Tashlhiyt Berber (e.g. Dell and Elmedlaoui 1985), known for its long sequences of consonants. Indeed, there may be words that consist of no vowel at all, e.g. [tftkt] ‘you suffered a sprain’. These seemingly highly complicated strings can, however, be easily analyzed if one utilizes the SSP, plus a few other assumptions. Bearing in mind that in Imdlawn Tashlhiyt Berber: (i) any segment can be a syllable nucleus, (ii) onsetless syllables are only allowed word-initially, (iii) codas may appear word-finally, and (iv) complex onsets are banned, the following examples are syllabified in such a way that the nucleus of each syllable comprises a sonority peak.

1 For more detailed discussion on the Sonority Sequencing Principle and the Minimal Sonority Distance, see chapter 49: sonority.
2 For a discussion of other variants see Parker (2002).

(5)  Imdlawn Tashlhiyt Berber syllabification
     /ut-x-k/    [u.tx̩k]     ‘I struck you’
     /rks-x/     [r̩.ks̩x]     ‘I hid’
     /t-msx-t/   [tm̩.sx̩t]    ‘you have transformed’
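The core of Dell and Elmedlaoui’s procedure can be sketched as follows (my simplification; the sonority values in SONORITY are illustrative assumptions rather than the authors’ figures): working down the sonority scale, each still-free segment projects a nucleus and claims the free segment to its left as its onset, onsetless nuclei being tolerated word-initially only.

# Rough sketch of sonority-driven core syllabification: nuclei (shown in
# upper case) are picked rank by rank, from most to least sonorous.

SONORITY = {"a": 7, "u": 6, "i": 6, "r": 5, "l": 5, "n": 4, "m": 4,
            "z": 3, "s": 3, "x": 2, "f": 2, "t": 1, "k": 1}

def nuclei(word):
    taken, nuc = [False] * len(word), [False] * len(word)
    for rank in sorted({SONORITY[c] for c in word}, reverse=True):
        for i, c in enumerate(word):
            if taken[i] or SONORITY[c] != rank:
                continue
            if i == 0 or not taken[i - 1]:    # an onset is available (or word-initial)
                nuc[i] = taken[i] = True
                if i > 0:
                    taken[i - 1] = True       # claim the onset
    return "".join(c.upper() if nuc[i] else c for i, c in enumerate(word))

print(nuclei("tftkt"))   # tFtKt – cf. [tf.tkt] 'you suffered a sprain'
print(nuclei("utxk"))    # UtXk  – cf. [u.txk]  'I struck you'

Segments left unclaimed at the end surface as word-final codas, in line with assumption (iii) above.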

Additional evidence for the SSP comes from onset cluster simplification processes, as in Sanskrit (see Steriade 1988 and chapter 119: reduplication in sanskrit for relevant data) or Attic Greek (Steriade 1982), whereby C1C2 onset strings are reduced to simplex onsets in reduplication. Notably, the surviving C is the least sonorous one, resulting in a more abruptly rising slope toward the nucleus. Similar facts arise in child speech (chapter 101: the interpretation of phonological patterns in first language acquisition), as is evident in the outputs of an English-learning girl aged 2;9 reported on by Gnanadesikan (1995). (6)

Cluster simplification to the least sonorous consonant
     clean    [kin]
     snow     [so]
     friend   [fen]
     sky      [kaj]3

3 Under the assumption that [sk] is a complex onset, the fricative [s] must be more sonorous than the stop [k] (cf. Dell and Elmedlaoui 1985; de Lacy 2006; among others). In the sonority hierarchy I have adopted here, this distinction is not made. On the other hand, a difference in sonority of fricatives as opposed to stops would yield incorrect results in other accounts, e.g. Kreitman (2006). If [s], however, is not part of the onset (cf. chapter 38: the representation of sc clusters), this issue does not arise in the first place.

Not all languages admit the same inventory of complex onsets. It is generally held to be true that the larger the distance in sonority between C1 and C2, the more well-formed the onset cluster. Thus, obstruent (O) + glide (G) clusters are highly favored, followed by O + liquid (L), O + nasal (N), and so on. Onset clusters preferably satisfy a Minimal Sonority Distance restriction in order to be allowed in a language (Vennemann 1972; Hooper 1976; Steriade 1982; Selkirk 1984; Baertsch 2002). In Bulgarian, no distance at all is necessary, thus all of OL, NL, ON, LL, NN, and OO clusters are admitted (Zec 2007); in other languages, different degrees of Minimal Sonority Distance are applicable: in Chukchee, only OL, NL, and ON clusters are well-formed (Levin 1985); in Spanish, only OL onset clusters (Baertsch 2002); and in Huariapano (Parker 1994), only OG clusters. In a sense, Minimal Sonority Distance generates the expectation that if a language allows C1C2 onset clusters where C2 is of sonority X, then it should also admit onset clusters with a C2 whose sonority is higher than X. But as we have just seen, this is not always the case: e.g. Spanish, which bans *OG clusters. To make things worse, many languages also allow sonority plateaus and even reversals. For example, Greek plateaus like [kt], [fθ], and [vɣ] are tolerated, as in [ktirio] ‘building’, [akti] ‘coast’, [fθiro] ‘impair’, [afθonos] ‘abundant’, [vɣazo] ‘remove’, [avɣo] ‘egg’. Russian also permits reversals, e.g. [rtut] ‘mercury’ and [lvov] (city name) (Gouskova 2001), which, however, are often considered not to be complex onsets; rather, the segment(s) violating the SSP can be realized as syllabic, e.g. [r̩.tut], or even extrasyllabic, attaching to some higher level of prosodic structure, e.g. the foot or prosodic word (see chapter 40: the foot and chapter 38: the representation of sc clusters for more discussion).

Such data partly explain why the validity of sonority is sometimes contested. Other objections to sonority include the lack of a clear way to phonetically define and measure it, and its inability to explain the frequent ban on sequences of the type ji, wu, bw, or dl (quite likely an Obligatory Contour Principle effect). Some researchers have therefore gone as far as to discard sonority. For example, Ohala (1990) and Harris (2006) claim that attested sequences in languages can be best captured through the perceptual distance between neighboring sounds in terms of a number of different acoustic properties, including amplitude, periodicity, spectral shape, and fundamental frequency (F0) (Ohala 1990: 334). As Ohala (1990: 334–335) admits, however, this view explains which sequences should be found in languages, but does not explain how and why they are grouped into syllables. This is perhaps why – despite criticism – sonority still remains highly influential in current work on syllabification (cf. Baertsch 2002; Gouskova 2004; Zec 2007; among many others).

But there is yet another possibility. Rather than completely endorsing or abandoning sonority, we can accept it, but loosen somewhat the predictions and generalizations it makes. Berent et al. (2007) put forward a proposal along these lines. In particular, they suggest a more flexible version of sonority-based generalizations regarding the profile of onset clusters. They state that:

In any given language:
(a) The presence of a small sonority rise in the onset implies that of a large one.
(b) The presence of a sonority plateau in the onset implies that of some sonority rise.
(c) The presence of a sonority fall in the onset implies that of a plateau.
(Berent et al. 2007: 594)4

On this view, Spanish is no longer problematic (since OL clusters involve high sonority, there is no reason that there should be OG clusters too), and the plateaus of Greek are expected, given that it also has sonority rises, while Russian has falls only because it also has plateaus. More generally, Berent et al. (2007) test the statements above against the sample of Greenberg (1978) and find that they overwhelmingly hold true typologically. Other typological surveys on onset clusters also tend to employ sonority, usually with some modification or enrichment of the theory. For instance, Morelli (2003) investigates the patterns of obstruent onset clusters and proposes implicational relationships between them, as schematized in (7), where fricative + stop (FT) clusters are the least marked, TT the most marked, and TF somewhere in between. FF clusters merely imply the existence of FT, without further implicational relationship with other clusters. 4

Berent et al. (2007) seem to adopt Greenberg’s (1978) characterization of small and high sonority. High-sonority rises are OL clusters; low-sonority rises are NL and ON; plateaus are OO, and falls are LN and NO clusters.

Nina Topintzi

6 (7)

Implicational relationships between obstruent onset clusters (Morelli 2003)5 TT TF FF

FT

Kreitman (2006) focuses on sonorant (S) and obstruent (O) clusters and proposes the implicational hierarchy SO ⇒ SS ⇒ OO ⇒ OS, with OS clusters being the most unmarked, and SO ones the most marked. These are respectively the most and least favored clusters as far as sonority is concerned. SS and OO clusters involve sonority plateaus, but do not randomly appear in languages as one would expect; instead, the presence of SS systematically implies OO. To account for this fact, Kreitman points to the increased salience of obstruents as opposed to sonorants (cf. Ohala 1983: 193). Since obstruents are considered to carry more information, due to their acoustic form, they are easier to distinguish from non-obstruents. Thus, combinations between obstruents should be perceptually favored over those between sonorants. What all these studies highlight is that removing sonority from the equation is not useful; rather it seems that consideration of other factors, e.g. the role of perceptual salience, may enhance the role of sonority conceptually and improve its empirical coverage.

3

The status of the onset within the syllable

Moving away from the principles that regulate onset syllabification, let us consider the representation of the onset within the syllable. Various models of the syllable have been proposed throughout the years (see Blevins 1995; van der Hulst and Ritter 1999 for overviews; see also chapter 33: syllable-internal structure), which due to lack of space will not be discussed here in detail. Nonetheless, reference will be made to those that are especially relevant to onsets. Broadly speaking, we can identify two major theories: (i) those that distinguish between onsets and rimes (Pike and Pike 1947; KuryÓowicz 1948; Fudge 1969; Selkirk 1982; Levin 1985; Kaye et al. 1990; Blevins 1995), and (ii) moraic models that do away with the rime, i.e. the nucleus + coda string, as a separate constituent (Hyman 1985; Hayes 1989; Morén 2001).

3.1

Onset–rime models

No single version of the onset–rime model is available, and there are significant divergences between models. For instance, Fudge (1969) accepts the syllable as a constituent, whereas Kaye et al. (1990) explicitly do away with it, but nonetheless treat the onset and rime as “an inseparable package” (van der Hulst and Ritter 1999: 23). 5

Inclusion of sC clusters among the FT clusters and their treatment as onset clusters, at least wordinitially, is quite problematic for Morelli, however, in light of evidence showing how sC clusters differ from true branching onsets in various ways (see chapter 38: the representation of sc clusters).

Onsets (8)

7

A typical representation of the onset–rime model (Blevins 1995) q R O N

C

C

C

V

Specific syllable models make different claims about constituenthood. For instance, Blevins (1995) essentially only recognizes the rimal constituent and sees no strong argument for an onset constituent – and for that matter, a coda constituent. For government phonology (van der Hulst and Ritter 1999; Kaye 2000), on the other hand, onsets, nuclei, and rimes are constituents. The basic argument for the rime hinges on the idea that co-occurrence restrictions are always more likely to occur between nuclei and codas, rather than between either onsets and nuclei or onsets and codas. The strongest argument for the rime though comes from weight facts (Blevins 1995; van der Hulst and Ritter 1999: 23). Consider stress, for example. As is well known, in many languages heavy syllables attract stress in contrast to light syllables (e.g. Hopi; Jeanne 1982). Importantly, heaviness implies a binary rime, [VV]R or [VC]R, or both, depending on the language. Since the presence of onsets is disregarded in such an evaluation, it must mean that rimes form a constituent that clearly excludes the onset. Nonetheless, each of the arguments in support of the rime has been challenged. Davis (1985) attacks the reliability of co-occurrence and phonotactic restrictions, given that those are not exclusive to nuclei and codas, but are also found between onsets and nuclei or onsets and codas. For instance, in Korean (Cho 1967), fronted vowels do not appear after labial onsets, while in Yindjibarndi (Wordick 1982), the presence of /r/ in both the onset and a coda of a syllable is banned. Another objection to the onset–rime distinction is found in Yip (2003), who claims that if it were valid, then the boundary between the two constituents should be clear and consistent, and thus segments should uniformly belong to either the onset or the rime, but not to both. English and Mandarin pre-nuclear glides, however, behave sometimes like onsets and sometimes as rimes. As for the weight effects induced by the rime, it is possible to capture them in a different manner without reference to the rime per se. This is what moraic theory does, as we will see in a moment. Before moving on, though, it is notable that the onset–rime debate is also predominant in psycholinguistic studies that explore the onset–rime boundary in terms of implicit and explicit, i.e. non-conscious vs. conscious, phonological awareness. Work by Treiman (1986 and references therein) on various segmentation and substitution tasks in both adults and children suggests that there is a closer connection between VC than CV, thus offering support for the onset–rime boundary. In the same vein, Uhry and Ehri (1999) show that English-speaking kindergarten children preferred to keep VC, rather than CV, intact during segmentation. The opposite result, however, was found by Lewkowicz and Low (1979).

Nina Topintzi

8

More recently, Geudens and Sandra (2003), in a series of four experiments on Dutch-speaking pre-readers and beginning readers, found no support for the onset–rime boundary. Importantly, they applied strict criteria regarding the selection of items under investigation, such that they could control for distributional and sonority effects. In particular, they used items of different sonority equally often and found that syllables with obstruents were easier to perceive and segment than syllables with sonorants (2003: 172); see also chapter 8: sonorants. The influence of sonority may in fact explain some of the findings of previous studies, such as Schreuder and van Bon’s (1989) finding that Dutch first-graders break up a CV string more easily than a VC one. In their study, sonorants were mainly used, but sonorants undergo more vocalization in coda rather than onset position, possibly explaining why children find it harder to break them up in a VC environment rather than a CV one. All in all, psycholinguistic experimentation also reflects contradictory evidence with regard to the onset–rime boundary debate. What this absence of consensus at the very least suggests is that the boundary dispute is well grounded.

3.2

Moraic model

A common response to criticism against the rime has been to dispense with it as a constituent altogether and to replace it with the concept of mora. In moraic theory (Hyman 1985; Hayes 1989), only segments under – what used to be – the rime node may bear moras. Since the latter are needed independently to account for a number of phenomena related to syllable weight, the natural conclusion has been to structurally eliminate the rime from representations. The representation of a CVC syllable in this model is presented next (compare with (8)). Note that the bracket around the mora of the coda indicates that this may be moraic or not on a language-specific basis (cf. Weight-by-Position; Hayes 1989). (9)

Moraic model (Hayes 1989) q [ ([) C

V

C

Within moraic theory, there is no definite agreement as to where exactly the onset associates to. According to Hayes, it directly adjoins to the syllable as in (9). For Hyman (1985), Itô (1989), and Buckley (1992), though, it attaches to the following mora, as in (10). (10)

Onset association (Hyman 1985) q [ ([) C

V

C

Onsets

9

In both these versions of moraic theory, the onset is not recognized as a constituent. This is much more clearly shown in (9), where it directly links to the syllable node, but it is visible even in (10), since the mora is shared between the onset and the nucleus. While Hayes’s representation is the most widely employed, there is nevertheless some evidence for (10). Katada (1990) describes the Japanese chain language game shiritori, in which players say a word that must begin with the final mora of the previous player’s word. If the word ends in a CV syllable, as in [tu.ba.me] ‘swallow’, then the next word can be something like [me.da.ka] ‘killfish’. If the word ends in a long vowel, then the last mora is the second half of the vowel, to the exclusion of the first half, as well as the onset. Thus [bu.doo] ‘grapes’ can be followed by [o.ri.ga.mi] ‘folding paper’ but not by *[doo.bu.tu] ‘animal’. Importantly, a word like [riN.go] ‘apple’ (where N is a moraic nasal) cannot be followed by *[o.ri.ga.mi], but must begin with [go]. This is easily explained if the final mora in [go] also associates to the onset, as claimed by (10), rather than linking directly to the syllable (9). The game ends if the final mora cannot form a proper onset, as happens when it is a moraic nasal, e.g. [ki.riN] ‘giraffe’. Since the moraic model identifies no rime constituent, it bypasses the problems faced by the onset–rime model with regard to the extension of co-occurrence restrictions beyond the rimal node, as well as the absence of a clear boundary between the onset and the rime. Superficially, however, it does equally well as the onset–rime model in accounting for syllable weight, simply by stating or – more accurately, stipulating – that moras are strictly limited to nuclei and codas. But even this assertion has been contested. Work by Hajek and Goedemans (2003), Gordon (2005), and Topintzi (2006, 2010) has shown that there is good evidence for the existence of onset weight. We explore this issue next.

4

The suprasegmental phonology of onsets

Contrary to popular belief, onsets do seem to be prosodically active, albeit in a limited number of languages. Their effects become evident in a range of phenomena, including stress, compensatory lengthening, gemination, word minimality, and tone. This section examines the relevant data and theoretical issues that stem from them.

4.1

Stress

Of all these phenomena, onset-sensitive stress has received the most extensive attention. In brief, three patterns are attested: (i) onset effects due to the presence of an onset, (ii) onset effects due to the quality of an onset, and (iii) patterns (i) and (ii) combined. Starting from (i), we find that in a number of languages onsetful syllables attract stress more than onsetless ones. Languages of this type include Arrernte (Strehlow 1944), Alyawarra (Yallop 1977), and other Australian languages, such as Lamalama, Mbabaram, Umbuygamu, Umbindhamu, Linngithig, Uradhi, KukuThaypan, Kaytetj, and Agwamin (most of them are Cape York and Arandic languages; see Davis 1985, Goedemans 1998, and Blevins 2001 for more details). Beyond Australia, this pattern is attested in unrelated languages of North and

Nina Topintzi

10

South America, Iowa-Oto (Robinson 1975), Banawá (Buller et al. 1993), and Juma (Abrahamson and Abrahamson 1984).6 In Arrernte, C-initial words receive stress on the first syllable (11a), but V-initial ones have stress on the second syllable (11b). One exception is disyllabic words, where stress is word-initial regardless of whether the word begins with a vowel or a consonant (11c). This is probably attributed to Arrernte’s avoidance of final stress or preference for creating binary feet, as the lack of final secondary stress in words like *[a(’ralka)(‘ma)] reveals. (11)

Arrernte stress (Strehlow 1944) a.

consonant-initial ’ra(tama ’kutun‘gula ’lelan‘tinama

words of three or more syllables ‘to emerge’ ‘ceremonial assistant’ ‘to walk along’

b.

vowel-initial words of three or more syllables er’guma ‘to seize’ a’ralkama ‘to yawn’ u’lambu‘lamba ‘water-fowl’

c.

words of two syllables (C- or V- initial) ’ilba ‘ear’ ’a(twa ‘man’ ’kala ‘already’ ’gura ‘bandicoot’

A common denominator is that stress may shift – albeit very locally – to dock on a syllable with an onset. This is not the only possibility, however. In other languages, the stress location remains constant, but if it falls on an onsetless syllable, this acquires an onset. Consider Dutch (Booij 1995: 65). In instances of hiatus where the first vowel is /a/, a glottal stop is inserted before the second vowel, but only if this is stressed by the normal algorithm, e.g. /paelja/ → [pa.’?el.ja] ‘paella’, /aDrta/ → [a.’?Dr.ta] ‘aorta’. Otherwise, no insertion is applicable: /xaDs/ → [’xa(.Ds] ‘chaos’, /farao/ → [’fa(.ra.o(] ‘Pharaoh’. Most analyses view this as a prominence (Smith 2005) or alignment (Goedemans 1998; Topintzi 2010) effect. In yet other languages, the mere presence of an onset is not the issue (see Topintzi 2010: 48 for details on Karo); it is the quality of the onset that matters. This is the case in Karo (Gabas 1999) and possibly Arabela (Payne and Rich 1988). In the former, stress falls on the final syllable, except when the penultimate syllable is a better stress bearer. Better stress bearers are, in order of priority, a syllable with (i) a high tone, (ii) a nasal vowel, or (iii) a voiceless or sonorant onset. When (i) and (ii) are irrelevant, (iii) is taken into consideration and stress falls on the final syllable if the onset is a sonorant (12a) or voiceless (12b) or a voiced obstruent preceded by another voiced obstruent onset (12c). 6

However, the case of Juma should be treated with caution, because only a handful of data are available and because it is possible to re-analyze it. In particular, words like [pe’jikD’pia] ‘bird (sp.)’ may be argued to contain a final diphthong, i.e. [pe.’ji.kD.’pia], rather than a sequence of heterosyllabic vowels, i.e. [pe.’ji.kD.’pi.a], which would lend support to the onset effect. Interestingly, Juma is the sole language where the effect appears at the right edge of the word and not at the left. This may perhaps be an additional indication that it is not truly onset-sensitive.

Onsets (12)

11

Karo final stress and onset voicing (Gabas 1999: 14, 39–41)7 a.

final syllable with sonorant onset kD’jD ‘crab’ ja?’mbD ‘yam (sp.)’ kq7q’wep¬ ‘butterfly’

b.

final syllable with voiceless onset pa’k(D ‘fontanel’ ma?’pe ‘gourd’ ku7u?’cu ‘saliva’

c.

final and prefinal syllables with voiced obstruent onsets ki7i’bDp¬ ‘frog (sp.)’ mq7q’7qj ‘toad (sp.)’

Stress, however, falls on the penult if the final syllable has a voiced obstruent onset and the previous one does not, indicating the stress-attracting nature of the voiceless obstruents and the sonorants in this language. (13)

Karo penult stress and onset voicing (Gabas 1999: 14, 39–41) ’jaba ’we7e ’mHga

‘rodent (sp.)’ ‘frog’ ‘mouse’

’pibe? ‘foot’ ’ka7o ‘macaw’ i?’cDgD ‘quati (sp.)’

Nonetheless, other cases where stress is seemingly sensitive to the onset quality have been shown to be much less robust or even wrong. One example of the latter arises in Mathimathi, where stress is normally word-initial unless attracted by the second syllable when it begins with a coronal onset. Davis (1988) attributes this to genuine onset-sensitivity. Gahl (1996), on the other hand, shows that another account is more plausible, namely one that considers Mathimathi stress to be morphologically based. She claims that stress is located on the last stem syllable of the word (or better, last stem vowel). Stems are generally monosyllabic or bisyllabic. It so happens that apparent stress shift appears on stems of the type C1VC2VC3, where the medial consonant is invariably coronal (Gahl 1996: 329). Evidence for Gahl’s analysis comes from monosyllabic C1VC2 stems, where C2 is again coronal. Addition of a suffix to such stems renders C2 an onset of the second syllable. If Davis were right, then stress here should also be peninitial. However, it is initial, as predicted by Gahl’s morphological account; cf. peninitial stress in bisyllabic stems such as [‘gu.’ra.g+i] ‘sand’ vs. initial stress in monosyllabic stems such as [’wa.Õ+a.{+a] ‘to come’. In both cases, C2 is coronal. Thus, re-examination of the facts in light of morphological considerations may reveal the lack of true onset-sensitive effects (see also Nanni 1977 on the English suffix -ative or Davis et al. 1987 on Italian infinitives). A final pattern that emerges involves the combination of true onset-presence and onset-quality effects. A well-known example is Pirahã (Everett and Everett 1984; Everett 1988), an Amazonian language where codas are banned. Onsetless light syllables (V) do not occur, and stress may only dock on one of the three final 7

Note that [7] in Karo behaves like [d], which is otherwise missing from the inventory (Gabas 1999: 12).

Nina Topintzi

12

syllables of the word. The weight and stress hierarchy the language motivates is: PVV > BVV > VV > PV > BV (P = voiceless; B = voiced). In particular, VV nuclei attract stress more than V ones (14c), and voiceless onsets have the same effect as opposed to voiced ones (14a) and (14d). Crucially, and unlike Karo, Pirahã ‘voiced’ consonants also include sonorants, which appear as allophones of voiced stops, e.g. /b/ may surface as [b], [m], or the bilabial trill [b]. Consequently, in this language, only voiceless obstruents attract stress. Between equally heavy syllables in terms of nucleic weight, onsetful ones attract stress over onsetless (14b). Finally, if there is more than one equal contender for stress, the rightmost one receives it (14e). (14)

Pirahã examples (Everett and Everett 1984; Everett 1988) a.

PVV > BVV ’káo.bá.bai ‘almost fell’ pa.’hai.bií ‘proper name’ b. BVV > VV ’bii.oá.ii ‘tired (lit.: being without blood)’ poo.’gáí.hi.aí ‘banana’ c. VV > PV pia.hao.gi.so.’ai.pi ‘cooking banana’ d. PV > BV ti.’po.gi ‘species of bird’ ’?í.bo.gi ‘milk’ e. rightmost heaviest stress ho.áo.’íi ‘shotgun’ *ho.’áo.íi ti.’po.gi ‘species of bird’ *’ti.po.gi paó.hoa.’hai ‘anaconda’ *paó.’hoa.hai *’paó.hoa.hai

(1988: 239) (1984: 708) (Everett, p.c.) (1984: 709) (1984: 710) (1984: 710) (Everett, p.c.) (1984: 710) (1984: 710) (1984: 707)

What is common to all these examples is that the voiceless obstruent onsets systematically attract stress, contrary to the voiced obstruent ones. Various analyses have been offered to account for the Pirahã facts (and many fewer for Karo). These are examined in Topintzi (2010). Some make use of the increased prominence of onsetful syllables and voiceless onsets over onsetless syllables and voiced onsets respectively (Everett and Everett 1984; Hayes 1995; Goedemans 1998; Smith 2005). Some treat certain onsets as weightful and some as weightless (Topintzi 2006, 2010), and others offer a mixed system that utilizes weight but sees it as a function of prominence (Gordon 2005). Due to space limitations, these proposals will not be reviewed here. However, there is one important empirical argument that favors the onset weight approach, namely the existence of other phenomena beyond stress that are weight-related and influenced by onsets.

4.2

Compensatory lengthening, geminates, and word minimality

An explicit prediction of the onset–rime and the moraic models is that onsets will never participate in weight-related processes. For the former, this is because onsets are excluded from the prosodic hierarchy (van der Hulst and Ritter 1999: 31). For the latter, it is because onsets never bear moras (Hayes 1989). However,

Onsets

13

both assertions are entirely stipulative and subject to modifications given the existence of counterevidence. First, consider compensatory lengthening (chapter 64: compensatory lengthening), a phenomenon widely utilized in support of standard moraic theory. In standard moraic theory (Hayes 1989), it is predicted that onsets will neither induce compensatory lengthening (through deletion) nor undergo it (through lengthening). Yet several cases of both types have been reported. In Samothraki Greek, the onset /r/ deletes and generally leads to lengthening of the following vowel, e.g. /’rema/ > [’e(ma] ‘stream’, /’ruxa/ > [’u(xa] ‘clothes’, /’Ïedru/ > [’Ïedu(] ‘tree’, /kra’to/ > [ka(’to] ‘I hold’ (Katsanis 1996: 50–51). Onondaga (Michelson 1988) is somewhat similar, although /r/-deletion leads to lengthening, whether it is in an onset or a coda originally. Numerous other examples have been reported (Rialland 1993; Beltzung 2007), all of which, however, are highly morphologized. For instance, in Romanesco Italian, the initial /l/ of the definite article and of the object clitic /lo la li le/ optionally deletes (Loporcaro 1991: 280), causing lengthening of the unstressed vowel that follows, e.g. [lo ’stupido] > [o( ’stupido] ‘the stupid (masc)’ or [la ’bru(œo] > [a( ’bru(œo] ‘I burn her’. Beyond this environment, such compensatory lengthening does not appear. Analogous effects are observed in Anuak/Anywa, Lango, Gyore, Turkana, and Ntcham (see Beltzung 2007; Topintzi 2010 and references therein). Nonetheless, one could question the validity of this approach in terms of onset weight structure and instead provide a more phonetic explanation, as done by Kavitskaya (2002). She observes that vowels in CVC syllables are phonetically longer when followed by certain consonants whose transitions can be misheard as part of the vowel (i.e. sonorants, approximants). On deletion of such consonants, the ‘excess’ length of the preceding vowels can be phonologized, so that listeners reinterpret them as phonemically longer. Thus vowels are reinterpreted by listeners as phonemically longer. This approach also extends to compensatory lengthening induced by onsets, but only works when highly sonorous consonants are deleted. In principle, this is appropriate for some of the cases, e.g. Samothraki Greek or Romanesco Italian, but is nevertheless problematic. For instance, it cannot explain why the same phonologization of length has not occurred with regard to the Samothraki coda r, especially since this is the prototypical position for compensatory lengthening. More troublesome, though, is the inability to account for cases like Ntcham, where the onset that is lost is the highly non-sonorous /k/. More strikingly, onsets also can serve as the target of compensatory lengthening. This means that a segment deletes and the preceding onset lengthens, i.e. geminates in order to compensate for its loss.8 For instance, Pattani Malay (Yupho 1989; Topintzi 2008) contrasts singletons and geminates in onsets, but only word-initially (on initial geminates see chapter 47: initial geminates), e.g. [‘bu’wDh] ‘fruit’ vs. [’b(u‘wDh] ‘to bear fruit’, [‘Áa’le] ‘road’ vs. [’Á(a‘le] ‘to walk’ (Yupho 1989: 135). Moreover, it exemplifies a case of compensatory lengthening (Michael Kenstowicz, personal communication). In instances of free variation, one variant involves loss of the word-initial syllable and gemination of the second 8

This characterization is unavoidably linked to a broader discussion of what exactly constitutes a geminate. Briefly, the debate relates to whether geminates are inherently moraic (i.e. heavy) or involve double linking to higher structure (i.e. long). This issue is thoroughly examined in chapter 37: geminates and chapter 47: initial geminates.

Nina Topintzi

14

onset, as in e.g. [buwi] ~ [w(i] ‘give’, [sqdadu] ~ [d(adu] ‘police’, [pqmatD] ~ [m(atD] ‘jewelry’ (Yupho 1989: 130). That these geminates are moraic is supported by another fact of the language, namely stress. Primary stress is word-final, unless the word begins with a geminate, in which case it shifts to the initial syllable (the other syllables receive secondary stress). We can easily understand this effect by claiming that the syllable hosting a geminate is bimoraic and therefore heavy, and as such, attracts stress in preference to monomoraic syllables. (15)

Stress in Pattani Malay (Yupho 1989: 133–135) a.

words lacking geminates ‘a’le ‘road path’ ‘da’le ‘in, deep’ ‘mã‘ke’n ‘food’

b.

words with initial geminates ’m(a‘tD ‘jewelry’ ’Á(a‘le ‘to walk’

As well as Pattani Malay, Trukese provides evidence that onset geminates are moraic (see also chapter 37: geminates for discussion). First, Trukese words are minimally (C)VV, e.g. [maa] ‘behavior’, [oo] ‘omen’, or CiCiV, i.e. a geminate plus a short vowel, e.g. [tto] ‘clam (sp.)’, [ŒŒa] ‘blood’ (Davis and Torretta 1998; Muller 1999).9 CVC and CV words are not allowed (Davis 1999), thus singleton codas contribute no mora (Muller 1999). Presumably, minimality is satisfied by bimoraic words, provided of course that geminates add a mora to their syllable. An additional process of compensatory lengthening following the deletion of the final mora in a word corroborates the moraicity of onset geminates (chapter 37: geminates). Various proposals within the standard moraic theory tradition have been put forward to account for initial moraic geminates (Davis 1999; Curtis 2003), common to which has been the lack of any association between the geminate’s mora and the onset, in line with a major tenet of the theory, namely the ban on onset moraicity. Crucially, to achieve this effect, these approaches link the geminate’s mora to some position other than the onset, which is made possible by the double linking commonly assigned to geminates (see chapter 37: geminates). But this solution is not available in cases of moraic initial consonants that are singletons rather than geminates. Such cases exist. In Bella Coola (Bagemihl 1998) the minimality criterion is fulfilled by VV, VC, and CV words, but crucially not by V words.10 Topintzi (2006, 2010) argues that the easiest way to uniformly understand these data and place them alongside the root-maximality facts of the language – that make reference to mora structure – is by stating a bimoraic word minimum and by allowing onsets to bear moras. 9

Many languages impose a minimum size for words to be well-formed. Commonly, words are required to be at least bimoraic (C)VV as in Ket or Mocha, or (C)VV/(C)VC as in English or Evenki (Gordon 2006), or bisyllabic, e.g. Pitta-Pitta (Hayes 1995: 201). 10 In fact, minimal words with two unsyllabified consonants CC are also allowed. Evidence for the existence of unsyllabified consonants would take us too far afield; see Bagemihl (1998) and Topintzi (2006) for details.

Onsets

15

To capture these facts, Topintzi (2006, 2010) puts forward a flatter syllable structure (reminiscent of Davis 1985), whereby all syllable constituents come in either moraic or non-moraic versions. This is hardly surprising for codas; cf. moraic codas in Latin, Delaware, English, Kiowa, and Turkish vs. non-moraic ones in Wargamay, Lenakel, Eastern Ojibwa, and Khalkha Mongolian (e.g. Hayes 1995; Zec 1995, 2007; Morén 2001). The claim extends to onsets too, e.g. moraic for onset geminates in Trukese or voiceless obstruents in Pirahã vs. non-moraic in a host of other languages. Applying the same distinction to nuclei is also not too far-fetched, as it has been suggested that they can occasionally be weightless, for example in Malagasy (Erwin 1996), Kabardian (Peterson 2007), Alamblak (Mellander 2003), and Chuvash and Mari (Hyman 1985). The following representation illustrates the proposal outlined by Topintzi (2006, 2010).11 (16)

q ([) ([) ([) C

V

C

Even with this modification, though, moraic theory faces problems when it encounters data such as those in Seri and Kikamba (Roberts-Kohno 1995) and Onondaga and Alabama (Broselow 1995 and references therein), and French haspiré (Boersma 2007 and references therein). In Seri (Marlett and Stemberger 1983; Crowhurst 1988; Broselow 1995), the distal prefix [jo-] attaches to either C- or Vinitial stems. In the former, nothing remarkable occurs (17a), but in V-initial stems, things become more complex. In general, when the first vowel of the stem is low back /a/ or low front /æ/ the prefix vowel deletes and compensatory lengthening results (17b). But in some specific stems, no deletion (and consequently no compensatory lengthening) occurs. Instead, a hiatus context is created (17c). (17)

Seri distal forms a.

b.

c.

stem C-initial stems -mækæ ‘be lukewarm’ -pokt ‘be full’

distal jo-meke jo-pokt

general pattern of /a, e/-initial stems -ataø ‘go’ -æmæ ‘be used up’

jo(-taø jo(-me

exceptional pattern of /a, e/-initial stems -amwx ‘be brilliant’ jo-amwx -ænx ‘play stringed instrument’ i-jo-enx

*jo(-mwx *i-jo(-nx

According to Crowhurst (1988), these data support a mixed representation that includes both X slots and moras (chapter 54: the skeleton). The idea is that the stems in (17c) are underlyingly specified with an empty slot in the onset, whose 11

Simultaneous moraicity on all three positions is presumably attested in Karo (see Topintzi 2010: 49).

Nina Topintzi

16

net effect is to block deletion (and compensatory lengthening), because of its intervening position between the two vowels. Effectively, then, (17c) acts as if it were a C-initial stem (17a). Data of this type can also be easily accommodated in government phonology (Kaye et al. 1990), which by its nature allows reference to empty positions. It is, however, not entirely clear that Seri cannot be accommodated by moraic theory alone (especially if onsets may bear moras). Unlinked moras appear in numerous works (cf. van Oostendorp 2005; Topintzi 2007) and are in fact suggested by Crowhurst herself. We could assume then that the input for [jo-amwx] is /jo-Mamwx/, where M indicates a floating mora. If on the surface this mora remains unassociated but anchored at the left edge of the stem, then it can produce the same blocking effect of deletion that Crowhurst achieves by means of an unassociated x-slot. Even if this is feasible, it is unlikely that all similar kinds of facts will be subject to reanalysis. One solution would be to reconsider representations that simultaneously use x-slots and moras, as Crowhurst does. This idea has reappeared in Muller’s (2001) Composite Model with respect to geminates, and in Vaux (2009) as a more complete model of timing. Whether such enrichment of the theory is justified remains to be seen. Alternatively, one could entertain Itô’s suggestion (1989: 255 and references therein) that “the role previously played by lexically empty skeletal slots can be taken over, wholly or in part, by bare melodic root nodes.”

4.3

Tone

Another phenomenon where onsets seem to be involved, albeit rarely, is tone (chapter 45: the representation of tone). Relevant cases reported include Musey (Shryock 1995) and Kpelle (Welmers 1962; Hyman 1985). In Musey, consonants are divided into Type A (or High consonants) and Type B (or Low consonants). Type A consonants include the sonorants and the historically voiceless obstruents. Type B ones correspond to the historically voiced consonants. Both Type A and B obstruents are basically voiceless (Shryock 1995: 68–69), with Type A stops presenting longer positive voice onset time (VOT), less closure voicing, and higher F0 at the onset of the following vowel than the Type B ones. The rightward displacement of lexical L tone when a suffix is added in (18) shows the genuine contrast between the two types of consonants as well as their tonal effects. When the lexical L tone shifts, the vowel that hosted the tone is interpreted as mid or high if the onset is Type A, but as low if the onset is Type B. (18)

Rightward displacement of lexical L tone in Musey a.

b.

cliticization of /-na/ Type A sà → sanà → sanà Type B Âù → Âùnà subjunctive Type A tò ‘sweep’ Type B dò ‘pick’

‘person’ ‘goat’

subjunctive with affixation tdå ‘sweep it’ dòå ‘pick it’

Onsets

17

Thus, at some level of representation, the onset consonants above seem to bear tone – be it by conditioning it or by having it floating in the input – which subsequently surfaces on the neighboring vowel to the right. What is more interesting is that the tone induced depends on the quality of the consonant involved: voiceless obstruents (and sonorants, which I will come back to in a moment) cause M tone, voiced obstruents cause L tone. This fact correlates precisely with data we find in tonogenesis (cf. Vietnamese (Haudricourt 1954) or synchronically in Kammu dialects (Svantesson 1983); see also chapter 97: tonogenesis), where the historical contrast between voiceless and voiced obstruents is neutralized in favor of voiceless obstruents and is reinterpreted by means of tone, as shown below. (19)

Common pattern in tonogenesis voicing contrast; no tone pa > ba >

no voicing contrast; presence of tone pá pà

This pattern is phonetically grounded: in voiceless obstruents, the cricothyroid muscle stretches the vocal folds to obstruct vocal fold vibration resulting in vocal fold tensing, which in turn leads to a higher F0. In voiced obstruents the larynx and hyoid bone are lower and a lowered larynx results in a lower F0 (Yip 2002: 6–7; Honda 2004). In fact, depression of F0 after voiced stops is very likely universal, as Kingston and Solnit (1988b) state (chapter 114: bantu tone). Sonorants, on the other hand, do not automatically perturb the F0 of adjacent vowels, and thus may cause either elevation or depression of the F0 (Kingston and Solnit 1988a: 276). This finding is also in line with the behavior of sonorants in onset-sensitive stress discussed above. Recall that in Karo sonorants act like voiceless obstruents in attracting stress, but in Pirahã like voiced obstruents in avoiding it. Reviewing the vast literature on the phonological effects of the onset/tone interaction phenomenon is well beyond the goals of the present chapter (see Yip 2002; Gordon 2006; van Oostendorp 2006; Tang 2008 for relevant overviews). For our purposes and in light of the data above, it suffices to say that Musey exhibits mixed behavior. On the one hand, it has not entirely lost the voicing contrast between stops (see the discussion on Type A and B consonants) – since it retains phonetic voicing by means of short vs. long VOT – but is moving in that direction, as the facts above reveal; on the other hand, it has introduced tone, which is commonly associated to specific onset quality, but has not (yet?) extended this pattern throughout the system. One thing seems quite clear: onsets in Musey may act as phonological tone bearers. And as expected, voiceless obstruents produce tone raising and voiced ones tone lowering. The more neutral sonorants here pattern with the voiceless obstruents. Along similar lines, we can understand the data in Kpelle. However, unlike Musey, Kpelle onsets act as surface tone-bearing units (TBUs). First, consider minimal pairs such as (20), where a sonorant onset can appear toneless, L-toned, or H-toned. This is hardly surprising, given the capacity of sonorants to bear any type of tone. (20)

mare-kêi åare ké 3are ké

‘a question’ ‘ask him’ ‘ask me’

Nina Topintzi

18

Moreover, the possessive form involves an underlyingly H-toned nasal prefix for the 1st singular or a floating L tone for the 3rd singular (plus the independent processes of voicing assimilation in obstruent-initial stems and total assimilation and nasal simplification in sonorant-initial stems),12 both of which surface on onset positions. (21)

Kpelle onsets as TBUs (Hyman 1985: 44) stem a.

b.

‘my’

‘his/her’

initial obstruent pólù 3bólù túe Udúé kFF ØgFF fíí 3víí

‹ólù ›úé fiFF ‚íí

‘back’ ‘front’ ‘foot, leg’ ‘hard breathing’

initial sonorant lbb Ubb jéé ±éé mXlóI 3XlóI JWI ±WI

Ybb ≤éé åXlóI ≤WI

‘mother’ ‘hand, arm’ ‘misery’ ‘tooth’

These examples show that sonorants and voiced obstruents may appear as surface TBUs, but the same does not hold for voiceless obstruents. This is entirely expected, given that the physical correlate of tone is F0, thus only voiced segments should be able to present it, i.e. vowels, sonorants, and voiced obstruents (Gordon 2006). The Musey data nonetheless have suggested that voiceless onsets should be allowed to be input phonological TBUs (a similar claim for Kpelle appears in Topintzi 2010); if this view is along the right lines, future investigation should focus on how the phonology–phonetics mapping of onset–tone association is accomplished.

5

Onset–coda weight asymmetry

Finishing this chapter, it should by now be obvious that while there is evidence that onsets participate in at least some of the phenomena that codas do, the frequency with which they do so is indisputably much lower and in some cases exceedingly rare. This issue has been mentioned but barely dealt with in the literature; nevertheless, it deserves some brief discussion. Of course, for those who deny any role for onsets in prosody (cf. the standard moraic theory of Hayes 1989), there is not much to explain in the first place. The asymmetry in behavior is the outcome of the more restricted – moraically speaking – structural representation of onsets, compared to that of codas. However, as we have just seen, this approach is too restrictive when it encounters many of the empirical data presented previously. 12

A reviewer points out that the input for the 3rd singular could instead include a low-toned nasal that on the surface fuses with the onset consonant, similarly to what happens in sonorant-initial stems. This is certainly a possibility, but not one Hyman seems to assume. In any case, this issue is orthogonal to the point made here.

Onsets

19

To my knowledge, the first explicit attempt to account for the rarity of onset weight and hence of the onset–coda prosodic asymmetry was offered by Goedemans (1998). Through a set of perception experiments using synthetic stimuli, Goedemans found that Dutch listeners are more attuned to perceive fluctuations in vowel or coda duration rather than onset duration. He next devised an additional experiment to check for the possibility that there is inherently a human bias against perceiving onset duration, but found no evidence in support of this. He therefore concluded that the effect described above must genuinely be due to the weightlessness of onsets. One problem posed by this account is that Goedemans found that listeners recognize duration shifts in onset sonorants better than obstruents. This implies that the former should be preferred as weight bearers to the latter, contra the empirical data, which suggest that in onsets the real difference is between voiced and voiceless obstruents (and that sonorants may pattern with either; cf. Karo vs. Pirahã). More troublesome for this proposal is how to accommodate later work by the same author (cf. Hajek and Goedemans 2003), where onset weight is emphatically argued for, albeit for geminates only (Rob Goedemans, personal communication). Other, more functional accounts of the onset–coda weight asymmetry include Smith (2005) and Gordon (2005), both of which accept onset-sensitivity, but only with regard to stress. To explain why onsets may have a stress-attracting effect, they offer variants of the same idea relating phonological considerations to more general cognitive abilities, such as the sensitivity to auditory stimuli (Viemeister 1980; Delgutte 1982). More specifically, they allude to the evidence of “neural response patterns that the presence of an onset, and specifically a low-sonority onset, does in fact enhance the perceptual response to a syllable” (Smith 2005: 50). Empirically, though, as we know, sonorant onsets may also contribute to weight (or prominence), a fact that both functional accounts fail to capture. Despite this problem, Gordon (2005) claims that in most cases, i.e. most languages, the onset effect is subordinated to the perceptual energy of the rime itself, which is why rimal weight is prioritized over onset weight. Finally, Topintzi (2006, 2010) does not confront this issue in much detail, but nonetheless claims that instead of a single property, it is a constellation of phonological factors, perhaps complemented by the functional accounts above, which may prove enlightening (for details, see Topintzi 2010: §3.3.3, §5.4.1, §6.2.3). For example, the rarity of onset-sensitive tone is attributed to the fact that tone and onset-weight requirements are incompatible with one another. Tone requires the presence of F0, whereas moras that can bear tone in the onset are best assigned to voiceless onsets, which by nature lack F0. In spite of the virtues of each approach, none simultaneously manages to combine accurate empirical coverage with a convincing account that acknowledges the onset–coda asymmetry in its correct perspective and offers a plausible explanation. Future research may fill in the missing pieces of the puzzle.

ACKNOWLEDGMENTS Thanks to Beth Hume, Marc van Oostendorp, and two anonymous reviewers for their instructive comments on various aspects of this chapter. All errors are my own.

20

Nina Topintzi

REFERENCES Abrahamson, Arno & Joyce Abrahamson. 1984. Os fonemas da língua Júma. In Robert A. Dooley (ed.) Estudos sobre línguas tupí do brasil, 157–174. Brasília: Summer Institute of Linguistics. Baertsch, Karen. 2002. An optimality theoretic approach to syllable structure: The split margin hierarchy. Ph.D. dissertation, Indiana University. Bagemihl, Bruce. 1998. Maximality in Bella Coola (Nuxalk). In Ewa Czaykowska-Higgins & M. Dale Kinkade (eds.) Salish languages and linguistics: Theoretical and descriptive perspectives, 71–98. Berlin & New York: Mouton de Gruyter. Beltzung, Jean-Marc. 2007. Allongements compensatoires: Une typologie. Paper presented at the 7th Biennial Meeting of the Association for Linguistic Typology, Paris. Handout available (June 2010) at http://jeanmarc.beltzung.free.fr/?page_id=3. Berent, Iris, Donca Steriade, Tracy Lennertz & Vered Vaknin. 2007. What we know about what we have never heard: Evidence from perceptual illusions. Cognition 104. 591–630. Blevins, Juliette. 1995. The syllable in phonological theory. In Goldsmith (1995), 206–244. Blevins, Juliette. 2001. Where have all the onsets gone? Initial consonant loss in Australian Aboriginal languages. In Jane Simpson, David Nash, Mary Laughren, Peter Austin & Barry Alpher (eds.) Forty years on: Ken Hale and Australian languages, 481–492. (Pacific Linguistics 512.) Canberra: Australian National University. Boersma, Paul. 2007. Some listener-oriented accounts of h-aspiré in French. Lingua 117. 1989–2054. Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press. Breen, Gavan & Rob Pensalfini. 1999. Arrernte: A language with no syllable onsets. Linguistic Inquiry 30. 1–25. Broselow, Ellen. 1995. Skeletal positions and moras. In Goldsmith (1995), 175–205. Buckley, Eugene. 1992. Kashaya laryngeal increments, contour segments, and the moraic tier. Linguistic Inquiry 23. 487–496. Buckley, Eugene. 2003. Children’s unnatural phonology. Proceedings of the Annual Meeting, Berkeley Linguistics Society 29. 523–534. Buller, Barbara, Ernest Buller & Daniel L. Everett. 1993. Stress placement, syllable structure, and minimality in Banawá. International Journal of American Linguistics 59. 280–293. Cho, Seung-bog. 1967. A phonological study of Korean with a historical analysis. Uppsala: Almqvist & Wiksell. Clements, G. N. 1990. The role of the sonority cycle in core syllabification. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 283–333. Cambridge: Cambridge University Press. Crowhurst, Megan J. 1988. Empty consonants and direct prosody. Proceedings of the West Coast Conference on Formal Linguistics 7. 67–79. Curtis, Emily. 2003. Geminate weight: Case studies and formal models. Ph.D. dissertation, University of Washington. Davis, Stuart. 1985. Topics in syllable geometry. Ph.D. dissertation, University of Arizona. Davis, Stuart. 1988. Syllable onsets as a factor in stress rules. Phonology 5. 1–19. Davis, Stuart. 1999. On the representation of initial geminates. Phonology 16. 93–104. Davis, Stuart & Gina Torretta. 1998. An optimality-theoretic account of compensatory lengthening and geminate throwback in Trukese. Papers from the Annual Meeting of the North East Linguistic Society 28. 111–125. Davis, Stuart, Linda Manganaro & Donna Jo Napoli. 1987. Stress on second conjugation infinitives in Italian. Italica 64. 477–498. de Lacy, Paul. 2006. Markedness: Reduction and preservation in phonology. Cambridge: Cambridge University Press.

Onsets

21

Delgutte, Bertrand. 1982. Some correlates of phonetic distinctions at the level of the auditory nerve. In Rolf Carlson & Björn Granström (eds.) The representation of speech in the peripheral auditory system, 131–150. Amsterdam: Elsevier Biomedical. Dell, François & Mohamed Elmedlaoui. 1985. Syllabic consonants and syllabification in Imdlawn Tashlhiyt Berber. Journal of African Languages and Linguistics 7. 105–130. Duanmu, San. 2008. Syllable structure: The limits of variation. Oxford: Oxford University Press. Erwin, Sean. 1996. Quantity and moras: An amicable separation. UCLA Occasional Papers in Linguistics 17. 2–30. Everett, Daniel L. 1988. On metrical constituent structure in Pirahã phonology. Natural Language and Linguistic Theory 6. 207–246. Everett, Daniel L. & Keren Everett. 1984. On the relevance of syllable onsets to stress placement. Linguistic Inquiry 15. 705–711. Fudge, Erik C. 1969. Syllables. Journal of Linguistics 5. 253–286. Gabas, Nilson, Jr. 1999. A grammar of Karo (Tupi). Ph.D. dissertation, University of California, Santa Barbara. Gahl, Susanne. 1996. Syllable onsets as a factor in stress rules: The case of Mathimathi revisited. Phonology 13. 329–344. Geudens, Astrid & Dominiek Sandra. 2003. Beyond implicit phonological knowledge: No support for an onset–rime structure in children’s explicit phonological awareness. Journal of Memory and Language 49. 157–182. Gnanadesikan, Amalia E. 1995. Markedness and faithfulness constraints in child phonology. Unpublished ms., University of Massachusetts, Amherst (ROA-67). Goedemans, Rob. 1998. Weightless segments. Ph.D. dissertation, University of Leiden. Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell. Gordon, Matthew. 2005. A perceptually-driven account of onset-sensitive stress. Natural Language and Linguistic Theory 23. 595–653. Gordon, Matthew. 2006. Syllable weight: Phonetics, phonology, typology. London: Routledge. Gouskova, Maria. 2001. Falling sonority onsets, loanwords and syllable contact. Papers from the Annual Regional Meeting, Chicago Linguistic Society 37. 175–186. Gouskova, Maria. 2004. Relational hierarchies in Optimality Theory: The case of syllable contact. Phonology 21. 201–250. Greenberg, Joseph H. 1978. Some generalizations concerning initial and final consonant clusters. In Joseph H. Greenberg, Charles A. Ferguson & Edith A. Moravcsik (eds.) Universals of human language, vol. 2: Phonology, 243–279. Stanford: Stanford University Press. Hajek, John & Rob Goedemans. 2003. Word-initial geminates and stress in Pattani Malay. The Linguistic Review 20. 79–94. Harris, John. 2006. The phonology of being understood: Further arguments against sonority. Lingua 116. 1483–1494. Haudricourt, André-George. 1954. De l’origine des tons en vietnamien. Journal Asiatique 242. 69–82. Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306. Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press. Honda, Kiyoshi. 2004. Physiological factors causing tonal characteristics of speech: From global to local prosody. In Bernard Bel & Isabelle Marlien (eds.) Speech prosody 2004. Nara, Japan. Available (June 2010) at www.isca-speech.org/archive/sp2004/ sp04_739.pdf. Hooper, Joan B. 1976. An introduction to natural generative phonology. New York: Academic Press.

22

Nina Topintzi


56 Sign Syllables

Ronnie Wilbur

1 Introduction

This chapter focuses on the notion of syllable in sign languages. Although there is now a consensus on the defining feature of a syllable in sign languages, i.e. that there must be a movement, the initial idea of having “syllables” in sign languages met with considerable resistance on its introduction in the early 1980s, in large part because sign languages are fundamentally monosyllabic languages (see Coulter 1982 on American Sign Language (ASL); similar evidence has been provided for Finnish Sign Language by Jantunen 2007). There was also a strong undercurrent that using concepts borrowed from spoken language linguistics for sign language phenomena would be problematic, if not inappropriate. However, I had an opportunity to ask my colleague Ray Kent, a speech researcher who was then editor of the Journal of Speech and Hearing Research, what he would require if I were to send his journal a paper on syllables in sign languages. He said he would look for evidence that the concept was linguistically meaningful and could be reliably measured, and his response will begin our tour on this topic. Actually, proving to spoken language researchers that there are syllables in sign languages on those two criteria was remarkably easy (Wilbur and Nolen 1986; Wilbur and Allen 1991). In contrast, the phonological representation of sub-syllabic structure has been under constant and lively debate. Accordingly, the presentation of evidence for the existence of syllables will be brief, and the bulk of the chapter will focus on describing the issues of disagreement and providing evidence for an answer to the question of the representation of syllables in sign languages.

2 Historical background

Early sign language research treated the sign as the unit of analysis. This is best observed in Stokoe (1960), where each sign was treated as a unit. Then, when this unit was analyzed further, it was observed that a sign was composed of a “simultaneous” bundle of aspects/primes/parameters, including the “big four”: handshape, location, movement, and orientation (Stokoe 1960; Friedman 1974, 1977; Battison 1978; Siple 1978; Klima and Bellugi 1979; Wilbur 1979; see also


Figure 56.1 ASL ARRIVE is a monosyllabic form (arrow shows direction of movement)

Figure 56.2 ASL BABY has two syllables, i.e. two movements (arrows show direction of movement)

chapter 9: handshape in sign language phonology; chapter 10: the other hand in sign language phonology; chapter 24: the phonology of movement in sign language). Several studies suggested that there are syllables in ASL and that these syllables have internal “sequential” organization (Kegl and Wilbur 1976; Chinchor 1978, 1981; Newkirk 1979, 1980, 1981; Liddell 1984). Before addressing the question of internal organization of sign syllables, some clarification of the notion “syllable” in sign languages is necessary. To better appreciate the status of a syllable in sign languages, we must consider the difference between a syllable and a sign. In many cases, the sign is a single syllable and the boundaries of the two, i.e. a syllable or a sign, coincide (Coulter 1982, 1990). Figure 56.1 is an example of a single-syllable sign. In other signs, such as BABY in Figure 56.2, there are two syllables in a single sign. The sign BABY is a larger unit than ARRIVE (Figure 56.1). In still other signs, such as MOTHER (Figure 56.3), there are specifications for handshapes, locations, and hand orientations, but, critically, these signs do not have their own movement specifications. Since every syllable must have a movement, the phonological specification for these lexical items is smaller than a syllable; it has been proposed that the epenthetic/transition movement to, or away from, the target location provides the prosodic feature needed to produce a full syllable (Wilbur 1985; Brentari 1990b, 1998; Geraci 2009). We have so far sketched sign syllables by providing three groups of signs with different movement specifications, i.e. with single movement, with two movements,


Figure 56.3 ASL MOTHER is a sign without a movement specification. By permission of Dr. Bill Vicars

and without lexical movement. We now consider the difference between a syllable and a morpheme in order to distinguish their functions. A morpheme is defined as the smallest possible unit of meaning. In Figures 56.1–3, each sign is also a single morpheme. In Figure 56.1, the morpheme and the syllable are the same size. In Figure 56.2, the morpheme is larger than the syllable (it has two syllables), and in Figure 56.3, the morpheme is smaller than the syllable (it is missing movement). A morpheme may also be as small as the feature specification for a single handshape, as in classifier constructions (see chapter 9: handshape in sign language phonology), or location, as in verb agreement. It should be noted that two-syllable lexical items are highly constrained with respect to their movement specifications: the movement in the second syllable is either the exact opposite (180° rotation) of the movement in the first syllable (as in BABY, Figure 56.2) or it is 90° rotated (Figure 56.4; note transitional movement inserted when the end of the first movement is not the starting position of the second movement) (Wilbur and Petersen 1997). Clearly movement is central to syllable structure (see chapter 24: the phonology of movement in sign language). The first attempt to break sign movement into smaller sequential pieces was Newkirk (1979, 1980, 1981). Considering rhythmic features of movement, he analyzed them into [onset] [movement] [offset]. Subsequently, a number of sequential and simultaneous proposals were offered. This brings us to the ongoing debate – what is the internal structure of a syllable with respect to sequentiality and simultaneity? In the following section, I provide an overview of what everyone does agree upon, i.e. that syllables in sign languages exist. In the subsequent section, I provide evidence of the behavior of syllables with respect to higher phonological organization. Finally, we dive inside the sign syllable and consider the evidence for the two theoretical options – sequential and simultaneous organization of syllable structure.


Figure 56.4 ASL CANCEL/CORRECT/CRITICIZE and schematic of movement showing second syllable (3 to 4) perpendicular to first (1 to 2)

3 Sign syllables exist

Syllables have been reliably measured, and in conversational contexts they have roughly the same duration as spoken language syllables (Wilbur and Nolen 1986). As will become clear below, there has been no shortage of linguistic uses for the syllable in sign language (morpho-)phonology, hence it is linguistically meaningful. It is fair to say that sign language phonologists now take the notion of sign syllable as a given, and that movement is its nucleus (the carrier of its perceptual salience; Jantunen and Takkinen 2010).

3.1 Syllable measurements

Investigators have measured sign duration, including signs that are clearly single syllables (Bellugi and Fischer 1972; Friedman 1976; Liddell 1978, 1984). For those signs which are monosyllabic, the measured duration means range from 233 to 835 msecs as a function of context. Liddell (1978) reported the effects of sentence position and syntactic function on duration of the monosyllabic signs DOG and CAT. His measurements show phrase-final lengthening, as the durations were longest in sentence-final position. Duration in sentence-initial position was next longest, and medial position in relative clauses had the shortest duration. His measurements also show a syntactic effect: objects were shorter than subjects or heads of relative clauses. My investigation of syllable duration began in 1984, when videotape was “reel to reel,” which meant that the tapes could be moved by hand, forward and backward, and measurement was in “fields” – 60 per second. To measure syllables, movements, and holds, we had to provide our own mechanical guidelines and demonstrate that they could be reliably used (Wilbur and Nolen 1986; Wilbur and


Schick 1987). We started with the cues identified by Green (1984) for beginnings and ends of signs, which worked well for signs that were perceptually monosyllabic. These cues included points of contact and changes in facial expressions and eye gaze. However, Green’s procedures were not sufficient for determining syllable boundaries when the sign and the syllable are not coterminous, i.e. when we have more than one syllable in a sign. To capture the behavior of multisyllabic signs such as bidirectional signs (Figure 56.2 above), reduplicated forms, and compounds, additional cues were needed. With the aid of native signers, we determined that a change in the direction of movement marked a boundary between two adjacent syllables. For elliptical movements, we accepted Newkirk’s (1979, 1980, 1981) argument that they were segmentable into two parts, and then we used the change in direction of movement as the boundary between the two parts. By contrast, circular movements, which show no internal structure, were treated as one syllable per circle. For holds, we established a procedure in which the end of a hold would be marked by one or more of the following cues: start of the next movement, loss of tension in the signing hand(s), change of eye gaze, initiation of signing by the other signer, or change of eye gaze by the other signer (Wilbur and Nolen 1986). Syllables were measured by two people at a time; if they could not agree, a third person was consulted. Over three thousand syllables were measured in four situations – natural conversation, elicited paragraphs, lists of signs, and phrases and compounds. In conversations, 889 syllables from three signers had a mean of 248 msecs, comparable to the estimated 250 msecs for spoken English (Adams 1979; Hoequist 1983). This similarity may be a reflection of an underlying timing mechanism for motion that may surface not only in speech and signing, but also in non-linguistic motor behaviors. For example, in baseball, a bat swing takes about 200 msecs (Schmidt and Lee 2005). For the lists, mean syllable duration was 299 msecs for the first production, and 417 msecs for the second. Thus, signers can have different durations at different times. For paragraphs, 14 signers produced paragraphs with either a stressed or unstressed target sign. The stressed target mean was 317 msecs, and the unstressed target mean 299 msecs. There were more syllables in the stressed condition, i.e. repeated syllables and/or resyllabification (similar to English please pronounced as puh-leeze). In the last condition, compounds may have two syllables or can be reduced to one (Coulter 1982; Liddell 1984). Eighteen sets of compounds and their two component signs were provided by Ursula Bellugi. In each set, the two signs appeared in a phrase in isolation (e.g. FACE CLEAN) and in context (HE HAS FACE CLEAN ‘He has a clean face’). The same morphemes also appeared in a compound (FACE-CLEAN ‘handsome’) in isolation and in context (HE FACE-CLEAN ‘He is handsome’). The compounds had significantly more syllables per sign than simple lexical items (Wilbur and Nolen 1986). Also, the signs in isolation (whether simple lexical items or compounds) had significantly more syllables than in context, reflecting prosodic effects. Thus signers manipulate both syllable duration and number of syllables in their sign productions. The evidence so far lends support to one of the two criteria that started our discussion, i.e. 
“Can syllables be reliably measured in sign languages?” We turn now to the other criterion, i.e. “Are syllables linguistically useful?”
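These boundary criteria are mechanical enough to be stated as a procedure. The following sketch is purely illustrative: the sample data, the function names, and the direction-reversal test are my simplification of the criteria just described, not the actual hand-measurement protocol of Wilbur and Nolen (1986):

# Minimal sketch: segment hand-position samples (60 fields/sec video)
# into syllables at reversals of movement direction, following the
# boundary criterion described above. Hypothetical, for illustration only.

def direction(p, q):
    """Sign of frame-to-frame displacement along each axis."""
    return tuple((q[i] > p[i]) - (q[i] < p[i]) for i in range(len(p)))

def syllable_boundaries(positions):
    """Return indices where movement direction fully reverses."""
    boundaries = []
    prev_dir = None
    for i in range(1, len(positions)):
        d = direction(positions[i - 1], positions[i])
        if all(axis == 0 for axis in d):   # no movement this field: ignore
            continue
        if prev_dir is not None and all(a == -b for a, b in zip(d, prev_dir)):
            boundaries.append(i)           # full reversal = syllable boundary
        prev_dir = d
    return boundaries

# A bidirectional sign like BABY: the hand moves one way, then back.
samples = [(0, 0), (1, 0), (2, 0), (3, 0), (2, 0), (1, 0), (0, 0)]
print(syllable_boundaries(samples))        # [4] -> two syllables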

4 Phonological applications identified for “syllable”

In this section, I briefly review the arguments that have been offered to show that syllables contribute to the statement of phonological processes and to our understanding of why some processes behave the way they do. The notion of syllables has proven to be useful in the statement of a variety of historical changes (Battison 1978; Frishberg 1978), synchronic morphological processes (Chinchor 1981), and phonological processes (Coulter 1982; Wilbur 1990b, 1993). Blevins (1993), Padden (1993) and the work of Brentari (1990a, 1990b, 1993, 1996, 1998) provide further arguments in favor of the role of syllable structure in ASL phonology. I review only a few, and refer the reader to the original authors for further evidence and discussions.

4.1 Fingerspelling and fingerspelled loan signs

Battison’s (1978) discussion of the creation of new signs from fingerspelled words provides data to support the notion of syllables. Theoretically, each fingerspelled letter consists of a handshape and, when produced in slow sequences, a transition movement (handshape change) to reach each handshape. Thus, there could potentially be as many syllables as there are letters in the word being fingerspelled, because there could be one movement to make each handshape and each could therefore be a syllable. In actuality, fluent fingerspelling is performed with a phrasal rhythm that smoothes the transition handshape changes and reduces the prominence of certain handshapes while increasing the prominence of others (Akamatsu 1982, 1985; Wilcox 1992). In the process of becoming a lexicalized fingerspelled loan sign, some letters in a word are dropped, and the remaining handshapes are associated to syllabic nodes, reducing the number of syllables produced. Fingerspelling the word “sick” (Figure 56.5) involves handshape changes from each letter to the following letter. Since fingerspelling is based on English spelling, each English word will have a different set of handshapes and a corresponding different set of transitional handshape changes. In contrast, in the fingerspelled loan sign #SICK (where # denotes a fingerspelled loan sign; Figure 56.6), the middle letters have been dropped, and the handshape change from S to K has created the movement nucleus of the syllable, to which a slight directional path movement has been added (the arrow does not do justice to this – the middle finger can appear to flick forward from the fist while the index finger straightens up). At the syllable-internal level, the features for

Figure 56.5 Handshapes S, I, C, K, with three transition movements between them

Figure 56.6 Fingerspelled loan sign #SICK, with one lexical movement

the handshapes S and K are associated with the same syllabic node, i.e. the only syllabic node. The handshape change from S to K is not permitted in core lexical items, as reflected in Brentari’s model; this form is thus clearly identified as being of foreign (English) origin. Brentari (1994) used the fingerspelled loan signs from published ASL lectures (Valli and Lucas 1992) to determine the processes involved in lexicalization. She found that long fingerspelled words with as many as eight or more letters reduced to fewer handshapes and just two movements. The result is that the newly lexicalized forms fit the phonotactics of ASL, having a maximum of two syllables.
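The reduction Brentari describes can be pictured with a toy function: keep the first and last handshapes, and let each surviving handshape change supply one movement nucleus, as in #SICK. Everything below (the names, the two-syllable assertion) is my illustrative rendering, not her formal analysis:

# Toy sketch of fingerspelled-loan lexicalization as described for #SICK:
# medial letter handshapes drop; the surviving initial and final
# handshapes become the start and end of a single movement (syllable).
# Hypothetical helper names; not Brentari's (1994) formal machinery.

def lexicalize(letters, max_syllables=2):
    """Reduce a fingerspelled letter sequence to <= max_syllables movements."""
    kept = letters if len(letters) <= 2 else [letters[0], letters[-1]]
    # Each adjacent handshape pair defines one movement nucleus.
    movements = [(kept[i], kept[i + 1]) for i in range(len(kept) - 1)]
    assert len(movements) <= max_syllables   # ASL phonotactic ceiling
    return movements

print(lexicalize(list("SICK")))   # [('S', 'K')] -> one handshape-change syllable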

4.2 Evidence for a sonority hierarchy

Several researchers have suggested a relationship between visibility and syllable sonority (Corina 1990; Perlmutter 1992; Sandler 1993; Brentari 1998). The sonority hierarchy treats movements made with joints closer/proximal to the body/trunk, such as elbows and shoulders, as more sonorous, because of their visibility in motion when compared to those lower down and more distal, such as hands and fingers, which are considered less visible and hence less sonorous (1) (see chapter 49: sonority for an overview of issues surrounding the sonority hierarchy in spoken language): (1)

Sonority hierarchy with respect to the relevant joints (from Brentari 1998)

    most sonorous                                                      least sonorous
    shoulder joint > elbow joint > wrist joint > base finger joints > non-base finger joints

Movements made with the wrist joint have a higher sonority value, i.e. are more visible due to larger movements, than movements made with the use of finger joints. With respect to the development of fingerspelled loan signs above, the addition of the slight path movement in the loan sign #SICK may be viewed as adding a higher sonority value to the less sonorous handshape change from S to K, thereby producing a more sonorous syllable and hence a more acceptable lexical item. Thus what we observe is that the proposed sonority hierarchy provides a rationale for why finger positions are dropped and wrist or elbow movements (depending on how the small path is made) are retained in fingerspelled loan words (chapter 95: loanword phonology).
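The hierarchy in (1) amounts to a simple ranking, which can be stated as a comparison. The numeric ranks below are an expository device of mine; only the ordering itself comes from Brentari (1998):

# Sketch of the joint-based sonority hierarchy in (1): higher-ranked
# (more proximal) articulating joints yield more sonorous movements.
# Illustrative only; the ranking itself is from Brentari (1998).

SONORITY = {"shoulder": 5, "elbow": 4, "wrist": 3,
            "base_finger": 2, "non_base_finger": 1}

def more_sonorous(joint_a, joint_b):
    return SONORITY[joint_a] > SONORITY[joint_b]

# Adding a small path movement (wrist/elbow) to the S-to-K handshape
# change (non-base finger joints) raises the syllable's sonority peak:
print(more_sonorous("wrist", "non_base_finger"))   # True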


Figure 56.7 Metathesis for the ASL sign DEAF. (a) A-to-B. (b) B-to-A

4.3 Contact metathesis

Another situation where the notion of syllable is phonologically relevant is found with two-location contacting signs, which in certain circumstances can undergo a change that causes the two locations to switch their order (i.e. metathesis; Kegl and Wilbur 1976; Johnson 1986; Sandler 1986; Wilbur 1987; chapter 59: metathesis). For example, the location points in signs such as DEAF, PARENTS, and FLOWER may switch from A-to-B to B-to-A (Figure 56.7), depending on the preceding phonological context. The process of metathesis is limited to signs that are both single morphemes and single syllables. Some were originally compounds and may be articulated in a way which reflects their origins. For example, the sign PARENTS might be deliberately made to emphasize its origin from FATHER + MOTHER: movement to contact at the forehead (FATHER) and then movement to contact at the chin (MOTHER). This form would be two syllables, and not subject to metathesis. In the monosyllabic version of PARENTS, the forehead contact is preceded by a transition movement, and the lexical syllable consists of the change in location from the forehead contact to the chin contact. This latter form can undergo metathesis, i.e. the hand can touch the chin first and the forehead second. The statement of metathesis in terms of syllables greatly simplifies the formalization of the rule. We can see that the statement in terms of syllables is the correct one, as opposed to morphemes or signs. PARENTS can undergo metathesis in one form of production but not in another, so the rule about which signs can undergo metathesis could not refer to “contacts in the same sign” or “contacts in the same morpheme,” but only to “contacts in the same syllable.”
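The syllable-based statement of the rule can be made explicit. The sketch below uses hypothetical sign records with invented field names, simply to show how the condition correctly separates the two productions of PARENTS in a way that sign- or morpheme-based statements could not:

# Sketch of the syllable-based statement of contact metathesis: only
# signs whose two contact locations fall inside one syllable (and one
# morpheme) may swap them. Hypothetical data structure, for illustration.

def can_metathesize(sign):
    return (sign["syllables"] == 1 and sign["morphemes"] == 1
            and len(sign["contacts"]) == 2)

def metathesize(sign):
    if can_metathesize(sign):
        sign = dict(sign, contacts=list(reversed(sign["contacts"])))
    return sign

deaf = {"syllables": 1, "morphemes": 1, "contacts": ["A", "B"]}
parents_compound = {"syllables": 2, "morphemes": 1, "contacts": ["forehead", "chin"]}
parents_mono = {"syllables": 1, "morphemes": 1, "contacts": ["forehead", "chin"]}
print(metathesize(deaf)["contacts"])              # ['B', 'A'] - applies
print(metathesize(parents_compound)["contacts"])  # unchanged - blocked
print(metathesize(parents_mono)["contacts"])      # ['chin', 'forehead'] - applies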

4.4 Handshape change (change in aperture)

Brentari and Poizner (1994) show that handshape change timing is different within syllables than it is between syllables (chapter 9: handshape in sign language phonology). Within syllables, if there is path movement and handshape change, the handshape change coordinates with the beginning and end of the path. However, when two signs in sequence have different handshapes, there must be a transitional handshape between the end of the first sign and the beginning of the second (as discussed for fingerspelling, above). In such conditions, the


change in handshape is not coordinated with the path movement in the same way as within signs, that is, the timing of the change does not distribute evenly over the transitional path movement.

4.5 Consistency in movement

Likewise, Tyrone et al. (2007) compare monosyllabic sign movements toward the body to the same location (e.g. forehead) in two conditions: (i) the movement is part of the sign (THINK), and (ii) the movement is transitional prior to the sign (SMART). They report that within-syllable within-sign movements show typical bell-shaped velocity curves for targeted movement, whereas transitional movement between signs is less regular. These findings, along with the handshape change findings above, converge on the necessity of separating phonological syllable movement from phonetic epenthetic/transitional movement.

4.6 Minimal word

Simply put, the syllable is the smallest possible well-formed sign/word. Furthermore, two syllables is the maximum for well-formed core lexical signs (Perlmutter 1992; Sandler 1993; Brentari 1998; Jantunen 2007). Alternative formulations without mention of syllables would necessarily be more complex.

4.7 Prosodic constraints

Miller (1997) argued, on the basis of Quebec Sign Language (LSQ), that Phonological Phrases require a disyllabic foot. Similarly, van der Kooij and Crasborn (2008) suggest that in Sign Language of the Netherlands (NGT) the phonological constraint on the addition of sentence-final pointing must be stated in syllabic terms: sentence-final pointing is permitted only if the outcome is a disyllabic foot. Wilbur (1999a) observes that ASL pronouns in sentence-final position tend to be extrametrical with respect to stress assignment at the phrase level, supporting prosodic constraints proposed in Halle and Vergnaud (1987) for spoken languages.

5 Syllables and prosody

The last two arguments in favor of syllables also provide evidence for the prosodic hierarchy in sign languages. That is, metrical structure (lexical, phrasal, and clausal stress assignment), rhythmic structure, and intonational phrasing are dependent to some degree on the syllable level. (2) shows the prosodic hierarchy we adopt for further discussions (see chapter 33: syllable-internal structure; chapter 40: the foot; chapter 51: the phonological word; chapter 57: quantity-sensitivity; chapter 84: clitics for more discussion of the prosodic hierarchy). Prosodic words will be discussed in this chapter; for discussion of Intonational Phrases see Wilbur (1994), Sandler and Lillo-Martin (2006), and Weast (2008). (2)

Prosodic hierarchy

    syllable < prosodic word < prosodic phrase < intonational phrase


Figure 56.8 One prosodic word, composed of GIVE + distributive aspect, repeated five times, accompanied by one Posture-NM, followed by the next prosodic word, containing WORK

5.1 Prosodic words

As indicated, the minimal prosodic word is at least one syllable, and the prosodic constraint on well-formed lexical items is a maximum of two syllables. Brentari and Crossley (2002) demonstrated that changes in lower face tension (mouth and cheeks) mark the end of a prosodic word (PW), which is above the syllable in the prosodic hierarchy. Figure 56.8 shows a single lower face position, i.e. closed mouth with lip corners slightly down, referred to as posture non-manuals (P-NM), which stretches over one long PW, followed by the sign WORK, which has a round mouth and is in a different prosodic word. The context was “every year at Christmas time, the boss gives each of the employees a gift.” Note that the single PW contains five syllables (five repetitions of the lexical item GIVE-A-GIFT). (3) represents the marking of the relevant prosodic words through sign language glossing conventions. The tier above the glosses represents non-manual marking, and the line indicates the spread of the non-manual marker: (3)

Prosodic grouping for Figure 56.8

     P-NM                            P-NM
    [GIVE-A-GIFT [Repeat × 5]]PW    [WORK(ERS)]PW

In contrast, Figure 56.9 shows “one car hits another car three times.” The signer produces three mouth changes (Transition-NMs), once for each repetition. These changes result in three PWs, as represented in (4): (4)

Prosodic grouping for Figure 56.9

     T-NM           T-NM           T-NM
    [HIT-CAR]PW    [HIT-CAR]PW    [HIT-CAR]PW

Figure 56.9 Sequence of three CAR-HITs, involving three syllables, three mouth changes, and three PWs

5.2 Stress assignment

Stress assignment is a prosodic process, and may occur on lexical items, compounds, and phrases. Early research on ASL stress focused on marking stress on lexical items (Covington 1973; Friedman 1976; Wilbur and Schick 1987; Coulter 1990). Stressed signs can be set off from unstressed signs by several cues: (i) faster/shorter transition movement than between unstressed signs, breaking the rhythmic pattern; (ii) higher placement in the signing space compared to their unstressed counterparts; (iii) increased repetitions compared to their unstressed counterparts, changing the duration; (iv) increased speed (higher peak instantaneous velocity) compared to their unstressed counterparts; (v) increased muscle tension compared to their unstressed counterparts; and (vi) a following pause (Wilbur and Schick 1987; Wilbur 1990a, 1990b, 1999b, 2009; Allen et al. 1991).

5.2.1 Lexical items

So far, no sign language has been shown to have distinctive lexical stress, comparable to English ’permit and per’mit (Jantunen and Takkinen 2010). The predominance of monosyllabic lexical items is partly responsible for this absence. Another reason is that polysyllabic signs are restricted to three possibilities:

Lexicalization of repetition: A sign may have more than one syllable if it is formed as a result of lexicalization of a repeated form (e.g. ASL FINGERSPELL; Brentari 1998: 169). The result is a two-movement sign with a Return transition in the middle: A-Return-A. In these forms, only the first syllable is prominent/stressed (Supalla and Newport 1978; Coulter 1990).

Lexical disyllables: A sign may have two syllables if it is a lexical disyllable, i.e. if the morpheme itself requires two syllables. There are two types of disyllables, both of which are subject to constraints on the nature of the movements in each syllable (Wilbur 1990b). In the first type, the movement of the second syllable must be rotated in direction 180° from that of the first, returning the hands to their original location (BABY; Figure 56.2). In the second type, the movement of the second syllable is rotated 90° from the first (creating a crossing movement) (CANCEL; Figure 56.4). Supalla and Newport (1978) discuss the first type, and note that prominence is equal on both syllables. It is also the case that prominence is equal on both syllables in the second type. Thus all lexical disyllables have equal stress on both syllables. Similarly, van der Kooij and Crasborn (2008) show that NGT has both trochaic and iambic stress patterns for disyllabic signs, the type being predictable on the basis of the phonotactics of the rest of the sign. Thus, as for ASL, stress is not distinctive in NGT. Lexical items have stress on the first, and perhaps only, syllable. Lexical disyllables are exceptional in being specified at the morphemic level for two syllables and in requiring equal prominence on both syllables.

Compounds and phrases: A sign may have two syllables if it is a compound, with the first weaker than the second. Unlike lexical items, the assignment of stress to ASL compounds and syntactic phrases follows a very general pattern. In a compound or phrase, a single stress is assigned to the most prominent syllable of the rightmost lexical item.
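This rightmost pattern, together with the lexical generalizations above, is algorithmic enough to state directly. The representation below is invented for illustration:

# Toy statement of the stress generalizations above: lexical items are
# stressed on the first syllable; lexical disyllables carry equal
# prominence; in a compound or phrase, one stress goes to the prominent
# syllable of the rightmost item. Invented representation.

def lexical_stress(sign):
    if sign.get("lexical_disyllable"):
        return [1] * len(sign["syllables"])          # equal prominence (e.g. BABY)
    return [1] + [0] * (len(sign["syllables"]) - 1)  # first syllable prominent

def phrasal_stress(items):
    """One stress on the prominent (first) syllable of the rightmost item."""
    pattern = [[0] * len(sign["syllables"]) for sign in items]
    pattern[-1][0] = 1
    return pattern

face = {"syllables": ["FACE"]}
clean = {"syllables": ["CLEAN"]}
baby = {"syllables": ["s1", "s2"], "lexical_disyllable": True}
print(lexical_stress(baby))              # [1, 1]
print(phrasal_stress([face, clean]))     # [[0], [1]] - stress on CLEAN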

6 The internal structure of syllables

6.1 The debate

Historically, models of the internal structure of syllables (chapter 33: syllable-internal structure) have taken one of two views. The first view is that sign syllables, like spoken syllables, are composed of sequences of segments (chapter 54: the skeleton). These segments are of two types (like consonant and vowel in spoken language), namely Movement (M) (the hands are in motion) and a contrasting type with the hands not in motion. Distinctive features are distributed among these phonological segments parallel to spoken language C and V. Liddell (1984) argued for two types of segments, movements (M) and holds (H). The remaining information – handshape, contact, orientation, location, and facial expression – is represented as features occurring simultaneously with each segment; thus there is a sequence of feature matrices within each sign. Signed syllables could then be of several types, e.g. M, MH, HM, HMH. Sandler (1986, 1989, 2008) proposed a different model, in which the segment opposition is between movement (M) and location (L), with handshape configuration on a separate autosegmental tier. The presence or absence of holds would be characterized by a binary feature in the location feature matrix; rather than having holds underlyingly, there will be some phonetic holds (list rhythm), some phonological holds (at utterance boundary), some morphological holds (ASL aspectual inflections may include final hold as part of their pattern), and some pragmatic holds (end of conversational turn, waiting for back-channel nod). For these models (e.g. Liddell and Johnson 1989; Sandler 1989, 2008; Sandler and Lillo-Martin 2006), the segments are at the top of the phonological trees containing the distinctive features, i.e. the mother nodes in a feature geometry model. That is, the syllable is composed of segments, which are characterized by relevant phonological features. In the other view, supported in Brentari (1998) for ASL and van der Kooij (2002) for NGT, movements are dynamic prosodic units with an autosegmental status similar to that of tones in contrastive tonal languages (e.g. Mandarin, Cantonese; chapter 45: the representation of tone; chapter 107: chinese tone sandhi). An important step leading to this alternative view was van der Hulst’s (1993) Head–Dependency Model, in which features that did not change during the sign were considered to be heads, with changing features treated as dependents (for detailed discussion see chapter 24: the phonology of movement in sign language). Head features


could be location, orientation, the active (selected) fingers, or their configuration. Movement itself was dependent on change of location (path movement) or hand configuration or orientation (local movement) (see similar arguments in Wilbur 1987). Brentari (1998) provides arguments against the notion of dependent/emerging movement, and instead identifies those features that do not change within the syllable as Inherent Features (IF) and those that do change as Prosodic Features (PF). From this perspective, ASL syllables contain distinctive features which may be accessed by phonological rules only in terms of their tiers and syllabic positions (e.g. syllable-initial, syllable-final), without further subdivision or organization. The segments are abstract timing slots at the bottom of the tree, onto which the phonological features are mapped, i.e. the terminal nodes in a feature geometry approach. Thus the question arises of how these two models should be distinguished.

6.2 Evidence related to syllable structure

Jantunen and Takkinen (2010) observe that there is no “direct phonetic evidence” to support the sequential segmental models. In fact, evidence against the segmental arrangement of internal syllable structure comes from a variety of experimental sources: tapping, slips of the hand, and backwards signing.

6.2.1 Tapping

Spoken syllables have a rhythmic focus at the onset of the nuclear vowel (Allen 1972). That is, native English speakers who tap in time to speech cluster their taps at the stressed vowel onset. In a comparable study of ASL, native Deaf signers, native hearing signers, and sign-naive hearing subjects were asked to “tap the rhythm” of five different three-sentence signed stories. Each story was presented 30 times. One story was repeated as the sixth condition (30 repetitions) for reliability; these conditions represent “tap the rhythm” (Allen et al. 1991). Finally, another one of the stories was repeated (30 repetitions) with new instructions to “tap the syllables.” Analysis of the tap responses in this condition showed that for all groups, taps are evenly distributed within syllables and do not differ from a chance distribution. That is, no syllable-internal rhythmic focus is apparent (Wilbur and Allen 1991). This result is crucial, and can only be predicted if the sign syllable is composed of constantly changing movement (smoothly changing muscular activity), meaning that there is no single point in time which attracts perceptual attention in the way that the onset of a spoken stressed vowel does, with large changes in muscular and acoustic energy (Allen 1972). The absence of such peaks is consistent with the proposal in the Prosodic Model that there is no further segmentation inside the sign syllable.
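The logic of the tap analysis can be made concrete with a toy uniformity statistic: if taps are spread evenly across positions within the syllable, a chi-square value computed over position bins stays small. The counts below are invented for illustration; they are not the Allen et al. (1991) data:

# Hand-rolled chi-square statistic for tap counts binned by relative
# position within the syllable. Uniform (chance-level) counts yield a
# small value, i.e. no syllable-internal rhythmic focus. Invented counts.

def chi_square_uniform(counts):
    expected = sum(counts) / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

taps_by_tenth = [11, 9, 10, 12, 8, 10, 11, 9, 10, 10]  # taps per tenth of syllable
print(round(chi_square_uniform(taps_by_tenth), 2))     # 1.2: consistent with uniformity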

6.2.2 Slips of the hand

Additional arguments against segmental models come from sign errors (Meier 1993; Wilbur 1993). English slips of the tongue tend to involve all the features of the segments involved (Fromkin 1971, 1973). If sign phonological features are distributed across segments, as suggested in segmental models, all features associated to each segment should be able to behave as a group. Therefore, parallel to speech, we might expect that the initial segments of two signs could switch with everything else remaining the same. In the corpus of 131 slips of the hand (Klima and Bellugi 1979), the predicted segmental switch did not occur. Instead,


observed slips involved handshape, location, orientation, or handedness (one vs. two hands) features, with handshape involvement being the most common. In one slip involving BLACK and WHITE, the handshape sequence in WHITE (open fingers and thumb changing to closed fingers and thumb touching at tips) is anticipated in BLACK, with its regular handshape completely replaced by the handshape change from WHITE, whereas its location (at the forehead) and movement direction (brushing across) remained unaffected (Klima and Bellugi 1979: 139). What did not happen was a complete replacement of the initial handshape, location, and orientation of BLACK with those of WHITE, which would have created a form that started at the chest, not at the forehead. In none of their examples did the features act together as a group, as would be predicted from segmental models.

6.2.3 Backwards signing

Backwards signing demonstrates that signers have access both to syllable sequences and to individual features within syllables which can be exchanged in temporal sequence, but not to units corresponding to segments as defined by the segmental models. This contrasts with the evidence from spoken language games and backwards speaking (Sherzer 1970; Cowan and Leavitt 1981, 1990; Cowan et al. 1982; Treiman 1983; Cowan et al. 1985; Cowan et al. 1987; Cowan 1989). Cowan et al. (1985: 680) report that fluent backwards talkers segment speech into “phonemic or syllabic” units, and then reverse their order. Their subjects fall into two groups, using either orthography or phonology as the basis for reversal. For example, for ‘terrace’, orthographic reversers would say /ekaret/, including the final “silent e” and adjusting the pronunciation of the letter “c” followed by back vowels to /k/. Phonological reversers would say /səret/, simply reversing phonological segment order. Data from backwards signing (Wilbur and Petersen 1997) provide evidence that signers treat monosyllabic signs in ways that are not compatible with segmental models. For example, Liddell’s (1984) representation for THINK in (5) consists of two segments MH, i.e. M-[approach](AP) followed by H with contact. The exchange of these segments should yield HM, i.e. H with contact followed by M-[approach]. Signers actually produce “contact” followed by “move away,” a result which would be predicted if the starting and ending locations of the movement are exchanged. That is, if movement is represented as a sequence of features, say [−contact] [+contact] (5b) (or [neutral] [forehead] rather than [approach]), then exchanging those features within-syllable to [+contact] [−contact] will result in the correct prediction of movement away from the forehead. Note that in Liddell’s model in (5a), there is a sequence of [−contact] [+contact], but these features cannot switch independently of the segments AP and H to which they are assigned. This is what we mean by having the segments at the top of the model.

(5) a. Liddell’s model of THINK

                                 THINK
        Segment                  AP    H
        Handshape                1     1
        Orientation              TI    TI
        Location                 FH    FH
        Contact                  –     +
        Non-manual markings      –     –

    b. Feature representation of change seen in backwards signing of THINK

            σ                                σ
        [−contact] [+contact]    →    [+contact] [−contact]

Incorrect predictions from segmental models are more obvious with the sign FLY, represented as a single M segment. The predicted backwards version should be the same as the original, because there is nothing available to exchange. Backwards signing shows that the direction of movement of FLY is reversed (Figure 56.10), comparable to the movement reversal in THINK. This remains an inexplicable fact in segmental models which treat movement as a single M segment with its own feature matrix. The only recourse is to change the representation to H1MH2 and then reverse the two Hs, but evidence for the presence of those H segments would need to be provided. In any case, the lack of analogy with spoken segment sequences can be seen: the backwards form of cat /kæt/ is /tæk/, with the vowel unchanged. But, clearly, in the backwards form of FLY the movement has changed. Any segmental model containing M segments will have the same problem, because it is the phonological features associated with the movement that must be available to signers to be exchanged. In backwards signing, movements are consistently reversed by exchanging end specification with start specification, as though initial and final features are exchanged on their own tiers: end location with beginning location; end handshape with beginning handshape; end orientation with beginning orientation. Wilbur and Petersen (1997) argue that movement is not inside the syllable, but rather that movement is the syllable, a conception of “syllable” that takes movement as a dynamic gesture with only starting and ending specifications for the movement trajectory and no further linguistically meaningful internal specifications (see current arguments from gestural phonology approaches, e.g. Mauk and Tyrone 2008; Tyrone and Mauk 2008). For speech, Bagemihl (1989) and Pierrehumbert and Nair (1995) argue that sub-syllabic constituents, such as onset and rime or coda, do not participate in language game behavior, with Pierrehumbert and Nair extending this observation to phonological theory in general, claiming that these sub-syllabic constituents do not exist and that a flat syllable model, such as that proposed by Clements and Keyser (1983), is adequate to account for the facts. Bagemihl (1989: 485f.) notes that language

Figure 56.10 The sign FLY, made (a) with normal movement and (b) in backwards signing

games in cultures lacking phonemic alphabet writing systems do not use segments, only syllables. Furthermore, children do not use segmental language games until they are exposed to those writing systems. He suggests that alphabetic writing systems may be necessary for the development of metalinguistic awareness of segments as opposed to syllables. Brentari (1998) capitalizes on these and other observations by separating the specifications that do not change during a syllable (Inherent Features) from those that do (Prosodic Features). Each syllable has two timing slots, one after the other, representing sequentiality, and the Prosodic Features are associated accordingly. The Inherent Features spread across both slots. Thus, her timing slots (sequentiality) are at the bottom of the tree, whereas for Sandler and Liddell, the sequentiality is at the top of their models. Jantunen and Takkinen (2010) review the sign language studies, and note that there is no evidence for internally structured sequential segmental syllables of the kind found in spoken languages (such as an onset–nucleus–coda distinction). Hence there is no justification for positing an intermediate level between segment (referring here to the timing slots in Brentari’s model) and syllable, or more than two segments/slots per syllable. Finally, another benefit of the Prosodic Model for sign languages is that it provides seamless access to the prosodic hierarchy above the syllable.
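This division of labor can be pictured with a toy data structure: inherent features hold constant across the two timing slots, each prosodic feature contributes a (start, end) pair on its own tier, and backwards signing reduces to swapping those endpoints tier by tier. The feature names and values below are my illustrative choices, not Brentari's formal notation:

# Sketch of the Prosodic Model's division of labor: Inherent Features
# spread over both timing slots, while Prosodic Features supply start
# and end specifications. Backwards signing then swaps each prosodic
# feature's endpoints on its own tier. Hypothetical feature values.

THINK = {
    "inherent": {"handshape": "1", "place": "forehead"},
    "prosodic": {"contact": ("-", "+")},          # move toward contact
}
FLY = {
    "inherent": {"handshape": "ILY"},
    "prosodic": {"location": ("start", "end")},   # single path movement
}

def backwards(sign):
    """Reverse each prosodic (start, end) pair; inherent features persist."""
    rev = {tier: vals[::-1] for tier, vals in sign["prosodic"].items()}
    return dict(sign, prosodic=rev)

print(backwards(THINK)["prosodic"])   # {'contact': ('+', '-')}: contact, then move away
print(backwards(FLY)["prosodic"])     # path direction reversed, handshape unchanged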

7 What does simultaneity in syllable structure buy us?

It is time to turn our attention to the benefits of the notion of simultaneity in sign syllables, that is, what it accounts for that the other approaches do not. There are two important concepts that come from this model of syllable representation, namely the notion of syllable weight and the analogue to spoken language sonority. In addition, aspectual reduplication can be seen to operate prosodically on verb roots, whether one or two syllables.

7.1 Syllable weight

Consider the difference in speech between syllables of different structures CV, CVC, CVCC, CCVC, CCVCC. It is easy to identify an increase in syllable weight as more consonants are added to these syllables, even without knowing what those consonants are or the type of vowel in the nucleus or whether there is a distinction between short and long vowels or open syllable vowel lengthening (chapter 57: quantity-sensitivity). If sign syllables do not have the same internal structure as spoken language syllables, then is there a syllable weight distinction in sign languages, and, if so, how does it manifest itself? Brentari (1998) argues that there is a weight distinction in ASL, based on the number of simultaneous movements specified for the syllable. Syllables with one movement are light and those with two are heavy; more technically, a weight unit is constructed for every prosodic foot. With this analysis, she can explain the pattern of verbs that can and cannot take reduplication to form nouns, i.e. respectively light and heavy verbs. For example, the sign FLY in Figure 56.10a above is able to form a repeated nominalization for AIRPLANE because it is a light (one-movement) syllable. In contrast, syllables


with complex movements (for example, a path movement combined with a handshape change) cannot undergo reduplicated nominalization, even if the verb qualifies semantically. Similarly, activity verbs that form activity nouns with the addition of the feature [trilled movement] must have light syllable structure to start with (Brentari 1998: 242–243). Brentari also shows the correlation between verb heaviness and preference for sentence-final position, that is, the word order is sensitive to the weight of the verb, which is determined by the number of movements in the syllable (and if reduplicated, the number of syllables). In Brentari’s analysis, the maximum number of weight units per syllable is two. Using data from Finnish Sign Language (FinSL), Jantunen (2005; see also Jantunen and Takkinen 2010) argues that an extended system is necessary for FinSL because more than two weight units per syllable are possible if one takes the non-manual movements into account – in fact, three or four may be possible. For Jantunen, a movement is complex (not simple) if more than one articulator is involved. He is then able to make a weight distinction between two monosyllabic signs, MUSTA ‘black’ which has only path movement, hence one weight unit, and UJO ‘shy’, which has local movement accompanied by a head movement, and hence has two weight units. Thus, even though both are monosyllabic, the difference in weight results from the non-manual head specification. Another benefit of this line of reasoning is that in FinSL there are lexical items which are made entirely with non-manual articulation (there are a few in ASL also) – these would be assigned a single weight unit, and that is the desired result. One additional generalization can be stated: both FinSL and ASL prefer syllables with simple movements over complex movements (Brentari 1998; Jantunen and Takkinen 2010). This generalization would be lost from a segmental perspective on syllable structure.
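The weight computation itself is just a count of the simultaneous movement components in the syllable. The set-based encoding below is my illustrative rendering of the FinSL contrast (MUSTA vs. UJO) and of the light/heavy split behind the reduplication facts; it is not Jantunen's or Brentari's formalism:

# Weight by simultaneous movements: one movement component = light,
# two or more = heavy (Brentari 1998), with Jantunen's (2005) extension
# counting non-manual movement as a further weight unit. Illustrative.

def weight_units(movements):
    """movements: set of simultaneous movement components in the syllable."""
    return len(movements)

MUSTA = {"path"}              # 'black': path movement only -> one weight unit
UJO = {"local", "head"}       # 'shy': local movement + head movement -> two
FLY_VERB = {"path"}

print(weight_units(MUSTA), weight_units(UJO))                # 1 2
print("light" if weight_units(FLY_VERB) == 1 else "heavy")   # light: can
# undergo reduplicated nominalization (FLY -> AIRPLANE)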

7.2 Sonority

Sonority is not built on syllable weight, as a mora-based generalization might suggest (Perlmutter 1992). Brentari suggests that sonority be approached as multidimensional salience. She suggests that sonority is correlated articulatorily with closeness of the articulator to the body’s midline, and that articulation closer to the midline has greater visual salience than articulation further away. Thus, strengthening of visual salience (Enhancement Theory; Stevens and Keyser 1989) by choice of articulator higher up on the hierarchy (and likewise, reduction by choice lower down) is captured directly by the Prosodic Model in a way that segmental models cannot (Brentari 1998: 135). She suggests the following hierarchy, repeated here from (1) above: (6)

Brentari hierarchy

    shoulder joint > elbow joint > wrist joint > base finger joints > non-base finger joints

This distinction can be observed in cases where a movement can be articulated by different articulators, that is, if a wrist movement and an elbow movement


can convey the same phonology. In such cases, if the movement is made by an articulator that is up the hierarchy, say an elbow joint replacing a wrist movement, then “proximalization” is said to occur, whereas if the articulator used is down the hierarchy, from elbow to wrist, then “distalization” is said to happen (see Mirus et al. 2001 for an empirical test of the factors involved). Crasborn (2001) likewise provided evidence in NGT for some of the factors relating to proximalization/distalization. An important methodological aspect of Crasborn’s study is that it looked at fluent L1 signers, whereas Mirus et al. looked at L2 acquisition, for which the presence of sonority effects might be obscured by developmental performance factors. Looking at data from British Sign Language and the echoing of manual movements by the signer’s mouth, Woll (2001) suggests that non-manual articulations can also provide insight into the sonority hierarchy. Based on detailed investigation of these aspects of Finnish Sign Language, Jantunen (2005, 2006, 2007) suggests that non-manual movement should be included in the hierarchy, as in (7), from Jantunen (2005: 56): (7)

upper body and head > hands (including Brentari’s hierarchy) > mouth

7.3 Another perspective on reduplication: Templatic vs. prosodic

Klima and Bellugi (1979) treat reduplication as part of a templatic approach to aspectual modification (chapter 100: reduplication).1 They refer to formational terms such as Planar locus (horizontal, vertical), Cyclicity (repetition), Direction (e.g. upward, downward), Geometric array (line, arc, circle, other arrangement), Quality (small, large), and Manner (continuous, hold, restrained). Thus, each morphological function (e.g. iterative, durative) involves a template composed of some of these formational features. But the choices of feature combinations in each template are not explained. Similarly, Sandler (chapter 24: the phonology of movement in sign language) argues for a templatic approach to reduplication, using additional M (movement) or L (location) segments to account for differences in movement type or final holding (what Klima and Bellugi refer to as “end marking”). Further discussion of reduplication with Klima and Bellugi led us to an interesting separation of function for spatial and temporal formational properties, with the spatial properties providing information about the arguments of the verb and the temporal/rhythmic properties providing information about aspect on the verb (Wilbur et al. 1983). We speculated that reduplication could be analyzed the same way as in spoken languages. It took over 20 years to work it out, but a standard Base–Copy reduplication approach can be applied to sign languages using the Prosodic Model (Wilbur 2005, 2009). In Brentari’s model (8), the node dominating syllables and associated features is the root (chapter 24: the phonology of movement in sign language).

1 It is important to distinguish repetition from reduplication. Here, repetition is viewed as prosodically driven, to fill the needs of a prosodic foot. Lexicalized repetition creates nouns from verbs, with only two formations of the lexically meaningful movement required. Reduplication is aspectually driven.

(8) Brentari’s Prosodic Model of syllable structure (tree rendered as an outline)

    root
      inherent features
        articulator
          manual (H1, H2)
          non-manual
        place of articulation
      prosodic features
        setting Δ, path, orientation Δ, aperture Δ
        (associated to two timing slots x x, which constitute the syllable σ)

Whether one syllable or two in a root, the Base for reduplication is the root, and the entire Base is copied. A simple example with Base and Copies is illustrated in Figure 56.11; planar difference indicates argument differences. There are, however, two modality differences between standard Base–Copy and what occurs in sign languages. First, multiple copies are common (Figure 56.11); indeed, a single copy implies “dual,” so aspectual reduplication typically has two or more copies of the Base. A second difference is that for many aspectual reduplications the hand must return to its initial position in the Base before it can articulate the Copy, thus the sequence is Base–Return–Copy. Aspectual reduplication is a combinatorial system of Base event (verb root), followed by Return to initial position, which reflects the time between the end of the Base event and the onset of the repeated event in aspects involving iteration (Wilbur 2005, 2009). Different aspects determine the size of the Return (smaller than, equal to or greater than the Base) (Table 56.1). Whether the shape includes a stopping point or is smooth (circular, elliptical) is dependent entirely on whether the verb is telic (contains a stop) or atelic (cannot stop) (Wilbur 2008).

Figure 56.11 Base–Copy reduplication schemata for apportionative external and internal. By permission of Ursula Bellugi, The Salk Institute for Biological Studies

Table 56.1 Combinations of Return options and Base event type yield aspectual inflections

    time between events    telic event root    atelic event root
    [return = root]        habitual            durative
    [return < root]        incessant           n/a
    [return > root]        iterative           continuative

For telic events (Figure 56.12), when Return and Base are equal in size, there is the appearance of equal prominence on both (habitual); when the Return is smaller, there is a tendency for the Base to reduce as well (incessant). When the Return is larger than the Base, an arc is added (the morpheme EXTRA; Wilbur 2008). Thus, featurally, incessant aspect has [repeat] [return] [less than], habitual [repeat] [return] [equal] and iterative [repeat] [return] [greater than]. For atelic events (Figure 56.13), only two of the three options are possible, and the Base must be curved. Both of these requirements result from the absence of stops in the formation of atelic roots. Durative has [repeat] [return] [equal] and continuative has [repeat] [return] [greater than]. The atelic equivalent of the incessant, [repeat] [return] [less than], is not possible, because shortened movements would be perceptually equivalent either to stops, creating confusion with telics, or to trilled movement, which has a different interpretation (stative, not repeated). The difference between the modifications shown in Figures 56.12 and 56.13 is the Base root, which reflects the event structure in the semantics of the verb.
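Table 56.1 is in effect a small lookup function from the root’s telicity and the relative size of the Return to the aspectual inflection, with one principled gap; the sketch below transcribes it directly:

# Direct transcription of Table 56.1: the aspectual inflection follows
# from the verb root's telicity and the size of the Return relative to
# the Base. The atelic/[return < root] cell is a principled gap.

ASPECT = {
    ("telic", "equal"):    "habitual",
    ("telic", "less"):     "incessant",
    ("telic", "greater"):  "iterative",
    ("atelic", "equal"):   "durative",
    ("atelic", "less"):    None,   # would be confusable with stops or trills
    ("atelic", "greater"): "continuative",
}

def inflect(event_type, return_size):
    return ASPECT[(event_type, return_size)]

print(inflect("telic", "greater"))   # iterative
print(inflect("atelic", "less"))     # None: the gap discussed above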

Figure 56.12 LOOK-AT and three inflections (incessant, habitual, iterative). By permission of Ursula Bellugi, The Salk Institute for Biological Studies

Figure 56.13 LOOK-stative, with durative and continuative aspects. By permission of Ursula Bellugi, The Salk Institute for Biological Studies


These observations are not captured by a templatic approach to reduplication. Sandler’s (1989, 2008) evidence for a templatic analysis (for signs like BLOW-TOP, FAINT, and SNOOP) is all compatible with the generalization that aspectual reduplication copies the verb root. She argues that for the sign BLOW-TOP, which is a two-syllable compound created from HEAD and EXPLODE-OFF, EXPLODE-OFF is copied but not HEAD. Her explanation is that the rightmost M is copied in reduplication, whereas in lexicalized monosyllabic (single M) signs like FAINT (originally from MIND+DROP), the whole form is repeated. But in the alternative analysis presented here it is expected that EXPLODE-OFF will be copied. Similarly, Sandler cites SNOOP (from NOSE+STICK-IN) as an exception, with the syllable associated with STICK-IN reduplicating, even though the entire form is monosyllabic (single M). An explanation for this is that the initial movement to the nose for NOSE is purely epenthetic (as for MOTHER (Figure 56.3) and one of the two versions of PARENTS in §4.3). That is, SNOOP starts its STICK-IN movement at the nose but does not return there for subsequent repetitions. If so, it might be appropriate to consider the initial location at the nose to be something akin to a prefixal location adjoined/cliticized onto the beginning location of STICK-IN, resulting from the compound-to-lexical-item reduction process, leaving STICK-IN to be copied by reduplication with its original location, as it shows up in subsequent repetitions. This discussion highlights the kind of phonological-level analysis that can be conducted with respect to syllables and their contents. Thus, factors other than phonology, such as verbal telicity and type of aspectual morphology, affect the final form of reduplicated signs (Wilbur 2005, 2009).

8 Conclusion

In this chapter, we started with two basic criteria for discussion of syllables in sign languages: linguistic meaningfulness and reliable measurement. To address the requirements of these criteria, we introduced the reader to relevant historical discussions of the status of syllables in sign languages. The key is that movement is the defining feature of a phonological syllable in sign languages. Then, we provided evidence that syllables in sign languages can be reliably measured. As for the criterion of linguistic meaningfulness, we reviewed several phenomena for which one has to make reference to the syllable for a reasonable account. Among those discussed were the phonologization of fingerspelled loan words, contact metathesis, handshape changes within and between signs, and some prosodic constraints. We then turned to the debate concerning the formal representation of sign syllables. We reviewed the sequential and simultaneous models of syllable representation. The data presented – tapping, slips of the hand, and backwards signing – strongly favor the Prosodic Model proposed by Brentari (1998), resulting in a syllable model that is internally different from those that exist in speech. We then considered the implications of this conclusion, especially since it goes against expectations of similarity between signed and spoken languages. Issues of sonority, syllable weight, and Base–Copy reduplication indicate that the syllable performs similarly in sign language and spoken language phonologies despite the internal differences in organization.


To get to the point where we can use syllables to explain other phenomena, we have needed a consistent and well-developed syllable model that makes empirically testable claims. This model has been tested in a variety of ways reviewed in this chapter, and more has been omitted for reasons of space (variability of non-manual marking; judgments of well-formed syllables; Brentari and Wilbur 2008). The critical feature of the model is that it is a prosodic model, and the lowest level of the prosodic hierarchy is the syllable, for sign languages as well as spoken languages.

ACKNOWLEDGMENTS

The preparation of this chapter was funded in part by NIH-NIDCD: “Prosodic features of American Sign Language” (1993–98); NIH-NIDCD: “Modeling the nonmanuals of American Sign Language” (2004–10); NSF: “A basic grammar of Croatian Sign Language” (2004–10); NSF: “Syllables in American Sign Language” (1984–87). A special acknowledgment to Diane Brentari for discussion in the process of developing this chapter, and to Kadir Gökgöz for suggestions and revisions.

REFERENCES

Adams, Corinne. 1979. English speech rhythm and the foreign learner. The Hague: Mouton.
Akamatsu, Carol. 1982. The acquisition of fingerspelling in pre-school children. Ph.D. dissertation, University of Rochester.
Akamatsu, Carol. 1985. Fingerspelling formulae: A word is more or less than the sum of its letters. In Stokoe & Volterra (1985), 126–132.
Allen, George. 1972. The location of rhythmic stress beats in English speech, Parts I & II. Language and Speech 15. 72–100, 179–195.
Allen, George, Ronnie Wilbur & Brenda Schick. 1991. Aspects of rhythm in American Sign Language. Sign Language Studies 72. 297–320.
Bagemihl, Bruce. 1989. The crossing constraint and “backwards languages.” Natural Language and Linguistic Theory 7. 481–549.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Bellugi, Ursula & Susan Fischer. 1972. A comparison of sign language and spoken language: Rate and grammatical mechanisms. Cognition 1. 173–200.
Blevins, Juliette. 1993. The nature of constraints on the nondominant hand in ASL. In Coulter (1993), 43–61.
Brentari, Diane. 1990a. Licensing in ASL handshape change. In Lucas (1990), 57–68.
Brentari, Diane. 1990b. Underspecification in American Sign Language phonology. Proceedings of the Annual Meeting, Berkeley Linguistics Society 16. 46–56.
Brentari, Diane. 1993. Establishing a sonority hierarchy in American Sign Language: The use of simultaneous structure in phonology. Phonology 10. 281–306.
Brentari, Diane. 1994. Prosodic constraints in American Sign Language. In Helen Bos & Trude Schermer (eds.) Sign Language Research 1994, 39–51. Hamburg: Signum Press.
Brentari, Diane. 1996. Trilled movement: Formal representation and phonetic representation. Lingua 98. 43–71.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA: MIT Press.
Brentari, Diane & Laurinda Crossley. 2002. Prosody on the hands and face: Evidence from American Sign Language. Sign Language and Linguistics 5. 105–130.


Brentari, Diane & Howard Poizner. 1994. A phonological analysis of a Deaf Parkinsonian signer. Language and Cognitive Processes 9. 69–100.
Brentari, Diane & Ronnie Wilbur. 2008. A cross-linguistic study of word segmentation in three sign languages. In Ronice Müller de Quadros (ed.) Sign languages: Spinning and unraveling the past, present and future – Theoretical Issues in Sign Language Research 9, 48–63. Petrópolis, Brazil: Editora Arara Azul.
Chinchor, Nancy. 1978. The syllable in ASL. Paper presented at the MIT Sign Language Symposium.
Chinchor, Nancy. 1981. Numeral incorporation in American Sign Language. Ph.D. dissertation, Brown University.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Corina, David. 1990. Handshape assimilation in hierarchical phonological representation. In Lucas (1990), 27–49.
Coulter, Geoffrey. 1982. On the nature of ASL as a monosyllabic language. Paper presented at the 56th Annual Meeting of the Linguistic Society of America, San Diego.
Coulter, Geoffrey. 1990. Emphatic stress in ASL. In Fischer & Siple (1990), 109–126.
Coulter, Geoffrey (ed.) 1993. Current issues in ASL phonology. New York: Academic Press.
Covington, Virginia. 1973. Features of stress in American Sign Language. Sign Language Studies 2. 39–58.
Cowan, Nelson. 1989. Acquisition of Pig Latin: A case study. Journal of Child Language 16. 365–386.
Cowan, Nelson & Lewis Leavitt. 1981. Talking backward: Exceptional speech play in late childhood. Journal of Child Language 9. 481–491.
Cowan, Nelson & Lewis Leavitt. 1990. Speakers’ access to the phonological structure of the syllable in word games. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26. 45–59.
Cowan, Nelson, Lewis Leavitt, Dominic Massaro & Raymond Kent. 1982. A fluent backward talker. Journal of Speech and Hearing Research 25. 48–53.
Cowan, Nelson, Martin Braine & Lewis Leavitt. 1985. The phonological and metaphonological representation of speech: Evidence from fluent backward talkers. Journal of Memory and Language 24. 679–698.
Cowan, Nelson, Cristy Cartwright, Carrie Winterowd & Molly Sherk. 1987. An adult model of preschool children’s speech memory. Memory and Cognition 15. 511–517.
Crasborn, Onno. 2001. Phonetic implementation of phonological categories in Sign Language of the Netherlands. Ph.D. dissertation, University of Leiden.
Fischer, Susan D. & Patricia Siple (eds.) 1990. Theoretical issues in sign language research, vol. 1: Linguistics. Chicago: University of Chicago Press.
Friedman, Lynn A. 1974. On the physical manifestation of stress in the American Sign Language. Unpublished ms., University of California, Berkeley.
Friedman, Lynn A. 1976. Phonology of a soundless language: Phonological structure of American Sign Language. Ph.D. dissertation, University of California, Berkeley.
Friedman, Lynn A. (ed.) 1977. On the other hand: New perspectives on American Sign Language. New York: Academic Press.
Frishberg, Nancy. 1978. The case of the missing length. Communication and Cognition 11. 57–67.
Fromkin, Victoria A. 1971. The non-anomalous nature of anomalous utterances. Language 47. 27–52.
Fromkin, Victoria A. 1973. Slips of the tongue. Scientific American 229. 109–117.
Geraci, Carlo. 2009. Epenthesis in Italian Sign Language. Sign Language and Linguistics 12. 3–51.
Green, Kerry. 1984. Sign boundaries in American Sign Language. Sign Language Studies 42. 65–91.


Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA: MIT Press.
Hoequist, Charles, Jr. 1983. Syllable duration in stress-, syllable- and mora-timed languages. Phonetica 40. 203–237.
Hulst, Harry van der. 1993. Units in the analysis of signs. Phonology 10. 209–241.
Jantunen, Tommi. 2005. Mistä on pienet tavut tehty? Analyysi suomalaisen viittomakielen tavusta prosodisen mallin viitekehyksessä [What are the little syllables made of? A Prosodic Model account of the Finnish Sign Language syllable]. Licentiate thesis, University of Jyväskylä, Finland.
Jantunen, Tommi. 2006. The complexity of lexical movements in FinSL. SKY Journal of Linguistics 19. 335–344.
Jantunen, Tommi. 2007. Tavu suomalaisessa viittomakielessä [The syllable in Finnish Sign Language] (with English abstract). Puhe ja kieli 27. 109–126.
Jantunen, Tommi & Ritva Takkinen. 2010. Syllable structure in sign language phonology. In Diane Brentari (ed.) Sign languages, 312–331. Cambridge: Cambridge University Press.
Johnson, Robert. 1986. Metathesis in American Sign Language. Paper presented at the Conference on Theoretical Issues in Sign Language Research, Rochester, NY.
Kegl, Judy & Ronnie Wilbur. 1976. When does structure stop and style begin? Syntax, morphology and phonology vs. stylistic variation in American Sign Language. Papers from the Annual Regional Meeting, Chicago Linguistic Society 12. 376–396.
Klima, Edward S. & Ursula Bellugi. 1979. The signs of language. Cambridge, MA: Harvard University Press.
Kooij, Els van der. 2002. Phonological categories in Sign Language of the Netherlands: The role of phonetic implementation and iconicity. Ph.D. dissertation, University of Leiden.
Kooij, Els van der & Onno Crasborn. 2008. Syllables and the word-prosodic system in Sign Language of the Netherlands. Lingua 118. 1307–1327.
Liddell, Scott K. 1978. Non-manual signals and relative clauses in ASL. In Siple (1978), 59–90.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. 1984. THINK and BELIEVE: Sequentiality in American Sign Language. Language 60. 372–399.
Liddell, Scott K. & Robert E. Johnson. 1989. American Sign Language: The phonological base. Sign Language Studies 64. 197–277.
Lucas, Ceil (ed.) 1990. Sign language research: Theoretical issues. Washington, DC: Gallaudet University Press.
Mauk, Claude & Martha Tyrone. 2008. Sign lowering as phonetic reduction in American Sign Language. Proceedings of the 8th International Speech Production Seminar, 185–188. Strasbourg. Available (September 2010) at http://issp2008.loria.fr/proceedings.html.
Meier, Richard. 1993. A psycholinguistic perspective on phonological segmentation in sign and speech. In Coulter (1993), 169–188.
Miller, Christopher. 1997. Phonologie de la langue des signes québécoise: Structure simultanée et axe temporel. Ph.D. dissertation, Université du Québec à Montréal.
Mirus, Gene, Christian Rathmann & Richard Meier. 2001. Proximalization and distalization of sign movement in adult learners. In Valery L. Dively, Melanie Metzger, Sarah Taub & Anne Marie Baer (eds.) Signed languages: Discoveries from international research, 103–119. Washington, DC: Gallaudet University.
Newkirk, Donald. 1979. The form of the continuative aspect on ASL verbs. Unpublished ms., Salk Institute for Biological Studies, La Jolla, CA. Reprinted 1998 in Sign Language and Linguistics 1. 75–80.
Newkirk, Donald. 1980. Rhythmic features of inflection in American Sign Language. Unpublished ms., Salk Institute for Biological Studies, La Jolla, CA. Reprinted 1998 in Sign Language and Linguistics 1. 81–100.


Newkirk, Donald. 1981. On the temporal segmentation of movement in American Sign Language. Unpublished ms., Salk Institute for Biological Studies, La Jolla, CA. Reprinted 1998 in Sign Language and Linguistics 1. 59–97.
Padden, Carol. 1993. Response to Sandler’s “Linearization of phonological tiers in ASL.” In Coulter (1993), 131–134.
Perlmutter, David. 1992. Sonority and syllable structure in American Sign Language. Linguistic Inquiry 23. 407–442.
Pierrehumbert, Janet B. & Rami Nair. 1995. Word games and syllable structure. Language and Speech 38. 77–114.
Sandler, Wendy. 1986. The spreading hand autosegment of American Sign Language. Sign Language Studies 50. 1–28.
Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and non-linearity in American Sign Language. Dordrecht: Foris.
Sandler, Wendy. 1993. A sonority cycle in American Sign Language. Phonology 10. 243–279.
Sandler, Wendy. 2008. The syllable in sign language: Considering the other natural modality. In Barbara Davis & Kristine Zajdo (eds.) The syllable in speech production, 379–408. New York: Taylor & Francis.
Sandler, Wendy & Diane Lillo-Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press.
Schmidt, Richard & Tim Lee. 2005. Motor control and learning: A behavioral emphasis. Urbana, IL: Human Kinetics.
Sherzer, Joel. 1970. Talking backwards in Cuna: The sociological reality of phonological descriptions. Southwestern Journal of Anthropology 26. 343–353.
Siple, Patricia (ed.) 1978. Understanding language through sign language research. New York: Academic Press.
Stevens, Kenneth N. & Samuel J. Keyser. 1989. Primary features and their enhancement in consonants. Language 65. 81–106.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communication systems of the American deaf. Silver Spring, MD: Linstok Press.
Stokoe, William C. & Virginia Volterra (eds.) 1985. SLR ’83: Proceedings of the 3rd International Symposium on Sign Language Research. Rome: Consiglio Nazionale delle Ricerche.
Supalla, Ted & Elissa Newport. 1978. How many seats in a chair? The derivation of nouns and verbs in American Sign Language. In Siple (1978), 91–132.
Treiman, Rebecca. 1983. The structure of spoken syllables: Evidence from novel word games. Cognition 15. 49–74.
Tyrone, Martha & Claude Mauk. 2008. Sign lowering in ASL: The phonetics of wonder. Paper presented at the SignTyp Conference, Storrs, CT.
Tyrone, Martha, Louis Goldstein & Gaurav Mathur. 2007. Movement kinematics and prosody in American Sign Language. Poster presented at the 20th Annual CUNY Conference on Human Sentence Processing, La Jolla, CA.
Valli, Clayton & Ceil Lucas. 1992. Linguistics of American Sign Language. Washington, DC: Gallaudet University.
Weast, Traci. 2008. Questions in American Sign Language: A quantitative analysis of raised and lowered eyebrows. Ph.D. dissertation, University of Texas, Arlington.
Wilbur, Ronnie. 1979. American Sign Language and sign systems: Research and applications. Baltimore: University Park.
Wilbur, Ronnie. 1985. Towards a theory of “syllable” in signed languages: Evidence from the numbers of Italian Sign Language. In Stokoe & Volterra (1985), 160–174.
Wilbur, Ronnie. 1987. American Sign Language: Linguistic and applied dimensions. Boston: College Hill Press.
Wilbur, Ronnie. 1990a. An experimental investigation of stressed sign production. International Journal of Sign Language 1. 41–60.


Wilbur, Ronnie. 1990b. Why syllables? What the notion means for ASL research. In Fischer & Siple (1990), 81–108.
Wilbur, Ronnie. 1993. Segments and syllables in ASL phonology. In Coulter (1993), 135–168.
Wilbur, Ronnie. 1994. Eyeblinks and ASL phrase structure. Sign Language Studies 84. 221–240.
Wilbur, Ronnie. 1999a. Metrical structure, morphological gaps, and possible grammaticalization in ASL. Sign Language and Linguistics 2. 217–244.
Wilbur, Ronnie. 1999b. Stress in ASL: Empirical evidence and linguistic issues. Language and Speech 42. 229–250.
Wilbur, Ronnie. 2005. A reanalysis of reduplication in American Sign Language. In Bernhard Hurch (ed.) Studies in reduplication, 593–620. Berlin & New York: Mouton de Gruyter.
Wilbur, Ronnie. 2008. Complex predicates involving events, time and aspect: Is this why sign languages look so similar? In Josep Quer (ed.) Signs of the time: Selected papers from TISLR 2004, 217–250. Hamburg: Signum.
Wilbur, Ronnie. 2009. Effects of varying rate of signing on ASL manual signs and nonmanual markers. Language and Speech 52. 245–285.
Wilbur, Ronnie & George Allen. 1991. Perceptual evidence against internal structure in ASL syllables. Language and Speech 34. 27–46.
Wilbur, Ronnie & Susan Nolen. 1986. Duration of syllables in American Sign Language. Language and Speech 29. 263–280.
Wilbur, Ronnie & Lesa Petersen. 1997. Backwards signing and ASL syllable structure. Language and Speech 40. 63–90.
Wilbur, Ronnie & Brenda Schick. 1987. The effects of linguistic stress on ASL signs. Language and Speech 30. 301–323.
Wilbur, Ronnie, Edward Klima & Ursula Bellugi. 1983. Roots: On the search for the origins of signs in ASL. Papers from the Annual Regional Meeting, Chicago Linguistic Society 19. 314–336.
Wilcox, Sherman. 1992. The phonetics of fingerspelling. Amsterdam & Philadelphia: John Benjamins.
Woll, Bencie. 2001. The sign that dares to speak its name: Echo phonology in British Sign Language. In Penny Boyes Braem & Rachel Sutton-Spence (eds.) The hands are the head of the mouth: The mouth as articulator in sign languages, 87–98. Hamburg: Signum.

57 Quantity-sensitivity

Draga Zec

1 Introduction

Quantity-sensitivity is an important property of prosodic constituents, which are subclassified along this dimension as either light or heavy. In a typical hierarchical organization of prosodic units, as in (1) (Selkirk 1978, 1980; Nespor and Vogel 1986), each of the prosodic levels may be instantiated by constituents that vary in length, segment quality, or structural complexity (see chapter 33: syllable-internal structure; chapter 40: the foot; chapter 51: the phonological word; chapter 84: clitics; chapter 50: tonal alignment).

(1) Prosodic hierarchy
    Prosodic phrase
    Prosodic word
    Foot
    Syllable

This variation, in at least some of its aspects, introduces distinctions in quantity among constituents at the same level of the hierarchy, evidenced by distinctions in phonological behavior. While quantity-sensitivity is most clearly manifested at the level of the syllable, other prosodic levels exhibit this property as well. Quantity-sensitivity characterizes a wide range of phonological phenomena, including stress, tone, poetic meter, and various prosodic effects on morphosyntax. Moreover, quantity-sensitivity can be manifested either as a binary or as a scalar property. For these and other reasons to be addressed in this chapter, prosodic quantity needs to have its place in the formal representation of prosody, and is a central issue in any discussion of phonological representations.

The chapter is organized as follows: §2 addresses crucial aspects of weight-sensitivity in the syllable, providing a typology of weight patterns supported by a wide range of attested cases. §3 focuses on formal representations of the syllable and its weight, while §4 addresses the relevance of vowel length for quantity-sensitivity. §5 shows that weight distinctions could be binary in some languages, and multivalued in others. §6 documents inconsistencies in weight patterns, both with respect to phonological processes and phonological contexts. §7 addresses quantity-sensitivity in feet, focusing on binary patterns, and §8 focuses on scalar patterns of quantity. §9 touches upon quantity-sensitivity at the higher levels of the prosodic hierarchy, in prosodic words and prosodic phrases. §10 offers some remarks on markedness, and §11 concludes.

2 The syllable and quantity-sensitivity

Phonological quantity is primarily associated with the syllable. One of the traditional classifications of syllables is into those that are light and those that are heavy. This distinction is motivated on empirical grounds and is brought to relief by a number of quantity-sensitive phonological phenomena, including stress, tone, and poetic meter. We focus here only on those languages that do exhibit quantity-sensitivity at the syllable level, as this is not a universal prosodic property. In §2.1 we present a paradigm case of syllable weight, and then, in §2.2, turn to the typology of syllable-based quantity-sensitivity, supported by a wide range of phonological phenomena.

2.1 A paradigm case of quantity-sensitivity

Quantity-sensitivity has figured prominently in studies of classical languages and their prosody (Allen 1973). Latin provides a paradigm case of quantity-sensitivity, already known to the early grammarians such as Quintilian. Latin stress (chapter 39: stress: phonotactic and phonetic evidence) is quantity-sensitive, as illustrated in (2). If the penultimate syllable is heavy it is stressed, as in (2a), but if it is light the antepenultimate syllable is stressed, as in (2b).

(2) Latin stress
    a. If the penultimate syllable is heavy, it is stressed.
       for'tūna     'fortune (nom sg)'
       'gaudēns     'rejoicing (nom sg)'
       gau'dentem   'rejoicing (acc sg)'
    b. If the penultimate syllable is light, the antepenultimate is stressed.
       'anima       'soul (nom sg)'

Syllables that function as light are of the CV type, containing a short vowel, as the penultimate syllable in (2b). Syllables that function as heavy are more diverse, as shown in (2a). They are either of the CVV type, containing a long vowel or a diphthong, or of the CVC type, with a short vowel followed by a consonant. Significantly, syllables that are functionally equivalent may differ in their segmental content (chapter 54: the skeleton). Within the set of heavy syllables, differences in segmental content are found not only across CVV and CVC syllables but also within the class of CVV syllables, which may contain either a long vowel or a diphthong. In this case, as in many others, onset consonants are excluded from the computation of weight (chapter 55: onsets).

This same pattern of syllable weight also figures in Latin poetic meter. In one of the meters of Horace's Odes, known as the First Asclepiad (borrowed from Greek), a line of verse contains a sequence of metrical positions that admit either heavy or light syllables (marked as – and ∪, respectively, with ‖ marking the caesura), as in (3). In (3a), the first three metrical positions, all heavy, are filled with CVV syllables that contain either a diphthong or a long vowel, while in (3b) these same metrical positions are filled with CVC syllables. The light metrical positions, such as the next two, are filled in both lines with CV syllables. (The remaining metrical positions are filled in the same fashion, with the exception of the last vowel of tollere in (3b), which is elided, and therefore not scanned.)

(3) Latin poetic meter: Horace, First Asclepiad
    a. – – – ∪ ∪ – ‖ – ∪ ∪ – ∪ x
       Maecēnas atavīs ēdite rēgibus
       'O Maecenas, born from kingly ancestors!' (Odes 1.1.1)
    b. – – – ∪ ∪ – ‖ – ∪ ∪ – ∪ x
       certat tergeminīs tollere honōribus
       'vies to lift [him] with triple magistracies.' (Odes 1.1.8)

Thus, in Latin, both stress and poetic meter are sensitive to distinctions in quantity, with weight characterized identically in the two phonological subsystems.
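The stress rule in (2) is simple enough to state procedurally. The following is a minimal Python sketch, purely illustrative and not from this chapter: it assumes the word has already been syllabified and each syllable classified as heavy ('H') or light ('L').

def latin_stress(weights):
    # Return the index of the stressed syllable, given per-syllable weights.
    n = len(weights)
    if n <= 2:
        return 0              # mono- and disyllables: initial stress
    if weights[-2] == 'H':
        return n - 2          # a heavy penult is stressed, as in (2a)
    return n - 3              # otherwise the antepenult, as in (2b)

# latin_stress(['H', 'H', 'L']) -> 1   (for'tuuna)
# latin_stress(['L', 'L', 'L']) -> 0   ('anima)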

2.2 Patterns of quantity-sensitivity

The system of quantity in Latin exemplified in (2) and (3) was taken in much relevant work to be the standard mode of computing quantity, with broad empirical support. This is how syllable weight was characterized in Kuryłowicz (1948) and later in Newman (1972), among others. Newman (1972), in particular, identifies quantity-sensitivity in a number of languages, all exemplifying the pattern of quantity with light CV and heavy CVV and CVC syllables. In addition to Latin, the list includes Classical Greek, Finnish, Estonian, Classical Arabic, and Gothic, as well as three Chadic languages, Bolanci, Kanakuru, and Hausa. In fact, a number of researchers stated important generalizations about quantity-sensitivity solely in terms of the Latin pattern of weight (e.g. Kiparsky 1979, 1981; Halle and Vergnaud 1980; Clements and Keyser 1983). The Latin pattern, however, is not the only empirically attested mode of computing quantity, as shown in Hyman's (1977) broad survey of stress systems and in much later work. In what follows we present the range of quantity patterns that have been empirically attested, and a typology of weight distinctions.

2.2.1 Weight patterns: A typology

McCarthy (1979) made the crucial theoretical statement that quantity-sensitivity can be instantiated in more than one way. In addition to the Latin weight pattern, with light CV and heavy CVV and CVC syllables as in (4a), henceforth type 1, there is a further weight pattern, one in which only CVV syllables are heavy and both CV and CVC syllables are light, as in (4b), henceforth type 2. A number of languages were identified to belong to this weight type: for example, Huasteco Mayan in Hyman (1977) and Yidiny and Tiberian Hebrew in McCarthy (1979). Many more such cases figure in Hayes (1980, 1995) and Gordon (2006).

(4) Possible weight patterns (first approximation)
    a. Type 1: heavy CVV, CVC vs. light CV
    b. Type 2: heavy CVV vs. light CV, CVC

Thus the weight of a CVC syllable is "parameterized": while in (4a) such syllables form a natural class with CVV, in (4b) they form a natural class with CV. This is crucially due to the status of the final consonant in a CVC syllable, which contributes to weight in languages of type (4a), but not in those of type (4b). McCarthy (1979) further identifies an important implicational relation: a language with heavy CVC syllables also has heavy CVV syllables. This predicts an impossible weight pattern: no language can have heavy CVC but light CVV syllables. But the computation of quantity can be even more fine-grained than in (4), and in order to show this we invoke the sonority of segments. In particular, vowels are more sonorous than consonants, and within the class of consonants, sonorants (chapter 8: sonorants) are more sonorous than obstruents. (For a general discussion of sonority, see chapter 49: sonority.) In addition to the two weight systems in (4), one in which all consonants contribute to weight, and one in which no consonants contribute to weight, there is also a type 3 system, in which only some consonants contribute to weight (see Prince 1983; Zec 1988, 1995). In such split systems, the subset of consonants contributing to weight is generally the sonorants. Such a case is exemplified by Kwakw'ala, to be discussed in §2.2.2, in which heavy syllables are CVV and CVR (sonorant-closed), while light syllables are CV and CVO (obstruent-closed). This yields the implicational relation that if CVO syllables are heavy, so are CVR syllables, and excludes the impossible system with CVV and CVO syllables being heavy and CV and CVR syllables being light (Zec 1995). Furthermore, while other splits in the hierarchy should in principle be possible, say, with liquids being weight-bearing to the exclusion of obstruents and nasals, such systems have not been attested. Only major splits within the sonority hierarchy appear to be exploited for distinctions in quantity, in particular those that correspond to splits imposed by the major class features (Chomsky and Halle 1968).

To summarize, a basic typology of weight patterns is given in (5). In type 1 languages, all segments contribute to weight, so that both CVV and CVC syllables are heavy; in type 2 languages only vowels are weight-bearing, which makes CVC syllables light; and in type 3 languages vowels and sonorant consonants are weight-bearing, to the exclusion of obstruents. The set of weight-bearing segments follows the sonority hierarchy: if a less sonorous segment contributes to weight, so does a more sonorous segment.

(5) Typology of weight patterns

            Heavy       Light
    Type 1  CVV, CVC    CV
    Type 2  CVV         CV, CVC
    Type 3  CVV, CVR    CV, CVO
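The typology in (5) can be recast as a single parameter: the set of segments licensed to project a second mora. A minimal Python sketch (illustrative only; rhymes are given as strings of segment classes, 'V' for vowels, 'R' for sonorant consonants, 'O' for obstruents, and onsets are simply never passed in):

# Which segment classes may project a (second) mora, per weight type in (5).
MORAIC = {1: set('VRO'), 2: set('V'), 3: set('VR')}

def syllable_weight(rhyme, weight_type):
    # The nucleus vowel always projects a mora; further rhyme segments
    # project one only if they are moraic in this type of language.
    moras = 1 + sum(seg in MORAIC[weight_type] for seg in rhyme[1:])
    return 'H' if moras >= 2 else 'L'

# syllable_weight('VO', 1) -> 'H'   (CVC heavy in type 1)
# syllable_weight('VO', 2) -> 'L'   (CVC light in type 2)
# syllable_weight('VR', 3) -> 'H'   (CVR heavy in type 3)

Because the moraic sets are nested (V within VR within VRO), the implicational relations stated in (6) below fall out automatically under this parameterization.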

A special case of type 2 languages is those that lack CVC syllables. In a syllable inventory including only CV and CVV syllables, the former are light and the latter are heavy, as in Fijian (Hayes 1995; among others). Likewise, a special case of type 3 languages consists of those that lack CVO syllables, with an inventory that includes light CV and heavy CVV and CVR syllables, as in Tiv (Zec 1995); or Manam, in which the set of heavy syllables includes CVV and CVN (nasal), but excludes syllables closed with liquids (Lichtenberk 1983; Buckley 1998).

Crucially, the onset is excluded from the computation of weight: the number of segments in the onset does not affect the weight status of a syllable. This empirically grounded property of onsets will need to be captured in the representation of the syllable, an issue to be addressed in §3. Although broadly attested, however, this is not a universal property of onsets; see chapter 55: onsets for cases of weight-sensitive onsets, which constitute counterexamples to this claim.

To conclude, it has been shown that there is a measure of language-specificity, with different modes of quantity computation employed in different languages. It has also been shown that there is an implicational relation across occurring weight patterns, or, more specifically, across the sets of heavy syllables in different languages, as in (6):

(6) Implicational relations among heavy syllables
    a. If a language has heavy CVC syllables, it also has heavy CVV syllables.
    b. If a language has heavy CVO syllables, it also has heavy CVR syllables.

2.2.2 Weight patterns: Case studies

The three patterns of quantity in (5) are documented below with two types of quantity-sensitive phonological phenomena: stress and tone. We begin with stress, which provides the most striking cases of quantity-sensitivity. It should be noted, though, that only some stress systems are quantity-sensitive. According to Gordon's (2006: 20–21) extensive survey, based on 408 languages, 310 languages have culminative accent systems. Out of those, 136 (43.9 percent) exhibit quantity-sensitivity, and 86 belong to one of the three weight systems we exemplify here. Languages with quantity-sensitive stress show a clear preference for placing stress on heavy syllables (cf. Hyman 1977; Hayes 1980, 1995; Halle and Vergnaud 1987; Halle and Idsardi 1995). Simply stated, heavy syllables attract stress (Prince 1990). Moreover, languages with quantity-sensitive stress systems are of either type 1 or type 2, and rarely of type 3. The Latin stress pattern illustrated in §2.1, which belongs to type 1, is found in a number of languages. Out of 86 languages with quantity-sensitive stress in Gordon's survey, 42 languages are of type 1. It is found, for example, in Modern Classical Arabic (as described in Ryding 2005), where stress falls on the penultimate heavy syllable, CVV or CVC, otherwise on the antepenultimate syllable. Note, however, the pattern in Classical Arabic, where stress falls on the rightmost (non-final) heavy syllable, as in (7a), otherwise on the first syllable, as in (7b) (McCarthy 1979, and the references therein).¹

¹ Final syllables have a special status, in at least two respects. Stress does not fall on final CVC syllables, but CVVC and CVCC syllables, which are only found word-finally, do bear stress. The special behavior of final elements is a more general issue, to be addressed in §6.2.

(7) Type 1: Classical Arabic
    a. ki'taabun     'book (nom sg)'
       manaa'diilu   'kerchiefs (nom)'
       ju'ʃaariku    'he participates'
    b. 'mamlakatun   'kingdom (nom sg)'
       'kataba       'he wrote'
       'balaħatun    'date (nom sg)'
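The Classical Arabic pattern in (7) differs from the Latin rule only in its search domain. A sketch under the same assumptions as before (final syllables are simply skipped, setting aside the complications in note 1):

def arabic_stress(weights):
    # Rightmost non-final heavy syllable, otherwise the first syllable.
    for i in range(len(weights) - 2, -1, -1):
        if weights[i] == 'H':
            return i
    return 0

# arabic_stress(['L', 'H', 'H']) -> 1   (ki'taabun)
# arabic_stress(['L', 'L', 'L']) -> 0   ('kataba)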

The type 1 weight pattern is also noted in English, although the overall stress system is rife with idiosyncrasies. A small portion of the English lexicon, the set of underived nouns, has a relatively regular stress pattern: stress falls on the heavy penult, either CVV as in e'litist, ma'rina, and Ari'zona, or CVC as in a'genda, a'malgam, and co'nundrum; otherwise on the antepenult, as in 'discipline, 'labyrinth, and A'merica (Hayes 1982). This stress pattern is again reminiscent of Latin. Many more type 1 stress systems are documented in Hayes (1995) and Gordon (2006).

Quantity-sensitive stress systems of type 2 are evidenced in a wide range of languages, just like type 1 (Hayes 1995; Gordon 2006; among others): 40 out of 86 quantity-sensitive stress systems in Gordon's survey. This pattern is found, for example, in the Mongolian language Buriat (Poppe 1960; Walker 1996), illustrated in (8): stress falls on the initial syllable in words with no long vowels, as in (8a), and on the rightmost non-final heavy syllable in words with more than one long vowel or diphthong, as in (8b). If a word has only one CVV syllable, stress falls on that syllable even if it is final, as in (8c). Note that CVC syllables occur in the language, yet do not attract stress; see, for example, the third syllable in /ta'ruulagdaxa/, in (8b).

(8) Type 2: Buriat
    a. 'xada           'mountain'
    b. mo'rjooroo      'by means of his own horse'
       dalai'gaaraa    'by one's own sea'
       ta'ruulagdaxa   'to be adapted to'
    c. xa'daar         'through the mountain'

Another type 2 system is Huasteco Mayan (Larsen and Pike 1949; Hyman 1977; Hayes 1995: 296): stress falls on the rightmost CVV syllable, otherwise on the initial CV syllable. Again, CVC syllables pattern with CV rather than CVV syllables. And in Aguacatec Mayan (McArthur and McArthur 1956; Hayes 1980, 1995), stress falls on a CVV syllable regardless of its position within a word, as in (9a); stress is final in words with no long vowels, as in (9b).²

(9) Type 2: Aguacatec Mayan stress
    a. Forms with CVV syllables
       ʔin'taː      'my father'
       'tʃiːbah     'meat'
       'ʔeːq'um     'carrier'
    b. Forms with no CVV syllables
       wu'qan       'my foot'
       ʔal'k'om     'thief'
       tpil'taʔ     'courthouse'

² All examples are from McArthur and McArthur (1956), who list no cases of final CV syllables. Also, they claim that stress falls on the rightmost CVV syllable, yet no words with more than one CVV syllable are found in this source.

Quantity-sensitive stress systems are very rarely of type 3. In Gordon's (2006) survey, only four out of 86 languages are of type 3. Here we illustrate the distribution of stress in Kwakw'ala, in which CVV and CVR syllables pattern as heavy, while CV and CVO pattern as light (Boas 1947; also Zec 1988 and references therein). Stress falls on the leftmost heavy syllable, either CVV, as in (10a), or CVR, as in (10b). In words that contain only light syllables, CV or CVO, stress is final, as in (10c) and (10d).

(10) Type 3 language: Kwakw'ala
     a. 'qaːsa         'to walk'
        'n'aːla        'day'
        'ts'eːkwa      'bird'
        t'ə'liːdzu     'large board on which fish are cut'
     b. 'm'ənsa        'to measure'
        'dəlxa         'damp'
        'dzəmbətəls    'to bury in hole in ground'
        mə'xənxənd     'to strike edge'
     c. nə'pa          'to throw a round thing'
        bə'ha          'to cut'
        m'əkwə'la      'moon'
        ts'əxə'laə     'to be sick'
     d. ts'ət'xa       'to squirt'
        təɬ'ts'a       'to warm oneself'
        kw'əs'xa       'to splash'

The three weight patterns in (5) can be further exemplified with tonal phenomena, namely those provided by languages with lexical, i.e. contrastive, tone (in simpler systems commonly High, or High and Low; chapter 45: the representation of tone). Quantity-sensitive tonal phenomena differ substantially from quantity-sensitive stress.³ Crucial evidence for quantity-sensitivity comes from the so-called contour tones. If no more than one tone is sponsored by a light syllable and no more than two by a heavy one, we can say that multiple tones, standardly referred to as contour tones, may occur on heavy, but not on light, syllables. In other words, we focus on those languages in which a light syllable has one tone-bearing unit and a heavy syllable has two (see Zhang 2002 for different characterizations of contour tones).⁴ We further focus on those languages in which the mapping between tones and tone-bearing units is fairly straightforward: a tone-bearing unit may be associated with at most one tone.

With this background, we turn to the evidence for the three weight patterns coming from the tonal domain. We again rely on Gordon's (2006: 32–33) survey: out of the 408 languages in his survey, 111 use contrastive tone and, of those, 61 use tone in a quantity-sensitive mode. Type 2 and type 3 weight patterns are widely exploited by weight-sensitive tonal phenomena, while type 1 is rarely associated with quantity-sensitive tone. The type 2 pattern is found in 28 languages (four without CVC syllables), and type 3 is found in 30 languages. In type 2 languages, contour tones occur on CVV syllables, but are absent from both CV and CVC syllables. In Navajo, contour tones occur only on CVV syllables, as in (11a), while simple tones occur on all syllable types; (11b) exemplifies the absence of contour tones on CV and CVC syllables (Zhang 2002, based on Young and Morgan 1987).

³ It is not typical for tone to be attracted to a heavy syllable, although some cases have been interpreted in this light. Thus Hopi, as described in Jeanne (1982), has been interpreted as a quantity-sensitive stress system (Hayes 1995): stress occurs on initial heavy syllables, either CVV or CVC, otherwise on non-final peninitial syllables; stress is initial in all disyllables. However, because stress is realized as tonal prominence, this system has also been interpreted as a tonal system in which High tone is attracted to the initial heavy syllable, otherwise to the second syllable, if non-final (Yip 2002: 245). This stress-like behavior of tone, if indeed correctly interpreted, is truly atypical.

⁴ Note, however, that tone languages vary as to what constitutes a tone-bearing unit. What we described here is one of several modes of selecting a tone-bearing unit. On tone and tone-bearing units, see chapter 45: the representation of tone.

(11) Type 2: Navajo contour tone
     a. sáànìì        'old woman'
        hákòónèèʔ     'let's go'
        tèílʔá        'they extend'
     b. hááʔált'èʔ    'exhumation'
        pìkʰìn        'his house'

Another type 2 language is Ju|'hoansi, in which, as reported in Miller-Ockhuizen (1998; also Zhang 2002), contour tones are found only on long vowels and diphthongs, but not on CV syllables or syllables closed with nasals (the only type of closed syllable in the language).

The type 3 weight pattern is exemplified by a number of languages, including Nama (Khoisan), Lithuanian (Indo-European), and Tiv (Niger-Congo). Lithuanian has a pitch accent system, in which contour tones appear on heavy, but not on light, syllables. In particular, a Low High tonal contour, the so-called circumflex accent, occurs on heavy syllables: CVV, as in /víinas/ 'wine' and /zúikas/ 'rabbit', and CVR, as in /gársas/ 'sound', /bálsas/ 'voice', and /lánkas/ 'rainbow'. Syllables that pattern as light are CV and CVC, and those that pattern as heavy are CVV and CVR (Zec 1995 and references therein).

We now turn to the tonal evidence for the type 1 weight pattern. Contour tones are rare, and phonetically difficult to realize, on syllables closed with an obstruent. In his broad survey of quantity-sensitive tone, Gordon (2006) documents only three such cases: Hausa, Luganda, and Musey. Zhang (2002: 51) also lists Ngizim, and Yip (2002: 141–142) mentions the Nilo-Saharan language Kunama (Eritrea). Here we present evidence from Hausa, based on Gordon's (2006) experimental data. Hausa has three tones: two level tones, High and Low, and a contour High Low tone. As shown in (12), on the targeted initial syllables, the two level tones occur on all syllable types, while the contour tone occurs on CVV, CVR, and CVO syllables, but not on CV syllables. That is, the contour tone occurs on heavy, but not on light, syllables.

(12) Type 1: Hausa contour tone

           L        H        HL
     CV    fàsáː    sáfúː    –
     CVV   màːmáː   ráːnáː   lâːláː
     CVR   ràndáː   mándáː   mântáː
     CVO   fàskíː   máskóː   râssáː

Of interest here is the fact that while sonorants, both vowels and consonants, are capable of phonetically realizing pitch, obstruents are not. As shown by Gordon (2006: 92), although the phonological weight of the CVO syllable provides the two tone-bearing units required for the realization of contour tone, the contour is phonetically realized on the vowel, which in this case has greater duration. No comparable increase in duration is evidenced in CVV and CVR syllables with contour tones. Thus, Hausa presents an interesting case of a mismatch between phonology and phonetics. Other phonological phenomena that provide evidence for quantity-sensitivity include vowel shortening in closed syllables, to be addressed in §6, as well as compensatory lengthening (see chapter 64: compensatory lengthening) and poetic meter, which in §2.1 served as evidence for Latin. Onsets may on occasion exhibit quantity-sensitivity; for such cases, see chapter 55: onsets and chapter 47: initial geminates.

3 Representation of syllable quantity

The relevance of quantity-sensitivity, as well as its representation, was clearly recognized in early theoretical approaches to phonology. Both Jakobson (1931) and Trubetzkoy (1939) document weight distinctions among syllables, and cast them in terms of the unit of weight traditionally referred to as the mora: a light syllable contains one mora, and a heavy syllable contains two moras. Quantity-sensitivity was also recognized by Kuryłowicz (1948), who pursued the characterization of quantity in configurational terms, that is, in terms of a subconstituent of the syllable, the rhyme, whose structure is branching for heavy, and non-branching for light, syllables. These two theoretical approaches to quantity-sensitivity, one in terms of constituency and the other in terms of arboreal configuration, emerged again in the 1970s and 80s as competing representations of syllable weight, as well as of the weight of other constituents in the prosodic hierarchy. These two approaches both express an important intuition: that quantity formally corresponds to a binary structure. This will emerge as highly relevant in the representation of the syllable and its internal structure. This is also relevant for the representation of feet, as will be shown in §7. The questions to be addressed in this section are: (i) how is weight computed from the representation of the syllable? and (ii) how are different weight patterns represented? (For a general discussion of syllable structure and its representation, see chapter 33: syllable-internal structure.)

3.1 Quantity represented in configurational terms

We begin with the configurational approach to syllable weight. In the representation in (13), the syllable branches into an onset and a rhyme, with the latter obligatorily dominating the nucleus and, optionally, the coda. The sub-syllabic constituent which is taken to be the domain of weight is the rhyme: if the rhyme branches, the syllable is heavy (13b); otherwise it is light (13a). An alternative assumption has been that a branching nucleus, as in (13c), has its role in the computation of quantity.

(13) a. Light:                       [σ O [R [N V]]]
     b. Heavy (branching rhyme):     [σ O [R [N V] [Co C/V]]]
     c. Heavy (branching nucleus):   [σ O [R [N V V]]]

While this constituency was motivated on other grounds as well, capturing syllable quantity has been one of its important rationales. It was generally assumed that encoding weight distinctions is a crucial role of syllable structure. This representation was advocated, in this or somewhat modified form, by Kiparsky (1979, 1981), McCarthy (1979), Halle and Vergnaud (1980), Hayes (1980), Steriade (1982, 1988), and Levin (1985), among others. In all these approaches, the weight domain, provided by the rhyme subconstituent, crucially excludes the onset consonants, which do not participate in any of the weight patterns characterized in §2.2. How does this representation capture the three weight patterns presented in (5)? In some proposals that primarily focus on type 2 languages (e.g. Halle and Vergnaud 1980), both CVV and CVC syllables are represented in terms of a branching rhyme, that is, as (13b). Capturing both type 1 and type 2 languages called for modifications. In one modification, CVV syllables are represented in terms of a branching nucleus, as in (13c), and CVC syllables in terms of a branching rhyme, as in (13b) (e.g. Hayes 1980). In another modification, different configurations are posited for type 1 and type 2 languages (e.g. McCarthy 1979). Type 3 languages posed a special challenge: heavy CVR syllables in this language type were represented in terms of a branching nucleus (13c), with the weight-bearing sonorants residing in the nucleus together with vowels (Steriade 1990).
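The configurational computation itself is trivial once the structure in (13) is given; the analytical work lies in deciding which segments the rhyme or nucleus may dominate. A minimal sketch (names ours, for illustration only):

def heavy_by_branching_rhyme(rhyme_positions):
    # (13b): heavy iff the rhyme dominates more than one position.
    return len(rhyme_positions) >= 2

def heavy_by_branching_nucleus(nucleus_positions):
    # (13c): heavy iff the nucleus dominates more than one position.
    return len(nucleus_positions) >= 2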

3.2 Quantity represented by constituency

Another way of capturing syllable weight is in terms of constituency. By positing the mora as a sub-syllabic constituent, syllable weight is represented in terms of the number of moras that the syllable dominates. A syllable with one mora is light, and a syllable with more than one mora is heavy.

(14) a. Light:  [σ μ]
     b. Heavy:  [σ μ μ]

While the mora as a unit of syllable weight goes back at least as far as the study of classical languages, it was introduced to theoretical phonology by Jakobson (1931) and Trubetzkoy (1939). Arguments for representing the mora as a sub-syllabic entity are primarily due to Hyman (1985), McCarthy and Prince (1986), Hayes (1989), and Zec (1988). Crucially, moras do not uniquely map to the level of segments. Moraic representations in (14) are thus sufficiently flexible to capture all three systems of syllable quantity. What needs to be stated is the set of segments that can be dominated by the second mora of the syllable: all segments, as in the type 1 weight pattern, only vowels, as in type 2, and vowels and sonorant consonants, as in type 3. How this is to be implemented varies with specific phonological models, which may rely either on rules or on constraints. Thus Hayes (1989) posits a weight-by-position rule, Zec (1988, 1995) posits language-specific sets of moraic segments that act as constraints on the second mora of a syllable, and Morén (1999) proposes optimality-theoretic constraints on moraic segments that parallel Prince and Smolensky’s (1993) constraints on syllable nuclei.

4 Quantity-sensitivity and vowel length

Quantity-sensitivity is not a necessary property of the syllable. A number of languages, some listed in Hayes (1995), do not exhibit quantitative distinctions at the level of the syllable, for example, Bulgarian (Indo-European), Piro (Arawakan), Garawa (Karawic), and Modern Greek (Indo-European). Significantly, all these languages also lack vowel length (chapter 20: the representation of vowel length). This strongly suggests that the basic weight contrast is in fact that between short and long vowels, and raises the question of possible implicational relations between syllable weight and vowel length, either phonemic or non-phonemic. A strong claim about the relation of CVV and CVC syllables, proposed by Kuryłowicz (1948) and Newman (1972), among others, is that a language with heavy CVC syllables also has phonemic vowel length. While true in a number of specific cases, including Latin, Classical Arabic, and Fijian, this claim is too strong. A weaker claim is that the CVV syllable type is available in languages with heavy CVC syllables even if a language does not have phonemic vowel length (cf. Hayes 1989; Zec 1988, 1995). In such languages, vowel length can arise through phonological processes such as compensatory lengthening, as in Ilokano (Hayes 1989), or iambic lengthening, as in Hixkaryana (Hayes 1995: 205 and the references therein). This claim rests crucially on a representation already available in a language (see §3), rather than on its phonemic distinctions.

5 Are weight distinctions binary or multivalued?

Cases of quantity-sensitivity presented thus far are characterized by two degrees of weight: a syllable is either light or heavy. The representations of syllable weight in §3 characterize quantity-sensitivity as a binary opposition, with two degrees of weight. However, a further question to be explored is whether there are cases of more than two degrees of weight, that is, whether quantity distinctions can be construed as scalar in nature. Weight patterns with weight-bearing consonants, types 1 and 3, present an obvious point of departure. In a language with light CV and heavy CVV and CVC syllables, what is the status of CVVC and CVCC syllables? Are such syllable shapes allowed? And, if allowed, are they superheavy? That is, do they call for syllable structures that are either ternary branching or trimoraic? Likewise, what is the status of CVVR (and the less likely CVRR) syllables in type 3 languages?

Starting with type 1 languages, we find the following two cases. First, a language may have a syllable inventory that includes CVVC and CVCC syllables. In Hindi, such syllables give rise to three degrees of weight, as in (15a). Evidence for this ternary weight pattern comes from quantity-sensitivity in the stress system. Stress falls on a superheavy syllable if there is one, otherwise on a heavy syllable, otherwise on a light syllable (glossing over the complexities of this system; for details and examples, see §8). By contrast, Latin also has CVVC and CVCC syllables in addition to the standard type 1 inventory, yet exhibits only two degrees of weight, as in (15b). In this case, CVVC and CVCC syllables are functionally non-distinct from heavy syllables, CVV and CVC. This functional identity is supported by both stress and poetic meter.

(15) a. Hindi
        light        CV
        heavy        CVV, CVC
        superheavy   CVVC, CVCC
     b. Latin
        light        CV
        heavy        CVV, CVC, CVVC, CVCC
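In moraic terms, the Hindi/Latin contrast in (15) amounts to whether a third mora is allowed to be distinctive. A sketch (illustrative only; mora counts as in §3.2):

def weight_class(moras, mora_cap=2):
    # Latin-type systems cap syllables at two moras, so CVVC/CVCC collapse
    # into the heavy class; Hindi-type systems let a third mora surface.
    moras = min(moras, mora_cap)
    return {1: 'light', 2: 'heavy', 3: 'superheavy'}[moras]

# weight_class(3, mora_cap=3) -> 'superheavy'   (Hindi CVVC)
# weight_class(3, mora_cap=2) -> 'heavy'        (Latin CVVC)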

Newman (1972) claims that all weight distinctions are binary, pointing to languages like Latin. However, languages like Hindi clearly show that ternary weight distinctions are an attested reality. Second, a language may have a restricted syllable inventory, with only CV, CVV, and CVC, excluding both CVVC and CVCC syllable shapes. Such languages impose binarity as an upper limit on syllable complexity, both in terms of weight, or mora count, and in terms of the number of consonants that may occur at the right margin of the syllable. This situation is clearly illustrated by Turkish (Clements and Keyser 1983). The syllable inventory of Turkish, a type 1 language, includes light CV and heavy CVV and CVC syllables, and systematically lacks CVVC and CVCC syllables. If the prohibited syllable types arise by virtue of morpheme concatenation, they are eliminated by phonological processes. In (16a), the underlying long vowel is shortened in a closed syllable (nominative and ablative), but not in an open syllable (accusative). And in (16b), the two post-vocalic consonants in the underlying form are split by an epenthetic vowel (chapter 67: vowel epenthesis), in order to avoid a CVCC syllable (nominative and ablative).

(16) Turkish
     a. CVVC → CVC
                             accusative   nominative   ablative
        /zɑmɑːn-/ 'time'     zɑmɑːnɯ      zɑmɑn        zɑmɑndɑn
        /ispɑːt-/ 'proof'    ispɑːtɯ      ispɑt        ispɑttɑn
     b. CVCC → CVCVC
        /kɑrn-/ 'abdomen'    kɑrnɯ        kɑrɯn        kɑrɯndɑn
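Both Turkish repairs follow from the same bimoraic ceiling. A toy sketch (illustrative only; syllables are (onset, nucleus, coda) triples, with a doubled nucleus standing for a long vowel, and 'ı' standing in for the epenthetic high vowel):

def enforce_binarity(onset, nucleus, coda):
    if len(nucleus) == 2 and coda:      # CVVC: shorten the vowel, as in (16a)
        nucleus = nucleus[:1]
    if len(coda) == 2:                  # CVCC: epenthesize, as in (16b)
        return [(onset, nucleus, ''), (coda[0], 'ı', coda[1])]
    return [(onset, nucleus, coda)]

# enforce_binarity('m', 'ɑɑ', 'n') -> [('m', 'ɑ', 'n')]                  cf. zɑ.mɑn
# enforce_binarity('k', 'ɑ', 'rn') -> [('k', 'ɑ', ''), ('r', 'ı', 'n')]  cf. kɑ.rın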

Type 3 languages, or at least the known cases, do not provide evidence for ternary weight distinctions. Lithuanian, for example, has the following syllable shapes in its inventory, classified in terms of weight:

(17) Lithuanian
     Light: CV, CVO, CVOO
     Heavy: CVV, CVR, CVVO, CVRO

This weight pattern, as we saw in §2, is supported by the system of Lithuanian pitch accents (chapter 42: pitch accent systems): only heavy syllables, that is, CVV and CVR, can have contour tones. Lithuanian also provides evidence for strict binarity. This is evidenced by the process known as ablaut, which applies in verbal morphology, with the effect of lengthening the root vowels in the preterite and infinitive, but not in the present form (Zec 1995). Vowel lengthening due to ablaut takes effect in all preterite forms: the root vowel occurs in an open syllable, due to the vowel-initial ending -ee, and is free to lengthen. In the infinitive forms, the root vowel is in a closed syllable, due to the consonant-initial ending -ti. Lengthening takes place in (18a), i.e. in roots that end in an obstruent, but not in roots that end in a sonorant (18b).

(18) Lithuanian: Ablaut in verbal forms

              root    present   preterite   infinitive
     a. CVO   tup-    tupia     tuupee      tuupti       'perch'
              dreb-   drebia    dreebee     dreebti      'splash'
     b. CVR   vir-    viria     viiree      virti        'boil'
              mir-    miria     miiree      mirti        'die'

That is, ablaut may not create a superheavy CVVR syllable, and is therefore prevented from taking effect in the infinitives of the roots in (18b). While type 2 languages may tolerate CVCC and CVVC syllables in their syllable inventories, such syllables do not form a natural class: the former has the weight of CV, and the latter has the weight of CVV syllables. The extended syllable inventories we document in this section call for representations richer than those discussed in §3. This was directly addressed in moraic representations of the syllable and its weight: a constraint restricts the number of moras per syllable to no more than two; and this constraint can be violated in some languages, giving rise to trimoraic syllables, as in Hindi. The syllable inventory in Latin is accommodated by allowing some non-moraic consonants at the syllable’s right margin (for a detailed discussion, see Sherer 1994).
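The blocking pattern in (18) can be stated as a simple side condition on ablaut. A sketch (illustrative only; type 3 moraicity as in (5)):

SONORANTS = set('lmnr')

def ablaut_may_lengthen(root_final_c, suffix_is_vowel_initial):
    if suffix_is_vowel_initial:
        return True                  # root C is an onset: vir+ee -> vii.ree
    # Before a consonant-initial suffix the root C closes the syllable;
    # lengthening is blocked iff that C is itself moraic (a sonorant),
    # since the resulting CVVR syllable would exceed two moras.
    return root_final_c not in SONORANTS

# ablaut_may_lengthen('p', False) -> True    (tuup.ti)
# ablaut_may_lengthen('r', False) -> False   (*viir.ti; virti surfaces instead)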

6 Inconsistencies in weight patterns

The representations in §3, despite some conceptual differences, make the strong prediction that quantity distinctions in a language will be of the same type across the board, that is, in all relevant phonological processes, and in all contexts. However, a challenge to this strong position comes from many known cases of weight inconsistencies.

6.1 Weight inconsistencies with respect to phonological process

In §2.1 we saw that Latin belongs to the type 1 weight pattern both in its stress system and in the system of poetic meter. The phenomenon of compensatory lengthening (chapter 64: compensatory lengthening) conforms to this same weight pattern, as in /kasnus/ → [kaːnus]. While not uncommon, weight consistency across different phonological processes, as evidenced in Latin, is not the general case. Weight inconsistencies are encountered in a number of languages, as noted by Steriade (1990), as well as Hayes (1995) and Gordon (2006). One such case is Kiowa (Watkins 1984). As shown in (19), vowels are shortened in syllables closed by sonorants (19a), as well as in those closed by obstruents ((19b) and (19c)), suggesting a type 1 weight system that obeys strict binarity.

(19) Kiowa short vowels in closed syllables: Type 1
     a. gúːlêː         'write-imperf-fut'
        gûl            'write-imp'
        gúltɔ̀ː         'write-fut'
     b. cáːdɔ̀ː         'from the doorway'
        cát            'entrance, doorway'
        cátpé          'against the doorway'
     c. tʰɔ̂ː           'beyond'
        tʰɔ̂ːdèkʰìː     'next day'
        tʰɔ̂p           'away beyond'

However, the distribution of contour tones, shown in (20), clearly points to a weight system of type 3. Contour tones occur on CVV syllables and syllables closed by a sonorant, as in (20a), but not on either CV syllables or syllables closed with an obstruent, as exemplified in (20b).

(20) Kiowa contour tones: Type 3
     a. páːlêː    'weak'
        sân       'child'
        kʰûl      'pull off'
     b. sàːné     'snake'
        sép       'rain'

Another case is Lhasa Tibetan (Gordon 2006, based on Dawson 1980), in which the stress system treats only CVV syllables as heavy, while the system of tone treats as heavy both CVV and CVR syllables (Gordon 2006 and references therein). In other words, Lhasa Tibetan is a type 2 language in its stress system, and a type 3 language in its tonal system. According to Steriade (1990), variability in weight is also found in Classical Greek, in which CVV syllables are heavy for the purposes of tone, yet all syllables are heavy for the purposes of stress distribution. Thus, stress falls on the penultimate syllable if the final syllable is heavy, either CVC(C) or CVV(C); otherwise it is antepenultimate. However, only CVV syllables can sustain tonal contours, either HL or LH. Cases of weight variability in different phonological subsystems within a single language present an important challenge to formal representations, and call for fresh perspectives on the syllable and its quantity.

6.2 Weight inconsistencies with respect to phonological context

It has been noted in much work on stress that the weight of a syllable may be computed differently in word-internal and word-final positions. Thus in Classical Arabic, as shown in §2, stress falls on the rightmost CVV or CVC syllable, yet never on the final CVC. That is, CVC syllables are computed as heavy word-internally and as light word-finally. A further fact, not mentioned in §2, is that final CVCC syllables are always stressed, i.e. they are computed as heavy (CVCC syllables do not occur word-internally). In other words, word-final consonants do not contribute to weight. Such cases of variable weight were subsumed in Hayes (1980) under the more general rubric of extrametricality (chapter 43: extrametricality and non-finality), according to which certain phonological entities, segments as well as higher constituents, are "invisible" to phonological processes at word edges. There have been proposals, however, to treat contextual differences in weight as representational differences (Davis 1987; Kager 1989; Rice 1995; Rosenthall and van der Hulst 1999; see also chapter 36: final consonants). Under this view, the CVC sequence in Classical Arabic would be parsed as a heavy syllable word-internally, and as a light syllable word-finally.

It has been shown, however, that contextual weight differences are not restricted to word edges. Several cases of this type have been reported in Hayes (1994, 1995), among them Cahuilla and Eastern Ojibwa, as well as Central Alaskan and Pacific Yupik. In the Pacific Yupik dialect of Chugach, CVV syllables are heavy in all positions, while CVC syllables are heavy only initially, and light elsewhere. The distribution of stress in Chugach is fairly complex, and there can be more than one stress per word (for details, see Leer 1985; Kager 1993; Hayes 1995). We focus here on the evidence for the variable weight of CVC syllables. While initial CVV and CVC syllables are stressed, as in /'taːta'qa/ 'my father' and /'anciku'kut/ 'we'll go out', initial CV syllables are not, as in /mu'lu'kuːt/ 'if you take a long time'. But in medial position, CVC syllables pattern with CV rather than CVV syllables. Note that the second syllable in /'kal'maːnuq/ 'pocket', a CVV syllable, is stressed. Neither CV nor CVC syllables are stressed in this same environment, as in the forms /'anku'taʁtu'a/ 'I'm going to go out' and /'atmax'tʃiqu'a/ 'I will backpack'.

Another relevant case is Goroa (Hayes 1980; Rosenthall and van der Hulst 1999, and references therein), in which stress falls on the leftmost CVV syllable, as in (21a), or on the final CVC syllable, as in (21b), or on the penultimate syllable, as in (21c). Crucially, CVC syllables in positions other than final are not heavy: the second syllable, a CVC, in /giram'boːda/ does not win over the following CVV syllable, nor do the CVC syllables in /axe'mis/ and /idir'dana/ attract stress.

Goroa stress: Variable weight of CVC syllables
  a. Leftmost CVV stressed
     duːgnunoː ‘thumb’
     giramˈboːda ‘snuff’
     heniˈnau ‘young’
  b. Final CVC stressed
     aˈdux ‘heavy’
     axeˈmis ‘hear’
  c. Penultimate syllable stressed
     oroˈmila ‘because’
     amˈrami ‘ivory arm ring’
     idirˈdana ‘sweet’

Contextually conditioned variation in syllable quantity affects CVC syllables, precisely those that, cross-linguistically, can be either light or heavy. Thus the variability in the weight of CVC syllables found across languages is also evidenced within individual languages. The phenomenon of contextually conditioned weight inconsistency of CVC syllables has been addressed, with a fair amount of success, in the Optimality Theory framework, most notably in Rosenthall and van der Hulst (1999).

7 Quantity-sensitivity of the foot

Syllables are grouped into feet, which belong to the next higher level of the prosodic constituency in (1) (see chapter 40: the foot; chapter 41: the representation of word stress). Quantity-sensitivity of the syllable is directly reflected at the level of the foot, as noted in Hayes (1980, 1995), McCarthy and Prince (1986), and Prince (1990), among others. Feet play an important role in the characterization of stress and in prosodic morphology, and our examples will come from both domains.

As shown in a vast body of literature, feet tend to be binary. That is, feet are prosodic constituents resulting from grouping at most two constituents at the next lower level (Hayes 1995; McCarthy and Prince 1986; Prince 1990; among others). How this proceeds depends crucially on whether a language has a quantity-sensitive or a quantity-insensitive foot system (Hayes 1980). In quantity-insensitive systems, pairs of syllables are incorporated into feet regardless of their weight. Relevant for our discussion is foot formation in quantity-sensitive systems, in which syllable weight plays a crucial role. An important property of such systems is the commensurability of a heavy syllable with two lights. There are two types of quantity-sensitive feet, trochaic and iambic (chapter 44: the iambic–trochaic law). In quantity-sensitive trochaic systems a foot corresponds to either one heavy syllable, as in (22a), or two light syllables, as in (22b); feet are left-headed, that is, have initial prominence, shown in (22b) by underlining.

(22) Trochaic foot inventory
  a. (σH)
  b. (σ̲L σL)

This receives a straightforward interpretation in the moraic theory of syllable structure: a foot contains two moras, a condition met either by one heavy syllable, as in (22a), or by two lights, as in (22b). A heavy syllable has a dual status: it counts not only as a syllable but also as a foot. This foot inventory is active in the stress system of Fijian, a type 2 language (Hayes 1995, and references therein). In words with only light syllables, pairs of syllables are incorporated into feet, computing from right to left, and foot-initial syllables are assigned stress. As a result, stress falls on every second syllable, computed from the right edge, as shown in (23). Parsing of syllables into feet obeys strict binarity, but is not necessarily exhaustive. In words with an odd number of syllables, as in (23c) and (23e), a syllable at the left edge is not footed. (The rightmost stressed syllable bears primary stress; others bear secondary stress.)

(23)

Fijian stress: Light syllables only
  a. (ˈlako) ‘go’
  b. (ˈtalo) ‘pour’
  c. βi(ˈnaka) ‘good’
  d. (ˌndiko)(ˈnesi) ‘deaconess’
  e. pe(ˌresi)(ˈtendi) ‘president’

(24) Fijian stress: Light and heavy syllables
  a. ki(ˈlaː) ‘know’
  b. (ˌmbeː)(ˈleti) ‘belt’
  c. (ˌmbele)(ˌmboː)(ˈtomu) ‘bellbottoms’
  d. pa(ˌroː)ka(ˈramu) ‘program’
  e. (ˌmiː)(ˌsini)(ˈŋgani) ‘machine-gun’
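Viewed procedurally, the parses in (23) and (24) follow from a single right-to-left sweep. The sketch below is a schematic illustration of our own, not part of the chapter's formal apparatus: syllables are reduced to the weight labels 'H' and 'L', a foot is built from one heavy syllable or two adjacent lights, and stray lights are left unfooted.

    def parse_trochees(weights):
        """Right-to-left parse into moraic trochees: (H) or (L L).
        `weights` is a list of weight labels, e.g. ['L', 'H', 'L', 'L', 'L'];
        returns the feet as tuples of syllable indices."""
        feet = []
        i = len(weights) - 1
        while i >= 0:
            if weights[i] == 'H':                  # one heavy = one bimoraic foot
                feet.append((i,))
                i -= 1
            elif i > 0 and weights[i - 1] == 'L':  # two lights = one bimoraic foot
                feet.append((i - 1, i))
                i -= 2
            else:                                  # stray light: left unfooted
                i -= 1
        return list(reversed(feet))

    # (24d) pa.roo.ka.ra.mu 'program': feet on roo and ra.mu only
    print(parse_trochees(['L', 'H', 'L', 'L', 'L']))   # [(1,), (3, 4)]

Run on the weight string of (24d), the parser returns feet over the second syllable and the final two syllables, leaving the first and third syllables unfooted, exactly the parse pa(ˌroː)ka(ˈramu).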

In words with both light and heavy syllables, each heavy syllable corresponds to a foot, and is stressed. Right-to-left footing is thus disrupted by heavy syllables, and has to work around them. In the disyllabic form with a heavy final syllable, in (24a), the initial syllable is left unfooted. And in the form in (24d), which has five syllables, two light syllables, the first and the third, are left unfooted. All syllables are footed in the remaining forms in (24).

The inventory of feet in (22) captures the distribution of stress in a number of trochaic quantity-sensitive systems, including some of the cases presented in §2. In particular, stress in Latin follows the same pattern as in Fijian, with one notable difference: the final syllable is ignored for the purposes of stress (another case of so-called extrametricality, see §6.2). As a result, trisyllabic forms with only light syllables have initial stress, as in (ˈani)ma ‘soul (nom sg)’. Likewise, final heavy syllables are not stressed: in (ˈgau)dēns ‘rejoicing (nom sg)’ the penultimate heavy, but not the final heavy, is footed, and stressed (for a detailed analysis, see Mester 1994; Hayes 1995).

We also present a case of prosodic morphology that employs the foot inventory in (22). In the system of Japanese hypocoristic formation, as characterized in Poser's (1990) detailed study, hypocoristics are formed by adding the suffix -tjan to proper names, either to their full or modified form. As shown by Poser, what is considered to be modification is really a case of template satisfaction. Crucially, the template corresponds to a trochaic foot: either to two light syllables or to one heavy syllable. Japanese, a type 1 language, has light CV and heavy CVV and CVC syllables. As shown by the truncated versions of the proper name Hanako, the suffix is added to two light syllables, as in (25a), or one heavy, as in (25b) and (25c). The truncated form cannot be smaller than a foot, corresponding to a single light syllable, as in the ill-formed (25d). Nor can the truncated form be greater than a foot, corresponding to three light syllables, as shown by the ill-formed (26b). Proper names corresponding to a light syllable are converted to a heavy syllable, that is, to a foot; in (27a) this is accomplished by vowel lengthening. Note that (27b) is also available, as -tjan can be added to any proper name in its full form regardless of its size.

(25) Hypocoristic forms for Hanako
  a. hanatjan
  b. haatjan
  c. hattjan
  d. *hatjan

(26) Hypocoristic forms for Takatugu
  a. takatjan
  b. *takatutjan

(27) Hypocoristic forms for Ti
  a. tiitjan
  b. titjan

Thus, in trochaic prosodic morphology, just as in trochaic stress systems, a heavy syllable is functionally equivalent to two light syllables. Quantity-sensitive iambic feet differ somewhat in shape from the trochaic set, as shown by the inventory in (28). Iambic feet are right-headed, indicated by the underlining.

(28) Iambic foot inventory
  a. (σH)
  b. (σL σ̲L)
  c. (σL σ̲H)

In this case, as well, syllable quantity plays a central role: for a foot to be well-formed, it needs to contain syllables of the correct weight. The iambic system of quantitative feet captures the distribution of stress in Asheninca (Hayes 1995; Payne 1990). Asheninca has a type 2 weight system, with only CVV heavy syllables. The forms in (29a) contain only light syllables: binary right-headed feet are computed from right to left. The final syllable is regularly left unfooted, which yields initial stress in disyllables, as in /ˈhaka/ ‘here’. Crucial are the forms in (29b), which contain both light and heavy syllables, and can therefore exemplify all members of the foot inventory.

(29) Asheninca stress system
  a. (pa.ˈme).(na.ˈko).(wen.ˈta).ke.ro ‘take care of her’
     (ha.ˈma).(nan.ˈta).(ke.ˈne).ro ‘he bought it for her’
     (no.ˈko).(wa.ˈwe).ta.ka ‘I wanted (it) in vain’
     (no.ˈton).(ka.ˈmen).to ‘my gun’
     (ka.ˈman).ta.ke ‘he/she said’
  b. (no.ˈma).(ko.ˈrjaa).(ˈwai).(ta.ˈpaa).ke ‘I rested a while’
     (pi.ˈJaa).(ˈpaa).ke ‘you saw on arrival’
     (i.ˈkjaa).(ˈpiin).ti ‘he always enters’
     (ˈpoo).(ka.ˈna).ke.ro ‘you threw it out’
     (ˈpaa).(ti.ˈka).ke.ri ‘you stepped on him’

Quantity-sensitive iambic feet also figure in prosodic morphology. In Ulwa, which has a type 1 weight system, the suffix /-ka/ is attached to the leftmost iambic foot, as in (30). It occurs at the right edge of a stem only when the entire stem corresponds exactly to a foot, as in (30a). In (30b), the only way for /-ka/ to be attached to an iambic foot is to occur stem-internally. (30)

Ulwa construct state (from McCarthy and Prince 1990: 228)
        base        possessed
  a.    al          al-ka          ‘man’
        bas         bas-ka         ‘hair’
        kii         kii-ka         ‘stone’
        sana        sana-ka        ‘deer’
        amak        amak-ka        ‘bee’
        sapaa       sapaa-ka       ‘forehead’
  b.    suulu       suu-ka-lu      ‘dog’
        kuhbil      kuh-ka-bil     ‘knife’
        baskarna    bas-ka-karna   ‘comb’
        siwanak     siwa-ka-nak    ‘root’
        anaalaaka   anaa-ka-laaka  ‘chin’
        karasmak    karas-ka-mak   ‘knee’
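The placement of /-ka/ can be restated as a small procedure: compute the leftmost iambic foot of the stem and insert the affix immediately after it. The sketch below is our own illustration, not McCarthy and Prince's formalization; it assumes stems arrive pre-syllabified with 'H'/'L' weight labels.

    def ulwa_construct(syllables, weights, affix='ka'):
        """Infix -ka- after the leftmost iambic foot of the stem.
        The foot is one syllable if the first syllable is heavy (H),
        otherwise two (L L or L H)."""
        foot_len = 1 if weights[0] == 'H' else min(2, len(syllables))
        head = ''.join(syllables[:foot_len])
        rest = ''.join(syllables[foot_len:])
        return head + '-' + affix + ('-' + rest if rest else '')

    print(ulwa_construct(['suu', 'lu'], ['H', 'L']))                        # suu-ka-lu
    print(ulwa_construct(['a', 'naa', 'laa', 'ka'], ['L', 'H', 'H', 'L']))  # anaa-ka-laaka
    print(ulwa_construct(['al'], ['H']))                                    # al-ka

When the whole stem is exactly one iambic foot, as in (30a), the affix surfaces at the right edge; otherwise it is stem-internal, as in (30b).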

Trochaic and iambic systems differ with regard to the role of quantity, as noted in Hayes (1985) and Prince (1990), as well as in chapter 44: the iambic–trochaic law. The preferred type of trochaic disyllabic foot includes two light syllables, while iambic feet optimally correspond to a sequence of a light and a heavy syllable. Thus, disyllabic trochaic feet are preferably even, while iambic feet are preferably of uneven quantity. Evidence for even trochaic quantity comes from so-called trochaic shortening, which makes an uneven trochee even by vowel shortening, as exemplified by Fijian. The form in (31a), with an underlying long vowel, undergoes shortening when integrated into a disyllabic foot, as in (31b).

(31)

Fijian: Trochaic shortening
  a. ˈtaː ‘chop’
  b. ˈta-ja ‘chop-trans-3sg obj’

By contrast, uneven quantity is an important feature of iambic systems. A number of iambic stress systems are characterized by iambic lengthening, including Menomini, Hixkaryana, and Kashaya (Hayes 1995). In Hixkaryana (Cariban), prominent CV syllables undergo vowel lengthening, as in (32a) and (32b), and thus become heavy. Note that prominent CVC syllables, which are already heavy, are not subject to lengthening, as in the second foot in (32c), and the initial syllables in (32a) and (32b).

(32)

Hixkaryana: Iambic lengthening
  a. owtohona → (ˈow)(toˈhoː)na ‘to the village’
  b. tohkurjehonahaʃaka → (ˈtoh)(kuˈrjeː)(hoˈnaː)(haˈʃaː)ka ‘finally in Tohkurye’
  c. mɨhananɨhno → (mɨˈhaː)(naˈnɨh)no ‘you taught him’

Generalizations about the quantity of trochaic and iambic groupings are stated in Hayes (1995) as the Iambic–Trochaic Law (see chapter 44: the iambic–trochaic law): (33)

The Iambic–Trochaic Law
  a. Elements contrasting in intensity naturally form groupings with initial prominence.
  b. Elements contrasting in duration naturally form groupings with final prominence.

8 Scalar quantity systems

While binary quantity systems are based primarily on grouping syllables into feet, scalar quantity systems are based on prominence, defined along some dimension (Prince and Smolensky 1993; Hayes 1995). A central prominence dimension is syllable weight, although other dimensions, such as tone and vowel height, have been evidenced as well. We present two cases with syllable weight as the prominence dimension. One is Kashmiri, with examples given in (34) (Kenstowicz 1993; Rosenthall and van der Hulst 1999). In Kashmiri, CVV syllables are heavier than CVC, which in turn are heavier than CV. Thus, in words with only CV and CVV syllables, stress falls on the leftmost CVV, as in (34a). In words with only CV and CVC, stress falls on the leftmost CVC, as in (34b). In words with both CVC and CVV syllables, stress falls on the CVV syllable, as in (34c). Finally, with only CV syllables present, stress is initial, as in (34d). The final syllable is excluded from scansion. (None of the sources supply glosses for Kashmiri forms.) (34)

Kashmiri stress: CVV > CVC > CV
  a. muˈsiːbah, aˈjoːgjə taː
  b. baˈgandarladin, juniˈvarsiti
  c. amˈriːka, masˈraːwun
  d. ˈtsaripop, ˈpaharadariː


Languages in which stress is assigned on the basis of scalar syllable prominence may have several degrees of syllable weight. Thus Hindi (for the dialect described in Kelkar 1968) has three degrees of syllable weight: superheavy syllables CVVC and CVCC are more prominent than heavy syllables, CVV and CVC, which in turn are more prominent than CV syllables, as stated in (35). (35)

CVVC, CVCC > CVV, CVC > CV

Excluding the final syllable from scansion, stress is assigned to the heaviest available syllable, as in (36). In both forms stress falls on a CVVC syllable, which in (36b) wins over a CVV syllable, and in (36a) over both a CV and CVV syllable. (36)

a. ˈʃoːx-abaːniː ‘talkative’
b. ˈreːzgaːriː ‘small change’

If there is a tie, stress is assigned to the rightmost (non-final) syllable: to a CV syllable in (37a), and a CVV syllable in (37b) and (37c). (37)

a. saˈmiti ‘committee’
b. roːˈzaːnaː ‘daily’
c. kaːˈriːgariː ‘craftsmanship’

Interestingly, when the final syllable is the heaviest in the word, it is not excluded from scansion, as in (38): (38)

kiˈdhar ‘which way’
ruˈpia ‘rupee’
asˈbaːb ‘goods’
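The Hindi pattern in (35)-(38) is algorithmic enough to state in a few lines. The following sketch is our own illustration (syllables are given directly as shape strings, and the lexically listed subtleties of Kelkar's dialect are ignored): the final syllable is excluded from scansion unless it is strictly the heaviest, the heaviest remaining syllable is stressed, and ties resolve rightward.

    WEIGHT = {'CVVC': 3, 'CVCC': 3, 'CVV': 2, 'CVC': 2, 'CV': 1}

    def hindi_stress(shapes):
        """Index of the stressed syllable, given shapes like ['CVVC', 'CVV'].
        The final syllable is skipped unless it is strictly the heaviest in
        the word; ties go to the rightmost syllable in the scansion domain."""
        w = [WEIGHT[s] for s in shapes]
        final_is_heaviest = len(w) > 1 and w[-1] > max(w[:-1])
        domain = w if final_is_heaviest or len(w) == 1 else w[:-1]
        best = max(domain)
        return max(i for i, x in enumerate(domain) if x == best)

    print(hindi_stress(['CVVC', 'CVV', 'CVV']))   # 0: (36b) 'reːzgaːriː
    print(hindi_stress(['CV', 'CV', 'CV']))       # 1: (37a) saˈmiti
    print(hindi_stress(['CV', 'CVC']))            # 1: (38) kiˈdhar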

Quantity in Hindi is thus computed along a scale of syllable weight, with the superheavy syllable being most prominent, followed by the heavy syllable and then by the least prominent light syllable. This case is analyzed in precisely these terms in Hayes (1995) and Prince and Smolensky (1993), although in different frameworks: in rule-based metrical theory and in Optimality Theory, respectively.

An interesting mode of computing prominence is found in Pirahã, a Mura language of Brazil (Everett 1988; Hayes 1995). The Pirahã prominence scale combines syllable weight and onset quality (on onsets, see chapter 55: onsets). While CVV syllables are more prominent than CV syllables, voiceless onsets are more prominent than voiced onsets, and the presence of an onset is more prominent than its absence, yielding the scale in (39).

(39)

KVV > GVV > VV > KV > GV

[K = voiceless, G = voiced]

Stress falls on whichever of the last three syllables of the word is highest on this scale, as in (40a). In the event of ties, the rightmost syllable wins, as in (40b).

(40)

Pirahã prominence-based stress
  a. ˈkaːgai, ʔapaˈbaːsi, ˈʔibogi
  b. koˈpo, ʔabaˈpa, paohoaˈhai
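The two-dimensional Pirahã scale can be captured by comparing pairs of rime weight and onset strength, with weight dominating. The sketch below is our own schematic rendering, not Everett's or Hayes's formalization; syllables are (onset, rime) pairs with 'K' for a voiceless onset, 'G' for a voiced one, and '' for none, and the glottal stop is treated as voiceless.

    ONSET_RANK = {'K': 2, 'G': 1, '': 0}   # voiceless > voiced > no onset

    def prominence(syllable):
        """Prominence of an (onset, rime) pair: rime weight first (VV beats
        V), onset quality second, giving KVV > GVV > VV > KV > GV."""
        onset, rime = syllable
        return (2 if len(rime) == 2 else 1, ONSET_RANK[onset])

    def piraha_stress(syllables):
        """Most prominent of the last three syllables; ties go rightmost."""
        start = max(0, len(syllables) - 3)
        best = start
        for i in range(start, len(syllables)):
            if prominence(syllables[i]) >= prominence(syllables[best]):
                best = i   # ">=" implements the rightmost-wins tie-break
        return best

    # (40b) ko.po: two KV syllables tie, so the rightmost wins
    print(piraha_stress([('K', 'o'), ('K', 'o')]))   # 1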


Further dimensions for computing prominence are in no obvious way related to the patterns of syllable quantity surveyed here. Yet, because of their scalar nature, they are highly reminiscent of quantity-based systems of prominence. One such dimension is vowel quality: given the sonority scale, stress falls on the most sonorous vowel. Prominence systems of this type have been analyzed in Kenstowicz (1997) and de Lacy (2004). In Mordwin, for example, non-high vowels are more prominent than high vowels (chapter 21: vowel height). In words with only non-high vowels, or with only high vowels, stress falls on the leftmost syllable. However, in words that contain both high and non-high vowels, stress falls on a non-high vowel, regardless of its position in the word. Another dimension is tonal prominence: syllables associated with High tones are more prominent than syllables associated with Low tones, and thus more likely to be associated with stress. Prominence systems of this type are described in Hayes (1995) and de Lacy (2002); for a somewhat different perspective, see Zec (2003). Of particular interest is the complex case of Nanti, a Kampa language of Peru: its stress system, which is of the iambic type, is also governed by several types of prominence, including syllable quantity and vowel quality (Crowhurst and Michael 2005).

9 Quantity-sensitivity at the higher levels of the prosodic hierarchy

When focusing on the higher levels of the prosodic hierarchy, the prosodic word and the prosodic phrase, we are in fact dealing with the morphosyntax/prosody interface. Quantity-sensitivity is a specifically prosodic phenomenon and is not known to play any role in other modules of the grammar. Any effects of quantity-sensitivity in either morphology or syntax are therefore to be attributed to prosody. We addressed the interfaces with morphology in §7, with two cases of affixes that select not only a morphological class, but also a prosodic type of stem; in both cases, the affix selects a foot. Many more such cases are found in the literature (McCarthy and Prince 1986, 1990, 1995; among others).

The word level of the prosodic hierarchy is constituted by a grouping of feet (chapter 41: the representation of word stress; chapter 51: the phonological word). In practice, however, one foot is sufficient for a prosodic word to achieve the desired quantity, as documented by numerous cases of minimal word size. Moreover, in a number of languages, no minimal size beyond a single syllable is imposed on prosodic words. This is broadly documented in Hayes (1995) and Downing (2006), among others. In sum, the prosodic word provides no evidence for quantity-sensitivity of the sort found at the level of the syllable and the foot: its binary structure is not distinct from that of a foot. There are no known cases of a prosodic word minimally branching into two feet, yet this would be expected, based on the situation at the lower end of the prosodic hierarchy.

However, quantity-sensitivity has been evidenced at the higher end of the hierarchy, that is, at the level of the prosodic phrase. The distribution of a syntactic constituent should not be affected by its length or internal complexity. When such effects arise, they are generally attributed to prosody. We focus here on cases of branching in prosodic phrases, typical cases of apparent quantity-sensitivity of syntactic constituents. Cases of binary branching prosodic phrases were reported by Nespor and Vogel (1986), with evidence from Italian, French, and English. In Italian, for example, a prosodic phrase preferably contains more than one prosodic word, as shown by the following cases (Nespor and Vogel 1986):

(41)

Prosodic phrase formation in Italian
  a. Avrà trovato (il pescecane)φ
     ‘He will have found the shark.’
  b. (I caribù nani)φ sono estinti
     ‘Dwarf caribous are extinct.’
  c. Hanno dei (caribù)φ (molto piccoli)φ
     ‘They have very small caribous.’

While complements that correspond to single prosodic words, as in (41a), form one-word prosodic phrases, multi-word complements, as in (41b) and (41c), correspond to branching prosodic phrases. The prosodic phrasing in (41c) further shows that complements with three prosodic words do not correspond to a single prosodic phrase, as branching prosodic phrases contain at most two prosodic words. By contrast, Serbo-Croatian sentence-initial topics have to include at least two prosodic words (Zec and Inkelas 1990), and thus exemplify obligatory branching in prosodic phrases. This line of research has been continued by Ghini (1993), Selkirk (2000), and Sandalo and Truckenbrodt (2002), among others.

10 Remarks on markedness

It is important to note that the markedness (chapter 4: markedness) of light and heavy constituents is not identical across prosodic levels: heavy constituents are marked at the level of the syllable, while light constituents are marked at the level of the foot. This is directly encoded in Optimality Theory. Constraints listed in (42) assign marked status to heavy syllables: to CVV syllables, as in (42a), and to syllables with coda consonants, as in (42b) and (42c). While (42b) targets any coda consonant, (42c) targets any weight-bearing segment.

(42) a. NoLongVowel (Rosenthall 1994)
        A vowel should not be long, i.e. linked to more than one mora.
     b. NoCoda (Prince and Smolensky 1993)
        Syllables must not have a coda.
     c. *Mora[seg] (Morén 1999)
        Do not associate a mora with a particular type of segment.

These constraints, which belong to the markedness family, penalize binary structures at the syllable level, thus favoring a simple CV syllable, which is light. Thus, light syllables emerge as the unmarked case: all languages have light syllables, and some may also have heavy syllables. Superheavy, i.e. trimoraic, syllables are, of course, also marked, and are penalized as such by a constraint against trimoraic syllables proposed by Sherer (1994). By contrast, heavy feet are preferred over light ones. Binary constituents are highly desirable at the level of the foot, both in trochaic and iambic systems. Non-binary, or light, feet are permitted in some languages under very special conditions and banned in others. The unmarked condition for feet is thus to be binary, that is, heavy, either under a moraic or a syllabic analysis, and this is codified in Optimality Theory by a corresponding constraint:

(43)

FootBinarity (McCarthy and Prince 1993)
Feet must be binary under a syllabic or moraic analysis.

At the higher prosodic levels, constituent size is largely determined by morphosyntax, as is the distribution of light and heavy constituents. However, where permitted by morphosyntax, heavy, i.e. branching, constituents are preferred over light ones (see §9).

11 Conclusion

Quantity-sensitivity is an important property of prosodic structure, evidenced at each of its levels. As we have seen, constituents at any level of the prosodic hierarchy can be classified into those that are light and those that are heavy. While quantity-sensitivity is typically associated with the syllable and the foot, all prosodic levels exhibit this property. Whether a syllable is light or heavy crucially depends on its segmental setup; quantity at the level of the foot relies on, and is largely characterized in terms of, syllable quantity; quantity-sensitivity of the prosodic word is non-distinct from that of the foot; and quantity-sensitivity at the higher prosodic levels is heavily influenced by morphosyntax. While the characterization of quantity largely depends on level-specific criteria, a general property of heavy constituents is their greater size and complexity, and often their binary structure. It is interesting, however, that preference, or dispreference, for heavy constituents varies across prosodic levels. The unmarked condition for syllables is to be light, while the unmarked condition for feet is to be heavy. The latter condition persists through the higher levels of the prosodic hierarchy. Thus, while light syllables are preferred over heavy ones, feet and prosodic words are preferably heavy. Heavy prosodic phrases are preferred as well, although in a very weak sense.

ACKNOWLEDGMENTS I am most grateful to Adam Cooper, Michael Weiss, and Kimberly Will, to an anonymous referee, and to the editors, Marc van Oostendorp and Keren Rice, for their invaluable comments and suggestions, which have led to major improvements to this chapter.


REFERENCES

Allen, W. Sidney. 1973. Accent and rhythm. Cambridge: Cambridge University Press.
Boas, Franz. 1947. Kwakiutl grammar with a glossary of the suffixes. Transactions of the American Philosophical Society: New Series 37. 201–377.
Buckley, Eugene. 1998. Alignment in Manam stress. Linguistic Inquiry 29. 475–496.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Crowhurst, Megan J. & Lev Michael. 2005. Iterative footing and prominence-driven stress in Nanti (Kampa). Language 81. 47–95.
Davis, Stuart. 1987. Coda extrametricality in English nouns. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23. 66–75.
Dawson, Willa. 1980. Tibetan phonology. Ph.D. dissertation, University of Washington.
de Lacy, Paul. 2002. The interaction of tone and stress in Optimality Theory. Phonology 19. 1–32.
de Lacy, Paul. 2004. Markedness conflation in Optimality Theory. Phonology 21. 145–199.
Downing, Laura J. 2006. Canonical forms in prosodic morphology. Oxford: Oxford University Press.
Everett, Daniel L. 1988. On metrical constituent structure in Pirahã phonology. Natural Language and Linguistic Theory 6. 207–246.
Ghini, Mirco. 1993. X-formation in Italian: A new proposal. Toronto Working Papers in Linguistics 12. 41–78.
Gordon, Matthew. 2006. Syllable weight: Phonetics, phonology, typology. London: Routledge.
Halle, Morris & William J. Idsardi. 1995. General properties of stress and metrical structure. In John A. Goldsmith (ed.) The handbook of phonological theory, 403–443. Cambridge, MA & Oxford: Blackwell.
Halle, Morris & Jean-Roger Vergnaud. 1980. Three-dimensional phonology. Journal of Linguistic Research 1. 83–105.
Halle, Morris & Jean-Roger Vergnaud. 1987. An essay on stress. Cambridge, MA: MIT Press.
Hayes, Bruce. 1980. A metrical theory of stress rules. Ph.D. dissertation, MIT.
Hayes, Bruce. 1982. Extrametricality and English stress. Linguistic Inquiry 13. 227–276.
Hayes, Bruce. 1985. Iambic and trochaic rhythm in stress rules. Proceedings of the Annual Meeting, Berkeley Linguistics Society 11. 429–446.
Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306.
Hayes, Bruce. 1994. Weight of CVC can be determined by context. In Jennifer Cole & Charles W. Kisseberth (eds.) Perspectives in phonology, 61–79. Stanford: CSLI.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hyman, Larry M. 1977. On the nature of linguistic stress. In Larry M. Hyman (ed.) Studies in stress and accent, 37–82. Los Angeles: Department of Linguistics, University of Southern California.
Hyman, Larry M. 1985. A theory of phonological weight. Dordrecht: Foris.
Jakobson, Roman. 1931. Die Betonung und ihre Rolle in der Wort- und Syntagmaphonologie. Travaux du Cercle Linguistique de Prague 2. Reprinted (1962) in Selected writings, vol. 1: Phonological studies, 117–136. The Hague: Mouton.
Jeanne, LaVerne Masayesva. 1982. Some phonological rules of Hopi. International Journal of American Linguistics 48. 245–270.
Kager, René. 1989. A metrical theory of stress and destressing in English and Dutch. Dordrecht: Foris.
Kager, René. 1993. Alternatives to the iambic-trochaic law. Natural Language and Linguistic Theory 11. 381–432.
Kelkar, Ashok R. 1968. Studies in Hindi-Urdu I: Introduction and word phonology. Poona: Deccan College.
Kenstowicz, Michael. 1993. Peak prominence stress systems and Optimality Theory. Proceedings of the 1st International Conference of Linguistics and Chosun University, 7–22. Gwangju: Foreign Culture Research Institute, Chosun University, Korea.
Kenstowicz, Michael. 1997. Quality-sensitive stress. Rivista di Linguistica 9. 157–187.
Kiparsky, Paul. 1979. Metrical structure assignment is cyclic. Linguistic Inquiry 10. 421–441.
Kiparsky, Paul. 1981. Remarks on the metrical structure of the syllable. In Wolfgang U. Dressler, Oskar E. Pfeiffer & John R. Rennison (eds.) Phonologica 1980, 245–256. Innsbruck: Innsbrucker Beiträge zur Sprachwissenschaft.
Kuryłowicz, Jerzy. 1948. Contribution à la théorie de la syllabe. Bulletin de la Société Polonaise de Linguistique 8. 80–113.
Larsen, Raymond S. & Eunice V. Pike. 1949. Huasteco intonations and phonemes. Language 25. 268–277.
Leer, Jeff. 1985. Toward a metrical interpretation of Yupik prosody. In Michael Krauss (ed.) Yupik Eskimo prosody systems: Descriptive and comparative studies, 159–172. Fairbanks: Alaska Native Language Center Research Papers.
Levin, Juliette. 1985. A metrical theory of syllabicity. Ph.D. dissertation, MIT.
Lichtenberk, Frantisek. 1983. A grammar of Manam. Honolulu: University of Hawaii Press.
McArthur, Henry & Lucille McArthur. 1956. Aguacatec (Mayan) phonemes within the stress group. International Journal of American Linguistics 22. 72–76.
McCarthy, John J. 1979. On stress and syllabification. Linguistic Inquiry 10. 443–465.
McCarthy, John J. & Alan Prince. 1986. Prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Brandeis University.
McCarthy, John J. & Alan Prince. 1990. Foot and word in prosodic morphology: The Arabic broken plural. Natural Language and Linguistic Theory 8. 209–283.
McCarthy, John J. & Alan Prince. 1993. Generalized alignment. Yearbook of Morphology 1993. 79–153.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
Mester, Armin. 1994. The quantitative trochee in Latin. Natural Language and Linguistic Theory 12. 1–61.
Miller-Ockhuizen, Amanda. 1998. Towards a unified decompositional analysis of Khoisan lexical tone. In Mathias Schladt (ed.) Language, identity, and conceptualization among the Khoisan, 217–243. Cologne: Rüdiger Köppe Verlag.
Morén, Bruce. 1999. Distinctiveness, coercion and sonority: A unified theory of weight. Ph.D. dissertation, University of Maryland at College Park.
Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Newman, Paul. 1972. Syllable weight as a phonological variable. Studies in African Linguistics 3. 301–323.
Payne, Judith. 1990. Asheninca stress patterns. In Doris L. Payne (ed.) Amazonian linguistics: Studies in lowland South American languages, 185–209. Austin: University of Texas Press.
Poppe, Nicholas N. 1960. Buriat grammar. Bloomington: Indiana University Publications.
Poser, William J. 1990. Evidence for foot structure in Japanese. Language 66. 78–105.
Prince, Alan. 1983. Relating to the grid. Linguistic Inquiry 14. 19–100.
Prince, Alan. 1990. Quantitative consequences of rhythmic organization. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26(2). 355–398.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder.
Rice, Curt. 1995. An Optimality Theoretic analysis of weight variability and variation in primary stress in English, Dutch, and Norwegian. Unpublished ms., University of Tromsø.
Rosenthall, Sam. 1994. Vowel/glide alternation in a theory of constraint interaction. Ph.D. dissertation, University of Massachusetts, Amherst.
Rosenthall, Sam & Harry van der Hulst. 1999. Weight-by-position by position. Natural Language and Linguistic Theory 17. 499–540.
Ryding, Karin C. 2005. A reference grammar of Modern Standard Arabic. Cambridge: Cambridge University Press.
Sandalo, Filomena & Hubert Truckenbrodt. 2002. Some notes on phonological phrasing in Brazilian Portuguese. Delta 19. 1–30.
Selkirk, Elisabeth. 1978. On prosodic structure and its relation to syntactic structure. In Thorstein Fretheim (ed.) Nordic prosody II, 111–140. Trondheim: Tapir.
Selkirk, Elisabeth. 1980. Prosodic domains in phonology: Sanskrit revisited. In Mark Aronoff & Mary-Louise Kean (eds.) Juncture, 107–129. Saratoga: Anma Libri.
Selkirk, Elisabeth. 2000. The interaction of constraints on prosodic phrasing. In Merle Horne (ed.) Prosody: Theory and experiment, 231–261. Dordrecht: Kluwer.
Sherer, Timothy. 1994. Prosodic phonotactics. Ph.D. dissertation, University of Massachusetts, Amherst.
Steriade, Donca. 1982. Greek prosodies and the nature of syllabification. Ph.D. dissertation, MIT.
Steriade, Donca. 1988. Review of Clements & Keyser (1983). Language 64. 118–129.
Steriade, Donca. 1990. Moras and other slots. Proceedings of the Formal Linguistics Society of Midamerica 1. 254–280.
Trubetzkoy, Nikolai S. 1939. Grundzüge der Phonologie. Göttingen: Vandenhoeck & Ruprecht.
Walker, Rachel. 1996. A third parameter for unbounded stress. Papers from the Annual Meeting of the North East Linguistic Society 26. 441–455.
Watkins, Laurel J. 1984. A grammar of Kiowa. Lincoln & London: University of Nebraska Press.
Yip, Moira. 2002. Tone. Cambridge: Cambridge University Press.
Young, Robert & William Morgan. 1987. The Navajo language: A grammar and dictionary. Albuquerque: University of New Mexico Press.
Zec, Draga. 1988. Sonority constraints on prosodic structure. Ph.D. dissertation, Stanford University.
Zec, Draga. 1995. Sonority constraints on syllable structure. Phonology 12. 85–129.
Zec, Draga. 2003. Prosodic weight. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 123–143. Cambridge: Cambridge University Press.
Zec, Draga & Sharon Inkelas. 1990. Prosodically constrained syntax. In Sharon Inkelas & Draga Zec (eds.) The phonology–syntax connection, 365–378. Chicago: University of Chicago Press.
Zhang, Jie. 2002. The effects of duration and sonority on contour tone distribution: Typological survey and formal analysis. New York & London: Routledge.

58 The Emergence of the Unmarked

Michael Becker & Kathryn Flack Potts

1 Introduction

The term “The Emergence of the Unmarked” (TETU), originally coined by McCarthy and Prince (1994), refers to situations where some marked structure is generally allowed in a language, but banned in particular contexts; the complementary unmarked structure thus “emerges.” In Nuu-chah-nulth (Wakashan, referred to by McCarthy and Prince by its former name, Nootka), for example, syllables can generally have codas; reduplicants, however, are exceptional in that codas are banned. This results in words like [či-čim.s’iːp] ‘hunting bear’ and [waː-waːs.čix] ‘naming where’, in which unmarked (codaless) syllables emerge in reduplicants despite the presence of marked codas in bases.

TETU effects came to prominence in phonological theory with the advent of Optimality Theory (OT; Prince and Smolensky 1993). In OT terms, these effects typically follow from rankings like (1), where a markedness constraint M is dominated by a faithfulness constraint F1, which blocks M's activity in some, though crucially not all, contexts. M is free to become active in contexts where F1 isn't relevant; here, M can motivate violation of still lower-ranked faithfulness constraints (F2).

(1)

F1 >> M >> F2

The Nuu-chah-nulth pattern described above results from a ranking of this type, as shown in (2) and (3). The markedness constraint NoCoda is dominated by the anti-deletion constraint IO-Max; this ranking protects underlying codas from deletion, eliminating the unmarked, codaless candidate (2b). Since reduplicants are assumed not to stand in correspondence with inputs, however (chapter 100: reduplication), high-ranking IO-Max is irrelevant in their evaluation.1 Because NoCoda dominates BR-Max, the emergence of unmarked CV syllables is permitted in reduplicants. Concretely, candidates (3a) and (3b) are identical, except that the reduplicant in (3b) contains a copy of the coda of the root-initial syllable, while the reduplicant in (3a) doesn't. Because NoCoda dominates BR-Max, the additional NoCoda violation in (3b) rules out this candidate in favor of the less marked (3a).

1 Correspondence between input and output candidates is evaluated by input–output (IO) faithfulness constraints. Reduplicants stand in correspondence relationships with the output forms of their bases, and are evaluated by base–reduplicant (BR) faithfulness constraints (McCarthy and Prince 1999). Faithfulness constraints in this chapter assess IO correspondence, unless otherwise noted.

(2)

        /čims-’iːp/          IO-Max   NoCoda   BR-Max
   ☞ a. čim.s’iːp                      **
     b. či.s’iː               **!

(3)
        /red-čims-’iːp/      IO-Max   NoCoda   BR-Max
   ☞ a. či.čim.s’iːp                   **       ****
     b. čim.čim.s’iːp                  ***!     ***
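Nothing beyond ranked comparison is at work in (2) and (3), which makes the selection easy to state computationally. The sketch below is our own illustration, not McCarthy and Prince's: candidates are reduced to precounted violations rather than generated by Gen, and a candidate wins if its violation profile, read through the ranking, is lexicographically smallest.

    def optimal(candidates, ranking):
        """OT evaluation: the winner has the lexicographically smallest
        profile of violation counts, read through the constraint ranking."""
        return min(candidates, key=lambda c: [con(c) for con in ranking])

    # Tableau (3), with candidates reduced to precounted violations
    # (codas in the output, base segments missing from the reduplicant):
    no_coda = lambda c: c['codas']
    br_max  = lambda c: c['br_missing']
    io_max  = lambda c: 0   # both candidates preserve all input segments

    cand_a = {'name': 'či.čim.s’iːp',  'codas': 2, 'br_missing': 4}
    cand_b = {'name': 'čim.čim.s’iːp', 'codas': 3, 'br_missing': 3}

    print(optimal([cand_a, cand_b], [io_max, no_coda, br_max])['name'])
    # -> či.čim.s’iːp: NoCoda decides, although it is dominated

The dominated constraint NoCoda is decisive exactly because the higher-ranked constraint ties, which is the "activity despite domination" schema discussed below.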

Increasing attention to TETU effects was a natural result of inquiry into Optimality Theory. As McCarthy and Prince note, TETU is a direct result of two fundamental properties of OT. First, OT is a theory of ranked, violable constraints. Constraints are frequently active in a language even if they are not always satisfied; this is at the heart of TETU effects, which occur when a markedness constraint is dominated but still active. They observe that this “sharply differentiates OT from approaches to linguistic structure and interlinguistic variation based on parameters, rules, or other devices that see linguistic principles in globally all-or-nothing terms” (1994: 363–364).2 Second, distinctions between marked and unmarked structures are fundamental to OT, allowing the existence and emergence of unmarkedness to be formally defined. As McCarthy and Prince explain, “OT (Prince and Smolensky 1993) offers an approach to linguistic theory that aims to combine an empirically adequate theory of markedness with a precise formal sense of what it means to be ‘unmarked’ ” (1994: 333). At the heart of OT are two basic constraint types: those demanding identity, typically between inputs and outputs (faithfulness), and those penalizing particular output structures (markedness) (chapter 63: markedness and faithfulness constraints; see also chapter 4: markedness). Marked structures are defined as exactly those structures which violate a markedness constraint.3 “Emergence” can be defined with similar precision, again by reference to basic properties of OT: an unmarked structure can be said to emerge in a language if the markedness constraint violated by that structure is dominated by some (typically faithfulness) constraint which blocks its activity in some, but not all, contexts in that language. §2 elaborates on this basic understanding of TETU as “activity despite domination,” surveying three types of cases in which a dominating constraint is inactive in a particular evaluation, allowing a lower-ranked markedness constraint to emerge. §3 then describes gradient TETU effects found in languages where the emergent markedness constraint is never categorically active. Finally, §4 compares true TETU effects with situations where faithfulness, rather than markedness, constraints are active despite domination and thus emergent.

2 This view is elaborated in McCarthy (2002: 129–134), where it is noted that theories with ordered rules can mimic some TETU effects with the application of default rules.

3 More precisely, structures which violate some markedness constraint M1 are marked with respect to M1; if these structures do not violate some other markedness constraint M2, no conflict arises in saying that they are also unmarked with respect to M2. In OT, markedness is multidimensional, assessed by each markedness constraint individually.

2 TETU typology

The typical TETU ranking is F1 >> M >> F2, with M emerging in evaluations where F1 is not decisive. This section will discuss three subclasses of TETU rankings, following from three different contexts in which high-ranking F1 may be rendered inactive. §2.1 looks at output segments and structures which have no input correspondents and so are invisible to IO-faithfulness constraints; these include reduplicants, epenthetic segments, and syllable boundaries. §2.2 considers evaluations in which multiple candidates tie on a particular high-ranking constraint, and §2.3 surveys faithfulness constraints which evaluate only some positions or aspects of outputs while ignoring others. In each of these situations, a high-ranking constraint is inactive and a dominated markedness constraint becomes active, choosing the winning output.

2.1 Output segments and structures without input correspondents

TETU is commonly observed in output structures which lack input correspondents and thus cannot be evaluated by IO-faithfulness. Recall the Nuu-chah-nulth ranking in (2) and (3), of the form IO-F >> M >> BR-F. Because reduplicants have no input correspondents in this theory, they cannot be evaluated by IO-faithfulness, allowing the effects of M (NoCoda in Nuu-chah-nulth) to emerge. This section describes similar TETU patterns found in two other structures which are present in outputs but not inputs: epenthetic segments and syllable boundaries.

2.1.1 Epenthesis

Kager (1999) observes that markedness constraints which are generally freely violated in a language often determine the quality of epenthetic segments (chapter 67: vowel epenthesis). These segments are typically featurally unmarked; epenthetic vowels like [i], [ɨ], and [ə], and consonants like [ʔ], [h], and glides, are cross-linguistically common, while marked segments like [f] and [æ] are rarely epenthesized.4 This is due to TETU rankings like IO-Ident >> M, where M is a featural markedness constraint. When a constraint demanding identity between input and output features outranks markedness (here, IO-Ident >> M), the latter has little power to ban marked features in the language as a whole. While the presence of an epenthetic segment violates the anti-epenthesis constraint Dep, its lack of an input correspondent means that it is invisible to high-ranked IO-Ident; thus, epenthetic segments are subject to markedness constraints which require them to have unmarked feature values.

4 See Vaux (2002, 2008) for a survey of epenthetic segments and a diachronic perspective. See also Steriade (2001, 2009) for the view that epenthetic segments are chosen by faithfulness constraints minimizing the perceptual distance between representations with and without the epenthetic segment.


2.1.2 Syllable structure

Not every aspect of linguistic outputs is evaluated by faithfulness constraints; some output properties, like prosodic structure above the mora level, are generally taken to be governed by markedness constraints only (chapter 33: syllable-internal structure). In a language like Timugon Murut (Austronesian), where Dep >> Onset as in (4) and (5) (McCarthy and Prince 1994), the dominated markedness constraint Onset emerges to make decisions in cases where Dep cannot distinguish between candidates. Dep's high ranking results in a language where epenthesis never occurs in order to avoid onsetless syllables, thus allowing words like [ambiˈluo] ‘soul’ in (4). Dep (and similarly Max, Ident, etc.) cannot, however, distinguish between the candidates in (5), which differ only in syllabification. Because faithfulness constraints cannot see these differences, the decision is handed down to the emergent markedness constraint Onset.

(4)

        /ambiˈluo/           Dep    Onset
   ☞ a. am.bi.ˈlu.o                  **
     b. ʔam.bi.ˈlu.ʔo        **!

(5)
        /ambiˈluo/           Dep    Onset
   ☞ a. am.bi.ˈlu.o                  **
     b. am.bil.ˈu.o                  ***!

Cross-linguistically, the markedness constraint Onset commonly triggers epenthesis, deletion, and other changes to prevent onsetless syllables. But its effects can also emerge even in languages like Timugon Murut, where Onset is crucially dominated and so cannot require all syllables to have onsets; here, Onset nonetheless requires syllabification of available consonants as onsets rather than codas. This contrasts with a parameter-based view of phonology, where onsetless syllables are present only when the Onset parameter is “off,” and thus cannot affect syllabification in any way.
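The tie-breaking role of dominated Onset can be isolated in a few lines. The following sketch is our own simplified illustration (vowels and candidate parses are hard-coded): the two parses of Timugon Murut /ambiluo/ are string-identical, so every faithfulness constraint ties, and the onset count alone decides.

    VOWELS = set('aeiou')

    def onset_violations(parse):
        """One mark per vowel-initial (onsetless) syllable."""
        return sum(1 for syl in parse if syl[0] in VOWELS)

    # Two parses of /ambiluo/: segmentally identical, so Dep, Max, and
    # Ident all tie, and dominated Onset decides.
    parse_a = ['am', 'bi', 'lu', 'o']    # 2 onsetless syllables
    parse_b = ['am', 'bil', 'u', 'o']    # 3 onsetless syllables
    print(min([parse_a, parse_b], key=onset_violations))   # am.bi.lu.o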

2.2 Output candidates not distinguished by dominating constraints

Unmarkedness can also emerge when multiple output candidates are evaluated identically by all constraints dominating the emergent markedness constraint. This section discusses allomorph selection, which has been traditionally analyzed as a TETU effect of this sort within OT, as well as a similar example from the syntax–phonology interface.

2.2.1 Allomorphy

Mascaró (2004) observes that when a morpheme has multiple underlying forms, Gen supplies candidates that vary in the forms they correspond to (chapter 99: phonologically conditioned allomorph selection). In cases like English a/an, where the indefinite article has two lexically listed allomorphs, some members of the candidate set stand in correspondence with underlying a, while others stand in correspondence with underlying an. For this reason, the two output candidates shown in (6), a wug and *an wug, tie on all high-ranked IO-faithfulness constraints. While the ranking of faithfulness constraints (here, simply Faith) above NoCoda generally permits codas throughout English, NoCoda nevertheless emerges as decisive here, ruling out *an wug in this unique case where multiple possible outputs are equally faithful to their respective inputs.5

(6)
        {a, an} /wʌg/        Faith    NoCoda
   ☞ a. ə.wʌg                          *
     b. ən.wʌg                         **!

5 The ranking Faith >> Onset similarly chooses [ən.ʌg] over *[ə.ʌg].
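Because both candidates in (6) are fully faithful to one of the listed allomorphs, selection reduces to comparing markedness across the listed options. A toy version of this (our own; candidates are pre-syllabified strings, and only NoCoda is consulted):

    VOWELS = set('aeiouəʌ')

    def no_coda(parse):
        """One mark per closed syllable in the candidate parse."""
        return sum(1 for syl in parse if syl[-1] not in VOWELS)

    def pick_listed_allomorph(candidate_parses):
        """Each candidate is fully faithful to one listed allomorph, so
        faithfulness ties and the dominated markedness constraint chooses."""
        return min(candidate_parses, key=no_coda)

    print(pick_listed_allomorph([['ə', 'wʌg'], ['ən', 'wʌg']]))  # ə.wʌg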

A more complex TETU analysis of lexically specific allomorph selection is offered in Becker's (2009) discussion of the Turkish aorist (Lees 1961; Nakipoğlu and Ketrez 2006). The aorist suffix has two allomorphs: /-Ir/, with a high vowel, is used after all polysyllabic roots; /-Er/, with a non-high vowel, is used after all monosyllabic obstruent-final roots (the backness and height of these vowels are determined by vowel harmony; chapter 118: turkish vowel harmony). Monosyllabic sonorant-final roots allow lexical exceptions: some take /-Ir/, while others take /-Er/.

(7)  shape of stem                  affix
     polysyllabic                   -Ir    [gereˈk-ir] ‘need’    [čalɯˈʃ-ɯr] ‘work’
     obstruent-final monosyllabic   -Er    [saˈt-ar] ‘sell’      [øˈp-er] ‘kiss’
     {r l n}-final monosyllabic     -Ir    [kaˈl-ɯr] ‘stay’      [gøˈr-yr] ‘see’
     {r l n}-final monosyllabic     -Er    [daˈl-ar] ‘dive’      [øˈr-er] ‘knit’

Turkish vowels are typically faithful to their underlying height specification, both in roots and in affixes; for example, the affix /-E/ (dative) (e.g. [jeˈre] ‘to the place’) contrasts with the affix /-I/ (3sg poss) (e.g. [jeˈri] ‘his/her place’). This indicates that Ident[high] outranks both of the markedness constraints in (8). When two allomorphs are available to choose from, however, as in these aorist examples, Ident[high] is satisfied regardless of the choice of allomorph; the markedness constraints can thus emerge as decisive. (9) illustrates how *ˈσ/high consistently selects the /-Er/ allomorph in monosyllabic obstruent-final roots.

(8) a. *ˈσ/high
       No stressed high vowels.
    b. *RER
       No non-high vowels between sonorants.

(9)
        /sat-{-Er, -Ir}/     Ident[high]   *ˈσ/high   *RER
   ☞ a. saˈt-ar
     b. saˈt-ɯr                             *!


The situation is more complex in the monosyllabic sonorant-final roots, which don't behave uniformly. Some of these occur with /-Er/, violating *RER, as shown in (10), while others occur with /-Ir/, violating *ˈσ/high, as in (11). Becker argues that sonorant-final monosyllabic roots are linked to lexically specified constraint rankings: for /-Er/-selecting roots like /dal/, *ˈσ/high >> *RER, while the opposite ranking holds for /-Ir/-selecting roots like /kal/. The overall pattern is one where each markedness constraint is emergent for a particular class of roots. See Becker (2009) for further details of the analysis, including the treatment of polysyllables and mechanisms for learning both affix URs and lexically specific rankings.

(10)

        /dal-{-Er, -Ir}/     Ident[high]   *ˈσ/high   *RER
   ☞ a. daˈl-ar                                        *
     b. daˈl-ɯr                             *!

(11)
        /kal-{-Er, -Ir}/     Ident[high]   *RER   *ˈσ/high
     a. kaˈl-ar                             *!
   ☞ b. kaˈl-ɯr                                     *
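Lexically specific rankings of this kind can be modeled directly by indexing a constraint order to each root. The sketch below is our own schematic rendering of the core idea, holding vowel harmony and Ident[high] constant and considering only the two aorist candidates:

    HIGH = set('iɯyu')
    SONORANTS = set('rln')

    def star_stressed_high(c):   # *ˈσ/high (stress is suffix-final here)
        return 1 if c['suffix_vowel'] in HIGH else 0

    def star_rer(c):             # *RER: non-high vowel flanked by sonorants
        return 1 if (c['suffix_vowel'] not in HIGH
                     and c['root_final'] in SONORANTS) else 0

    # Lexically specified rankings for sonorant-final monosyllables:
    RANKING = {'dal': [star_stressed_high, star_rer],   # selects -Er
               'kal': [star_rer, star_stressed_high]}   # selects -Ir

    def aorist(root):
        cands = [{'out': root + '-ar', 'suffix_vowel': 'a', 'root_final': root[-1]},
                 {'out': root + '-ɯr', 'suffix_vowel': 'ɯ', 'root_final': root[-1]}]
        return min(cands, key=lambda c: [con(c) for con in RANKING[root]])['out']

    print(aorist('dal'), aorist('kal'))   # dal-ar kal-ɯr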

This TETU analysis of the Turkish aorist accounts for the fact that the lexically specific distribution of this affix is limited to sonorant-final roots. Since *RER is ranked below Ident[high], its effect can only be observed when a root contributes one sonorant and one of two lexically listed allomorphs (here, of the aorist affix) contributes the other. This contrasts with a diacritic-based approach to exceptionality; since such an approach isn't based on markedness constraints, it runs the risk of missing phonological restrictions on the distribution of exceptions. See Gouskova (2010) for further arguments in favor of a grammar-based approach to exceptionality (also chapter 106: exceptionality).

Rankings in each of these allomorphy examples take the form F >> M, where F cannot distinguish between candidates containing different allomorphs. Because multiple candidates are equally faithful, satisfying M does not require violating a lower-ranked F2, as is required in the prototypical TETU cases discussed in §1. The Turkish example shows that satisfying an emergent markedness constraint can also require violating a lower-ranked markedness constraint, in a ranking like F >> M1 >> M2. This occurs because markedness constraints can conflict with each other, as well as with faithfulness constraints. The following discussion of phonological phrasing and the syntax–phonology interface carries this observation further, demonstrating that markedness constraints can also emerge in contexts where a higher-ranked, conflicting markedness constraint is inactive.

2.2.2 Phonological phrasing

Because faithfulness constraints do not evaluate prosodic structure, analyses of phonological phrasing are generally based on rankings of conflicting markedness constraints. While most familiar TETU rankings involve domination by a conflicting faithfulness constraint, the dominating constraint may also be a second markedness constraint. That is, dominated M2 may also emerge in a ranking like (12).

(12) M1 >> M2

Truckenbrodt (1999) proposes an analysis of phonological phrasing based only on markedness constraints; in contexts where high-ranking markedness constraints are rendered inactive, lower-ranking markedness constraints emerge. In Chewa (Bantu, referred to by Truckenbrodt as Chicheŵa), a complex VP like [V NP NP]VP is produced as a single phonological phrase, (V NP NP)PhP, rather than *(V NP)PhP (NP)PhP, as in (13). The large phrase satisfies Wrap-XP, a constraint that penalizes any syntactic phrase whose elements are parsed into smaller phonological phrases. Other dominated markedness constraints express conflicting preferences for smaller phonological phrases: Align-XP demands alignment of the right edge of each syntactic phrase with the right edge of a corresponding phonological phrase. The winner in (13) incurs a violation due to the first NP, which has no phrase break at its right edge. The ranking Wrap-XP >> Align-XP generally thwarts Align-XP's desire for additional phonological phrases. Align-XP's effects emerge, however, under focus. Align-Foc requires focused verbs to fall at the end of phonological phrases; the ranking Align-Foc >> Wrap-XP rules out candidate (14a). Candidates (14b) and (14c) both satisfy Align-Foc, and both violate Wrap-XP, rendering Wrap-XP inactive as well in selecting the optimal output. Because these candidates tie on high-ranking constraints, we again see a TETU effect: Align-XP emerges, selecting the unmarked candidate, (14c).

(13)

        /[V NP NP]VP/                  Align-Foc   Wrap-XP   Align-XP
   ☞ a. (V NP NP)PhP                                           *
     b. (V NP)PhP (NP)PhP                           *!

(14)
        /[VFOC NP NP]VP/               Align-Foc   Wrap-XP   Align-XP
     a. (VFOC NP NP)PhP                  *!                    *
     b. (VFOC)PhP (NP NP)PhP                         *         *!
   ☞ c. (VFOC)PhP (NP)PhP (NP)PhP                    *

Truckenbrodt notes that this analysis of Chewa is particularly interesting, due to the non-local nature of the TETU effect: the appearance of an (unmarked) prosodic break after the focused verb causes another break to appear after a subsequent non-focused noun phrase.

2.3 Output segments not evaluated by specific faithfulness

This final subsection discusses situations where general IO-faithfulness is low-ranked, and the emerging markedness constraint is instead dominated by a different type of faithfulness constraint. In other words, these are TETU rankings of the type Special-F >> M >> General-F. We discuss three kinds of faithfulness that can outrank general IO-faithfulness: positional faithfulness, which protects strong positions inside a candidate; output–output faithfulness, which protects the base in a morphologically complex form; and UseListed, which protects correspondents of existing forms in a speaker's lexicon.


2.3.1 Positional faithfulness

Beckman (1999) examines patterns where contrasts are licensed only in strong positions like initial syllables, stressed syllables, and onsets. She analyzes these using positional faithfulness constraints, which assess correspondence only for segments in particular output positions (here, onsets). Catalan (Romance) allows contrastive voicing in onsets, but bans voiced obstruents in codas (chapter 69: final devoicing and final laryngeal neutralization). Beckman accounts for this with the ranking shown in (15)–(17). Underlyingly voiced coda obstruents are devoiced in surface forms, due to the ranking *VoiObs >> Ident[voice], as in (15). In onsets, however, underlying voicing surfaces faithfully, due to the high-ranking positional faithfulness constraint Ident[voice]/Onset, as in (16)–(17). Here the markedness constraint *VoiObs is dominated, yet active in non-onset contexts, making this a TETU effect.

(15)

        /griz/ ‘gray (masc)’    Ident[voice]/Onset   *VoiObs   Ident[voice]
     a. ˈgriz                                          **!
   ☞ b. ˈgris                                          *          *

(16)
        /gos-a/ ‘dog (fem)’     Ident[voice]/Onset   *VoiObs   Ident[voice]
     a. ˈgo.zə                         *!              **         *
   ☞ b. ˈgo.sə                                         *

(17)
        /griz-a/ ‘gray (fem)’   Ident[voice]/Onset   *VoiObs   Ident[voice]
   ☞ a. ˈgri.zə                                        **
     b. ˈgri.sə                        *!              *          *

9

from high-ranking positional faithfulness, however, the reversed ranking M >> IO-F can result in a language which is largely unmarked; in these languages, marked structures are restricted to the specific set of contexts protected by the positional faithfulness constraint. When this set of markedness licensing contexts is small, as for Faith/q1 or Faith/’q (faithfulness to word-initial and stressed syllables, respectively), unmarked structures are required in the majority of contexts: the set of positions in which markedness may occur is atypically smaller than those where unmarkedness is required.6 This distributional pattern will be discussed further in §4.

2.3.2 Output–output faithfulness Another family of constraints which evaluates only some outputs and so gives rise to TETU effects is output–output (OO) faithfulness (Benua 1997). OO-faithfulness constraints evaluate correspondence between the base of a morphologically complex word and that base’s stand-alone surface form. Harris (1990) discusses examples of Aitken’s Law in dialects of the Central Scottish Lowlands. Here, stressed vowels in roots have predictable length: when followed by any consonant other than /r v Ï z/, vowels are short; otherwise, they are long. /> Z / are exceptions, remaining short in all positions. (See also chapter 20: the representation of vowel length.) For example, stop-final feed has a short vowel, while the open syllable key has a long vowel. The past tense keyed, however, keeps the long vowel which is present in its base key, despite its final stop coda. This can be attributed to protection from high-ranking OO-Faith, as described below (Benua 1997; McCarthy 2002). Because OO-faithfulness constraints target only a subset of a language’s output forms – those which are morphologically complex – they can give rise to TETU effects. A ranking like OO-F >> M >> IO-F operates much like the positional faithfulness TETU ranking discussed above. The markedness constraint *V(C] (“no long vowels in syllables closed by any consonant other than /r v Ï z/”) dominates IO-faithfulness; tableau (18) shows that this results in a language which is typically unmarked: long vowels are absent from closed syllables. Long vowels appear in open syllables, as in (19); because OO-Ident(length) >> *V(C], long vowels also appear in closed syllables in morphologically complex forms derived from roots with long vowels, as in keyed [ki(d] (cf. key [ki(]) in (20). (18)

(18)
     /fiːd/        OO-Id(length)   *VːC]   IO-Id(length)
   ☞ a. fid                                 *
     b. fiːd                        *!

(19)
     /kiː/         OO-Id(length)   *VːC]   IO-Id(length)
     a. ki                                  *!
   ☞ b. kiː

6 Positional faithfulness TETU rankings can also result in languages where, as is more typical of TETU, unmarkedness is the less frequent pattern; this occurs when the positional faithfulness constraint targets a broad set of positions, e.g. Faith/Root.

(20)
     /kiː-d/       OO-Id(length)   *VːC]   IO-Id(length)
     a. kid         *!                      *
   ☞ b. kiːd                        *

Here, again, a markedness constraint is active in the language despite its domination by (here, OO) faithfulness. Similar TETU effects are possible in other theories that use faithfulness relations between members of a paradigm, such as McCarthy’s (2005) Optimal Paradigms.
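To make the logic of this ranking schema concrete, the following sketch evaluates the candidates of tableaux (18) and (20) under strict domination. It is a minimal illustration, not an implementation from the literature: the violation counts are hand-coded from the tableaux above, and the function names are invented for this example.

    # Strict-ranking evaluation over hand-coded violation profiles,
    # reproducing tableaux (18) and (20); list order = constraint ranking.
    RANKING = ["OO-Id(length)", "*VːC]", "IO-Id(length)"]

    def winner(tableau):
        # Compare violation vectors lexicographically: one violation of a
        # higher-ranked constraint outweighs any number of lower violations.
        return min(tableau, key=lambda cand: [tableau[cand][c] for c in RANKING])

    # (18) /fiːd/: monomorphemic, so OO-faith is vacuous and *VːC] emerges
    t18 = {"fid":  {"OO-Id(length)": 0, "*VːC]": 0, "IO-Id(length)": 1},
           "fiːd": {"OO-Id(length)": 0, "*VːC]": 1, "IO-Id(length)": 0}}

    # (20) /kiː-d/: OO-faith to the free-standing base [kiː] protects length
    t20 = {"kid":  {"OO-Id(length)": 1, "*VːC]": 0, "IO-Id(length)": 1},
           "kiːd": {"OO-Id(length)": 0, "*VːC]": 1, "IO-Id(length)": 0}}

    print(winner(t18))  # fid  -- the emergent unmarked form
    print(winner(t20))  # kiːd -- faithfulness to the base wins

The same evaluator derives tableau (19) if its candidates are entered analogously; only the relative order of the constraints, never any weighting, determines the winner.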

2.3.3 UseListed

Zuraw (2000) proposes a novel kind of faithfulness constraint, UseListed, which protects items that are listed in a speaker’s lexicon. Listed items include all roots and all morphologically complex forms that a particular speaker has heard, with more frequent items assumed to be more strongly listed. In producing a previously heard, morphologically complex form, the speaker has two options: they could use either the lexically listed forms of the root and affixes as inputs to the grammar, or they could instead use the single lexically listed complex form (again, as input to the grammar). Zuraw proposes that these two possible input structures compete in a single evaluation, with UseListed penalizing outputs derived from productive combinations of morphemes. For novel roots and novel complex forms (i.e. novel combinations of roots and affixes, even if a speaker is familiar with each morpheme in other contexts), however, no lexical listing is available. Thus outputs based on any of these forms will violate UseListed equally. Markedness constraints ranked below UseListed can therefore emerge in evaluations of unfamiliar items, as in Hayes and Londe’s (2006) analysis of Hungarian vowel harmony. The Hungarian dative appears with a back vowel when the root’s final syllable has a back vowel ([glykoːz-nɒk] ‘glucose-dat’), and it appears with a front vowel when the root’s final syllable has a front rounded vowel ([ʃoføːr-nek] ‘chauffeur-dat’) (chapter 123: hungarian vowel harmony). When the root’s final syllable has a front unrounded vowel, some items take a front suffix ([tsiːm-nek] ‘address-dat’) and others take a back suffix ([hiːd-nɒk] ‘bridge-dat’). Taking a back suffix is especially likely when the final front unrounded vowel is preceded by a back vowel ([aːtseːl-nɒk] ‘steel-dat’). Here, the relevant markedness constraints will be Local[eː], which penalizes back vowels in the syllable immediately following an [eː], and Distal[back], which penalizes front vowels in any syllable following a back vowel. Hungarian speakers agree on the dative forms of familiar (lexically listed) items such as [aːtseːl-nɒk]. UseListed is decisive in these cases, preferring the listed form over productive combinations of the root and the suffix, and thus rendering lower-ranked markedness constraints on vowel harmony inactive. The two candidates (21a) and (21b) are generated from the listed form [aːtseːl-nɒk], and thus satisfy UseListed (despite the unfaithful surface form of this input in (21b)). The second two candidates are generated productively by combining the root /aːtseːl/ with the dative suffix, and are thus ruled out by UseListed.

(21)
     /aːtseːl-{nek, nɒk}/, listed: [aːtseːl-nɒk]

                                                 UseListed   Ident[back]   Local[eː]   Distal[back]
   ☞ a. /aːtseːl-nɒk/ → aːtseːl-nɒk                                        *           *
     b. /aːtseːl-nɒk/ → aːtseːl-nek                          *!                        **
     c. /aːtseːl-{nek, nɒk}/ → aːtseːl-nɒk       *!                        *           *
     d. /aːtseːl-{nek, nɒk}/ → aːtseːl-nek       *!                                    **

But when Hungarian speakers hear a novel root containing a back vowel followed by a front unrounded vowel, e.g. [haːdeːl], the suffix vowel in the dative forms can agree with either root vowel: some speakers prefer [haːdeːl-nɒk], as in (22a), while others prefer [haːdeːl-nek], as in (22b). Both candidates in (22) violate UseListed, since no lexical listing exists for this novel item, and thus the dative form must be derived productively by combining the root /haːdeːl/ with the dative suffix.

(22)
     /haːdeːl-{nek, nɒk}/, listed: [ ]

                                                 UseListed   Ident[back]   Local[eː]   Distal[back]
   ☞ a. /haːdeːl-{nek, nɒk}/ → haːdeːl-nɒk       *                         *           *
   ☞ b. /haːdeːl-{nek, nɒk}/ → haːdeːl-nek       *                                     **

Hayes and Londe argue that a particular speaker’s actual output depends on a stochastic ranking between the two competing markedness constraints on vowel harmony, Local[eː] and Distal[back]. Crucially, as in the Turkish example in §2.2.1, one of these two low-ranked markedness constraints emerges; here, this occurs when dominating UseListed cannot distinguish between candidate outputs for a novel input.
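The variation in (22) can be simulated with a small stochastic-ranking sketch in the style of noisy-evaluation models. The ranking values and noise level below are invented for illustration and are not Hayes and Londe’s fitted grammar; UseListed is omitted because, for a novel root, it penalizes both candidates equally and so cannot decide between them.

    import random

    MU = {"Local[eː]": 100.0, "Distal[back]": 99.0}   # hypothetical ranking values
    NOISE_SD = 2.0                                     # evaluation noise

    # Violation profiles for /haːdeːl-{nek, nɒk}/, as in tableau (22)
    CANDIDATES = {"haːdeːl-nɒk": {"Local[eː]": 1, "Distal[back]": 1},
                  "haːdeːl-nek": {"Local[eː]": 0, "Distal[back]": 2}}

    def one_evaluation():
        # Perturb each ranking value with noise, sort into a strict ranking,
        # then pick the candidate with the lexicographically minimal violations.
        noisy = {c: MU[c] + random.gauss(0, NOISE_SD) for c in MU}
        order = sorted(MU, key=noisy.get, reverse=True)
        return min(CANDIDATES, key=lambda cand: [CANDIDATES[cand][c] for c in order])

    counts = {c: 0 for c in CANDIDATES}
    for _ in range(10000):
        counts[one_evaluation()] += 1
    print(counts)   # both datives occur; rates reflect the ranking-value gap

On evaluations where Local[eː] outranks Distal[back], -nek wins; on the reverse ranking, -nɒk wins, so the gap between the two ranking values fixes the rate of variation.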

3 Gradient TETU

The previous sections have surveyed various ways in which high-ranking constraints can be rendered irrelevant in particular evaluations, allowing lower-ranked markedness constraints to emerge. Of course, not all markedness constraints which are dominated in a particular language emerge; many are ranked too low to ever be active in choosing a winning surface form. Recent work suggests, however, that subtle TETU effects can be identified even for markedness constraints which never distinguish between grammatical and ungrammatical forms. Consonants in Arabic roots are subject to various co-occurrence restrictions (chapter 86: morpheme structure constraints). Among grammatical consonant combinations, preferences for particular combinations are found: some are much more frequent than others in the lexicon, and novel words conforming to the more frequent patterns are judged as more well-formed in rating tasks (chapter 90: frequency effects). Coetzee and Pater’s (2008) analysis of these preferences among grammatical forms casts them as gradient TETU effects, where markedness constraints which are never categorically obeyed nevertheless exert subtle preferences for unmarked forms (chapter 89: gradience and categoricality in phonological theory).


Rankings giving rise to these gradient effects are illustrated in (23) and (24). Roots including both coronal stops and fricatives (e.g. /dasar/ ‘to push’, represented here as TS) and those containing coronal stops and sonorants (e.g. /dalaq/ ‘to spill’, represented here as TL) both surface faithfully in Arabic, although these combinations are underrepresented, i.e. they are attested less often than expected, given the overall frequency of each type of consonant. In the lexicon, however, TS roots are more severely underrepresented than TL roots, suggesting that TL roots are in some sense more easily tolerated. Coetzee and Pater argue that both combinations violate a constraint against roots with two coronals (*TT), while only the dispreferred TS roots violate an additional constraint against roots with two coronals of similar sonority (*TT[son]).

(23)
     /TS/        Ident(place)   *TT[son]   *TT
   ☞ a. TS                       *          *
     b. PS       *!

(24)
     /TL/        Ident(place)   *TT[son]   *TT
   ☞ a. TL                                  *
     b. PL       *!

Here, no markedness constraint is ranked highly enough to ban TS or TL outputs. These consonant combinations are protected by faithfulness, and are thus attested and grammatical, but they are not judged by speakers to be quite as well formed as roots that lack OCP violations. TS’s additional violation of *TT[son] contributes to its decreased acceptability relative to TL, as observed in the results of word-likeness tasks and similar psycholinguistic experiments. In other words, the markedness constraint *TT[son] is active in Arabic even though it is crucially dominated by Ident(place). This activity is evidenced by the underattestation of actual TS roots and the decreased acceptability of novel TS roots, even though it doesn’t force unfaithful mappings. The incorporation of gradient generalizations into the grammar can also be used to identify relative rankings of undominated markedness constraints, i.e. the opposite of gradient TETU. If neither of two markedness constraints is ever crucially dominated by a conflicting constraint in some language, the relative ranking of these constraints cannot be determined from either categorical phonotactics or paradigmatic information. This approach, however, allows evidence for their relative ranking to come from gradient phonotactics and psycholinguistic data. Coetzee (2009) compares the grammaticality of English homorganic stops after [s], noting that coronals are attested, as in state, but labials and dorsals are not, as in *skake or *spape (see also Davis 1984, 1991; Frisch 1996; Frisch et al. 2004). Coetzee’s psycholinguistic experiments show that speakers rate *spape as less acceptable than *skake, and both are less acceptable than state. He uses this result to propose that while *spVp and *skVk are both undominated in English, the constraint penalizing *spVp is more highly ranked than the constraint penalizing *skVk. This view is also supported by the existence of words that come close to violating *skVk, such as skag, skulk, or squeak, compared with the non-existence of *spab, *spulp, or *spweep.
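Coetzee and Pater implement such gradient activity with numerically weighted constraints. The sketch below illustrates the general idea with invented weights (not the fitted values from their study), scoring schematic roots by summed weighted violations so that all three root types are grammatical but differ in well-formedness.

    WEIGHTS = {"*TT[son]": 2.0, "*TT": 1.0}   # hypothetical weights

    def violations(root):
        # Schematic segments: T = coronal stop, S = coronal fricative,
        # L = coronal sonorant, P/K = non-coronal consonants.
        coronals = [c for c in root if c in "TSL"]
        v = {"*TT": 0, "*TT[son]": 0}
        if len(coronals) >= 2:
            v["*TT"] = 1
            # similar sonority: two coronal obstruents, or two coronal sonorants
            if all(c in "TS" for c in coronals) or all(c == "L" for c in coronals):
                v["*TT[son]"] = 1
        return v

    def harmony(root):
        # Higher (less negative) harmony = more acceptable
        return -sum(WEIGHTS[c] * n for c, n in violations(root).items())

    for root in ["TKP", "TLK", "TSK"]:
        print(root, harmony(root))
    # TKP 0.0 > TLK -1.0 > TSK -3.0: the gradient cline described above

Because faithfulness is assumed to outweigh both OCP constraints, every root surfaces intact; the dominated constraints only grade acceptability, which is the gradient TETU effect.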

4 The emergence of the faithful

As mentioned in §2.3.1, there is an important distinction between the formal definition of TETU and the most intuitive surface-oriented descriptions of these patterns. McCarthy and Prince (1994: 334) define TETU as follows:

     Even in languages where [some markedness constraint] C is crucially dominated and therefore violated, the effects of C can still be observed under conditions where the dominating constraint is not relevant. . . . this [is] “emergence of the unmarked.”

The same passage describes the typical surface pattern that results from constraint activity despite domination:

     in the language as a whole, C may be roundly violated, but in a particular domain it is obeyed exactly. In that particular domain, the structure unmarked with respect to C emerges.

Patterns where high-ranking positional faithfulness constraints allow unmarkedness to emerge (e.g. Ident[voice]/Onset >> *VoiObs >> Ident[voice]) demonstrate that activity-despite-domination rankings can also give rise to a converse pattern: a markedness constraint may in fact be obeyed in the language as a whole, but violated in a particular domain. In these cases, the structure unmarked with respect to the markedness constraint emerges in the language as a whole, despite its ungrammaticality in a particular domain. The lack of a necessary connection between a markedness constraint’s activity despite domination and the relative rarity of the resulting unmarkedness is also illustrated in patterns following from the activity of positional markedness constraints. Like positional faithfulness constraints, these are versions of regularly attested markedness constraints which evaluate only structures in particular output positions, e.g. Onset/σ1, a constraint that penalizes onsetlessness in the initial syllable only (chapter 55: onsets). An example of this pattern comes from Arapaho (Algonquian, Smith 2002: 127, from Salzman 1956: 53–54). In this language, onsetless syllables occur in non-initial syllables (e.g. the onsetless third syllable in [wo.ˈʔo.uː.soː] ‘kitten’), as shown in (25). Word-initial vowels are, however, banned (e.g. *[o.toʔ]), as shown in (26). These patterns follow from the ranking Onset/σ1 >> Dep >> Onset, and are identical in character to the typical TETU surface pattern: marked structures are licensed in most of the language, but a small pocket of enforced unmarkedness is found in initial syllables. Many other languages of this type are discussed by Smith (2002) and Flack (2009).

(25)
     /woˈʔouːsoː/          Onset/σ1   Dep   Onset
   ☞ a. wo.ˈʔo.uː.soː                        *
     b. wo.ˈʔo.ʔuː.soː                 *!

(26)
     /otoʔ/                Onset/σ1   Dep   Onset
   ☞ a. ho.toʔ                         *
     b. o.toʔ              *!                *

Despite the surface similarities between positional markedness patterns and classic TETU patterns, the formal structure of these rankings distinguishes them from TETU rankings. These follow the schematic form M1 >> F >> M2, rather than the TETU form F1 >> M >> F2. These patterns thus might be dubbed “The Emergence of the Faithful”: F emerges (i.e. is active despite domination) in cases where dominating Onset/σ1 is inactive. Albright (2004) discusses patterns of this sort, using the term “The Emergence of the Marked” to describe their surface pattern. In Lakota (Siouan), codas are banned in roots but licensed elsewhere (e.g. affixes, reduplicants, function words). This pattern results from the positional markedness ranking NoCoda-root >> F >> NoCoda; as Albright explains, this pattern is a “mirror image of the TETU configuration: here, greater faithfulness emerges outside roots, when a higher-ranked markedness constraint (NoCoda-root) is inapplicable” (2004: 7). Here, markedness “emerges” in distributionally rare root-external contexts. To be clear, the distributional sense of “emerge” used here is different from the formal sense used by McCarthy and Prince: formally, the effects of the constraint which is dominated yet active emerge; distributionally, whichever pattern is not generally permitted (markedness vs. unmarkedness) “emerges” in specific, restricted contexts.

5 Conclusion

TETU is a property of theories with violable constraints, and sets these theories apart from those with parameters or inviolable constraints. In TETU rankings, a markedness constraint is shown to be dominated in a language, yet active in situations where the dominating constraints are irrelevant. Three types of such situations are surveyed in §2. Active-yet-dominated markedness constraints have also been used in the analysis of gradient patterns, as discussed in §3. Finally, patterns mirroring TETU which result from active-yet-dominated faithfulness constraints are discussed in §4.

TETU effects, which demonstrate the violability of OT constraints, set OT apart from theories with inviolable constraints, also known as parameters in Principles and Parameters Theory (Chomsky 1981, 1986). In Principles and Parameters Theory, the learner starts with parameters set to their default, or unmarked, position; parameters can be switched off given evidence from the ambient language. The NoCoda parameter, for instance, will remain on for a speaker of Hawaiian, as this language doesn’t allow codas. However, speakers of English or Nuu-chah-nulth will switch the NoCoda parameter off, as codas are generally allowed in these languages. Once off, however, the NoCoda parameter can no longer be used to account for the contexts in which these two languages prefer codaless syllables (see §1 and §2.2.1 above), causing a loss of generality in the analysis of these languages (McCarthy 2002: 131–132).

Interest in TETU effects initially brought attention to a variety of cases where constraints were shown to be active even in languages where they were roundly violated, e.g. NoCoda in English. This lent support to the view that there is a single, universal constraint set for all languages, which in turn led to fruitful research on how language-specific rankings of these universal constraints could be learned (see e.g. Tesar 1995; Tesar and Smolensky 1998; and much work since). Early work in OT typically assumed that this universal constraint set was innate; assumptions of both innateness and constraint universality have begun to lose favor in recent years with the advent of proposals that some or all constraints are induced by learners (Flack 2007; Hayes and Wilson 2008; Moreton 2010). TETU effects were a major focus of interest in the early days of Optimality Theory, when the concept of violable constraints was new to the linguistic community. With the increased acceptance of violable constraints in theoretical work, cases of TETU no longer attract special attention, even as interest turns to other theories that incorporate violable constraints, including OT-CC (McCarthy 2007), Harmonic Grammar (Legendre et al. 1990; Pater 2009), and MaxEnt (Goldwater and Johnson 2003; Hayes and Wilson 2008).

REFERENCES

Albright, Adam. 2004. The emergence of the marked: Root-domain markedness in Lakhota. Handout of paper presented at the 78th Annual Meeting of the Linguistic Society of America, Boston.
Becker, Michael. 2009. Phonological trends in the lexicon: The role of constraints. Ph.D. dissertation, University of Massachusetts, Amherst.
Beckman, Jill N. 1999. Positional faithfulness: An optimality theoretic treatment of phonological asymmetries. New York: Garland.
Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1986. Knowledge of language: Its nature, origin, and use. New York: Praeger.
Coetzee, Andries W. 2009. Grammar is both categorical and gradient. In Steve Parker (ed.) Phonological argumentation, 9–42. London: Equinox.
Coetzee, Andries W. & Joe Pater. 2008. Weighted constraints and gradient restrictions on place co-occurrence in Muna and Arabic. Natural Language and Linguistic Theory 26. 289–337.
Davis, Stuart. 1984. Some implications of onset–coda constraints for syllable phonology. Papers from the Annual Regional Meeting, Chicago Linguistic Society 20. 46–51.
Davis, Stuart. 1991. Coronals and the phonotactics of nonadjacent consonants in English. In Carole Paradis & Jean-François Prunet (eds.) The special status of coronals: Internal and external evidence, 49–60. San Diego: Academic Press.
Flack, Kathryn. 2007. The sources of phonological markedness. Ph.D. dissertation, University of Massachusetts, Amherst.
Flack, Kathryn. 2009. Constraints on onsets and codas of words and phrases. Phonology 26. 269–302.
Frisch, Stefan A. 1996. Frequency and similarity in phonology. Ph.D. dissertation, Northwestern University.
Frisch, Stefan A., Janet B. Pierrehumbert & Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22. 179–228.
Goldwater, Sharon & Mark Johnson. 2003. Learning OT constraint rankings using a maximum entropy model. In Jennifer Spenader, Anders Eriksson & Östen Dahl (eds.) Proceedings of the Stockholm Workshop on Variation within Optimality Theory, 111–120. Stockholm: Stockholm University.


Gouskova, Maria. 2010. Unexceptional segments. Unpublished ms., New York University.
Harris, John. 1990. Derived phonological contrasts. In Susan Ramsaran (ed.) Studies in the pronunciation of English: A commemorative volume in honour of A. C. Gimson, 87–105. London: Routledge.
Hayes, Bruce & Zsuzsa Cziráky Londe. 2006. Stochastic phonological knowledge: The case of Hungarian vowel harmony. Phonology 23. 59–104.
Hayes, Bruce & Colin Wilson. 2008. A maximum entropy model of phonotactics and phonotactic learning. Linguistic Inquiry 39. 379–440.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Lees, Robert B. 1961. The phonology of Modern Standard Turkish. Bloomington: Indiana University.
Legendre, Géraldine, Yoshiro Miyata & Paul Smolensky. 1990. Harmonic Grammar: A formal multi-level connectionist theory of linguistic well-formedness – an application. In Proceedings of the 12th Annual Conference of the Cognitive Science Society, 884–891. Mahwah, NJ: Lawrence Erlbaum.
Mascaró, Joan. 2004. External allomorphy as emergence of the unmarked. In John J. McCarthy (ed.) Optimality Theory in phonology: A reader, 513–522. Malden, MA & Oxford: Blackwell.
McCarthy, John J. 2002. A thematic guide to Optimality Theory. Cambridge: Cambridge University Press.
McCarthy, John J. 2005. Optimal paradigms. In Laura J. Downing, T. A. Hall & Renate Raffelsiefen (eds.) Paradigms in phonological theory, 170–210. Oxford: Oxford University Press.
McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox.
McCarthy, John J. & Alan Prince. 1994. The emergence of the unmarked: Optimality in prosodic morphology. Papers from the Annual Meeting of the North East Linguistic Society 24. 333–379.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
McCarthy, John J. & Alan Prince. 1999. Faithfulness and identity in prosodic morphology. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 218–309. Cambridge: Cambridge University Press.
Moreton, Elliott. 2010. Connecting paradigmatic and syntagmatic simplicity bias in phonotactic learning. Paper presented at MIT, April 2010.
Nakipoğlu, Mine & Nihan Ketrez. 2006. Children’s overregularizations and irregularizations of the Turkish aorist. In David Bamman, Tatiana Magnitskaia & Colleen Zaller (eds.) Proceedings of the 30th Annual Boston University Conference on Language Development, vol. 2, 399–410. Somerville, MA: Cascadilla Press.
Pater, Joe. 2009. Weighted constraints in generative linguistics. Cognitive Science 33. 999–1035.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Salzman, Zdenek. 1956. Arapaho I: Phonology. International Journal of American Linguistics 22. 49–56.
Smith, Jennifer L. 1999. Noun faithfulness and accent in Fukuoka Japanese. Proceedings of the West Coast Conference on Formal Linguistics 18. 519–531.
Smith, Jennifer L. 2001. Lexical category and phonological contrast. In Robert Kirchner, Joe Pater & Wolf Wikely (eds.) Papers in experimental and theoretical linguistics 6: Workshop on the Lexicon in Phonetics and Phonology, 61–72. Edmonton: University of Alberta.


Smith, Jennifer L. 2002. Phonological augmentation in prominent positions. Ph.D. dissertation, University of Massachusetts, Amherst.
Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual account. In Elizabeth Hume & Keith Johnson (eds.) The role of speech perception in phonology, 219–250. San Diego: Academic Press.
Steriade, Donca. 2009. The phonology of perceptibility effects: The P-map and its consequences for constraint organization. In Kristin Hanson & Sharon Inkelas (eds.) The nature of the word: Studies in honor of Paul Kiparsky, 151–179. Cambridge, MA: MIT Press.
Tesar, Bruce. 1995. Computational Optimality Theory. Ph.D. dissertation, University of Colorado, Boulder.
Tesar, Bruce & Paul Smolensky. 1998. Learnability in Optimality Theory. Linguistic Inquiry 29. 229–268.
Truckenbrodt, Hubert. 1999. On the relation between syntactic phrases and phonological phrases. Linguistic Inquiry 30. 219–255.
Vaux, Bert. 2002. Consonant epenthesis and the problem of unnatural phonology. Handout from paper presented at Yale University.
Vaux, Bert. 2008. Why the phonological component must be serial and rule-based. In Bert Vaux & Andrew Nevins (eds.) Rules, constraints, and phonological phenomena, 20–60. Oxford: Oxford University Press.
Zuraw, Kie. 2000. Patterned exceptions in phonology. Ph.D. dissertation, University of California, Los Angeles.

59 Metathesis

Eugene Buckley

The term metathesis – Greek for ‘transposition’ – refers to a reordering of segments. This chapter outlines the range of phenomena that fall under this description, and theoretical perspectives on their analysis. Other cross-linguistic surveys of this topic include Webb (1974), Ultan (1978), Hock (1985), Wanner (1989), Blevins and Garrett (1998, 2004), Becker (2000), and Hume (2001, 2004). The term has traditionally been best known for the description of historical sound changes (chapter 93: sound change), often described as sporadic. For example, Osthoff and Brugmann (1878: xiv, n. 1) cite metathesis, along with dissimilation (chapter 60: dissimilation), as lacking the “mechanical” character of regular sound change. Hock (1985), however, argues that diachronic metathesis is regular when it serves to enforce a structural constraint. For example, in early attestations of Persian, as well as in reconstructed forms, clusters of an obstruent or nasal plus a liquid can be found before a final vowel. Loss of that vowel leads to a final cluster with a rising sonority profile (chapter 49: sonority; chapter 46: positional effects in consonant clusters); this configuration is repaired by metathesis of the two consonants, so that the more sonorous liquid is closer to the vowel. (The relevant segments are shown in (1).)

(1)  Persian liquid metathesis (Hock 1985: 534)

     suxra   >  surx   ‘red’
     vafra   >  barf   ‘snow, ice’
     asru    >  ars    ‘tear’
     *namra  >  narm   ‘soft’

Although much of the literature discusses historical metathesis – where copious examples can be found – this chapter focuses on instances of metathesis that are active synchronically. By this I mean alternations in the ordering of segments that appear to be part of a speaker’s productive grammatical knowledge, and therefore must be accounted for in theories of linguistic competence. There is of course an intimate connection between diachronic metathesis and the synchronic alternations that may persist in the grammar as a result, but I will take care to distinguish examples for which only diachronic change is well attested, and where the results of the change appear to be new underlying forms rather than a new phonological alternation. Similarly, although the emphasis is on phonologically defined patterns, some types of metathesis require reference to morphological context, even if the specific change is expressed in terms of phonological categories.

For most of the twentieth century, metathesis was described either in prose or, as formalisms became more sophisticated, as reorderings of indexed objects in a string. Chomsky and Halle (1968: 361) describe metathesis as “a perfectly common phonological process,” and permit transformations that effect permutation. In their notation, /skt/ → [kst] metathesis in Faroese, shown below in (3), could be expressed as follows.

(2)  Metathesis as a transformation

     Structural description:   s   k   t
                                1   2   3
     Structural change:        1 2 3  →  2 1 3
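Read procedurally, the structural change says: wherever the string matches the structural description, emit the matched segments in the order 2 1 3. The following sketch makes this explicit; the helper name and list representation are choices made for this illustration, not notation from Chomsky and Halle.

    def metathesize(segments, pattern, permutation):
        # Replace each match of `pattern` with the same segments reordered
        # according to `permutation` (1-indexed structural-change positions).
        out, i, n = [], 0, len(pattern)
        while i < len(segments):
            if tuple(segments[i:i + n]) == pattern:
                out.extend(segments[i + j - 1] for j in permutation)  # 1 2 3 -> 2 1 3
                i += n
            else:
                out.append(segments[i])
                i += 1
        return out

    print("".join(metathesize(list("raskt"), ("s", "k", "t"), (2, 1, 3))))
    # rakst: the Faroese /skt/ -> [kst] reordering of (3) and (4)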

The need for indexation distinguishes metathesis from most other processes, such as insertion (chapter 67: vowel epenthesis), deletion (chapter 68: deletion), and featural assimilation (chapter 81: local assimilation). In those sorts of changes, whatever elements of the representation remain after the change maintain their relative ordering on their tier. A true featural equivalent to segmental metathesis would be a swap in feature values on the same tier (or at some non-root node), such as a change from LH to HL tone in a context where underspecification of the L with simple shift of H is not a plausible analysis. As noted in §1.4, there is limited evidence for tonal metathesis of this sort. Following the most common modern usage, in this chapter I apply the term metathesis to permutations of segments regardless of intervening material. §1 deals with local metathesis, including the sequences CC, CV, and VV, followed by brief consideration of other types. §2 considers the long-distance metathesis of non-adjacent segments, as well as the displacement of a segment that is not exchanged with another. §3 considers the relation of metathesis to other phenomena with which it shares some formal properties, such as infixation.

1 Local metathesis

In local metathesis, two adjacent segments are swapped, without any necessary change in their features, although in some cases other processes may affect the outcome. These can be classified formally according to the segments involved in the reversal: two consonants, a consonant and a vowel (in either order), or two vowels.

1.1 CC metathesis

To organize this presentation, I group the processes according to the features of the segments involved. These include the special role of sibilants, place of articulation, and manner of articulation.


1.1.1 Sibilants Sibilant consonants are often observed to reverse order with an adjacent stop consonant (Silva 1973; Hume 2001: 12–14; Seo and Hume 2001; Steriade 2001: 234f.; Blevins and Garrett 2004: 139f.; Hume and Seo 2004: 36–39). An example is found in Faroese, where /sk/ followed by /t/ is reversed (Lockwood 1955: 23f.).

(3)  Faroese metathesis of /sk/ (Lockwood 1955: 24)

     masculine   neuter
     fesk-ur     feks-t    ‘fresh’
     rask-ur     raks-t    ‘energetic’
     dansk-ur    daŋks-t   ‘Danish’

As noted above, metathesis has typically been considered a sporadic or irregular process, unlike phenomena such as assimilation that can often be described in very general and regular terms (see Hume 2001: 1f. for representative quotations). But the Faroese reversal illustrates that a process of metathesis can be fully regular while also quite restricted in scope, simply because the necessary configuration does not often arise. Thus the neuter adjective suffix /t/ provides the environment for reversal of stem-final /sk/; but a similar environment in verbs also triggers the change, as can be seen in /ˈʏŋks-ti/ ‘wish (past sg)’ compared to the present singular /ˈʏnsk-ir/ with the underlying ordering thanks to the following vowel (Hume 1999: 294).

It was traditionally claimed that metathesis yields sequences that are in some way better formed than the input ordering, usually in the sense of “ease of articulation” or satisfying a language’s phonotactic constraints (Wechssler 1900: 497; Grammont 1933: 239; Ultan 1978: 390). More recent work has placed greater emphasis on the role of perception (chapter 98: speech perception and phonology), and on historical explanations for how metathesis arises (chapter 93: sound change). Faroese can be seen as auditory metathesis – the temporal decoupling of the noise of a fricative, especially a sibilant, from the surrounding signal, which can lead to a sibilant and an adjacent stop being reinterpreted as occurring in the opposite of the original order (Blevins and Garrett 2004: 120). A segment often moves to a position in which it is more easily perceptible, especially due to the formant transitions in an adjacent vowel (Hume 1999: 295f.; Seo and Hume 2001: 215–217). Thus Faroese metathesis places the stop /k/ in a more perceptible position, adjacent to the preceding vowel, while the sibilant remains perceptible without an adjacent vowel. This directionality suggests that confusability in the ordering of the segments is not the sole factor, since symmetrical confusion predicts random reordering according to the two possible interpretations of an ambiguous auditory signal (Steriade 2001: 233–235); but see Blevins and Garrett (2004: 119f.) for a defense of the misperception account. The outcome in particular languages may depend on prosody, such as the location of stress, and phonetic detail, such as the release of final stops; such differences may explain the symmetrically opposite changes in Late West Saxon (/frosk/ → [froks] ‘frog’) and a certain variety of colloquial French (/fiks/ → [fisk] ‘fixed’) (Blevins and Garrett 2004: 139f.).

A transformational rule that reverses the order of segments does not make reference to the apparent motivations of the reordering, such as an improvement in markedness (chapter 4: markedness). But like other phonological processes, metathesis may operate in order to satisfy the phonotactic restrictions of a language. That is, just as the place assimilation in anba → amba satisfies a condition that nasal codas must agree in place with a following stop, so a metathesis such as inma → imna in (6) satisfies a condition on the sequencing of coronal and labial consonants. Recent approaches have attempted to capture this insight and to treat metathesis more on a par with other processes. In the surface orientation of Optimality Theory (Prince and Smolensky 2004), the expected Faroese sequence [skt] can be penalized by a constraint against a stop that occurs between two other consonants (Hume 1999: 298), whether it is defined directly in terms of perceptibility or as a more abstract configuration. This pressure must dominate the correspondence constraint Linearity, which otherwise prevents reorderings of segments, and obviously plays a central role in the analysis of metathesis (Hume 1998: 149, 168f.; McCarthy and Prince 1995: 371f.; McCarthy 2000: 173). Metathesis occurs only when Linearity is ranked below faithfulness constraints such as Max and Dep; these prevent deletion or insertion of material that otherwise might serve to remedy the surface constraint that metathesis addresses.

(4)
     /raskt/       *Stop/C__C   Max   Dep   Linearity
     a. raskt      *!
     b. rast                    *!
     c. rask                    *!
     d. raskit                        *!
   ☞ e. rakst                                *
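Linearity can be computed mechanically if each output candidate records which input segment it realizes: the constraint is violated once for every pair of input segments whose relative order is reversed in the output. The sketch below applies this common formalization (simplified for illustration; the index-list encoding is not taken from the sources cited above) to the candidates of tableau (4).

    def linearity(order):
        # Count input-index pairs whose precedence is reversed in the output;
        # None marks an epenthetic segment with no input correspondent.
        real = [i for i in order if i is not None]
        return sum(1 for a in range(len(real)) for b in range(a + 1, len(real))
                   if real[a] > real[b])

    # Input /raskt/ = input indices 0..4
    candidates = {"raskt":  [0, 1, 2, 3, 4],
                  "rast":   [0, 1, 2, 4],           # /k/ deleted (Max)
                  "raskit": [0, 1, 2, 3, None, 4],  # [i] inserted (Dep)
                  "rakst":  [0, 1, 3, 2, 4]}        # /s/ and /k/ transposed
    for form, order in candidates.items():
        print(form, linearity(order))
    # only rakst incurs a Linearity violation (one reversed pair), as in (4)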

Naturally, no violation of Linearity is required in a form such as [raskur], where the stop /k/ is adjacent to a vowel, and the sequence surfaces intact.

Another relatively restricted case of stop–sibilant metathesis is the Tiberian Hebrew hitpa‘el verb form, where the /t/ of the prefix reverses with a stem-initial sibilant (Malone 1993: 52f.; Coetzee 1999: 106; see Malone 1971 for similar facts in other Semitic languages; see also chapter 108: semitic templates). The examples in (5a) show the lack of metathesis with non-sibilants.

(5)  Tiberian Hebrew metathesis (Coetzee 1999: 106)

     a.  hit-pallel   →  hitpallel    ‘he prayed’
         hit-qaddeʃ   →  hitqaddeʃ    ‘he sanctified himself’
     b.  hit-sappex   →  histappex    ‘he felt attached to’
         hit-ʃammer   →  hiʃtammer    ‘he protected himself’
         hit-śakker   →  hiśtakker    ‘he gave himself into service’
         hit-zakker   →  hizdakker    ‘he remembered’
         hit-ṣaddeq   →  hiṣṭaddeq    ‘he considered himself righteous’

For Coetzee (1999: 122f.), the motivation for metathesis in exactly this context, when a /t/ would otherwise precede a sibilant, is that a [t] + sibilant sequence would be subject to reinterpretation as an affricate, a type of segment disfavored in Tiberian Hebrew. He proposes a constraint *t+Sibilant against that sequence, again with relatively low-ranked Linearity. Hume (2004: 222f.), discussing the equivalent metathesis in Modern Hebrew, argues that the poor attestation of [t] + sibilant sequences in Hebrew sets the stage for a reinterpretation with the sibilant in first position: an ambiguous acoustic signal is likely to be interpreted sequentially according to the most commonly attested ordering of those segments, dependent not necessarily on universal principles, but on the lexicon and grammar of the language in question.

1.1.2 Place of articulation Many instances of CC metathesis depend on place of articulation (chapter 22: consonantal place of articulation), with certain orderings of place favored over others. Permutation of this type is found in a range of Malayo-Polynesian languages (Blevins and Garrett 2004: 136). In Cebuano, for instance, a coronal stop or nasal followed by a labial or velar consonant is reversed, optionally in some cases (Blust 1979: 110).1 The consonants come to be adjacent as the result of vowel syncope after a vowel-initial suffix is added.

(6)  Cebuano metathesis of coronal + non-coronal clusters (Blust 1979: 110)

     stem    suffixed form
     lutuk   lukt-un                 ‘put the finger in’
     gitik   gitk-anun ~ gikt-anun   ‘ticklish’
     atup    atp-an ~ apt-an         ‘roof’
     inum    imn-a                   ‘drink’

Stems such as /lakat/ ‘walk’ that already have the preferred ordering maintain it ([lakt-un]), showing that the process is not simply an across-the-board reversal in consonant clusters. In this case, the favored ordering places the coronal in second position. The two changes in Cebuano – deletion of the vowel and reversal in the resulting cluster – were likely ordered historical events, and this history can be modeled easily by ordered synchronic rules. But the same facts are also consistent with simultaneous satisfaction of two surface constraints in OT. The candidates *[lutukun] and *[lutkun] both violate one of these constraints – by the lack of syncope, or the disfavored consonant ordering – whereas [luktun] satisfies both, and wins under low ranking of Linearity and Max-V.

(7)
     /lutuk-un/    *VCVCV   *TK   Linearity   Max-V
     a. lutukun    *!
     b. lutkun              *!                *
   ☞ c. luktun                    *           *

A phonetic explanation for this type of reordering is co-articulatory metathesis, which results from the overlap in adjacent consonant gestures (Blevins and Garrett 2004: 136–138); for example, overlapping coronal (T) and non-coronal closures (K) are perceived as the non-coronal, which leads to reversals such as TK → KT in Cebuano (6). A general preference for apicals to follow non-apicals has been cited with regard to metathesis in other languages such as Greek, which may be related to the tendency for coronal codas to assimilate to following non-coronals (Bailey 1970: 348). One abstract phonological approach formalizes the licensing properties of different places of articulation (Rubin 2001: 194–199); unmarked Coronal is a natural head and licenses the place features of a preceding non-coronal, favoring KT over TK. See also Blust (1979: 102f.) and Winters (2001) on the general preference for coronals to occur second in a cluster.

Some reorderings have considerably more complex origins. In the Kondh branch of Dravidian, sequences of a velar /k g/ plus a labial /p b/ are reversed. The Pengo examples below illustrate two allomorphs /-pa/ and /-ba/ of the intensive-frequentative or plural action suffix, both of which also occur in contexts without metathesis, as seen in (8a). For similar Kui examples, see Hume (2001: 8).

1 Similar patterns are found in a number of other Philippine languages (Blust 1971: 85f.; 1979: 104f.; Crowhurst 1998: 597), including Tagalog, where, however, it is poorly attested and classified with irregular verbs (Schachter and Otanes 1972: 375–380).

(8)  Pengo velar + labial metathesis (Burrow and Bhattacharya 1970: 82f., 201)

     a.  gruːt-pa-  →  gruːt-pa-  ‘fell’
         huz-ba-    →  huz-ba-    ‘roast’
     b.  Õrik-pa-   →  Õrip-ka-   ‘break’
         kuːk-pa-   →  kuːp-ka-   ‘call’
         Èaːk-ba-   →  Èaːb-ga-   ‘sacrifice’
         tog-ba-    →  tob-ga-    ‘be split’

According to Garrett and Blevins (2009: 538ff.), this metathesis pattern arose by re-analysis of complex allomorphy deep in the history of Dravidian. Briefly, causative /p/ could replace the last consonant of the stem, as in Kolami [melg-] ‘grow (intr)’ and derived [mel-p-] ‘rear’. This was interpreted as a rule deleting the velar before the labial, which was extended to other labial-initial suffixes, including the plural action containing [-p-]. This would yield an alternation between simple */kuːk-/ ‘call’ and plural action */kuːk-p/ → *[kuː-p-]. But there is another basic allomorph of the plural action suffix containing /-k-/; if this were added to the existing plural action in order to make the exponence of that category clearer, the result is the pair *[kuːk-] and *[kuː-p-k-], which then gives the appearance of metathesis of the suffixal /p/ and the stem-final /k/. Whatever the historical origin of Pengo metathesis, it became part of the grammar thanks to learners treating it as an active synchronic process. The constraint encoding the Pengo alternation must penalize a velar + labial sequence; call it *KP. In addition to Max and Dep, it is especially relevant here to include the constraint Ident to prevent changes to the features targeted by the phonotactic constraint.

(9)
     /togba/      *KP   Max   Dep   Ident   Linearity
     a. togba     *!
     b. toba            *!
     c. togiba                *!
     d. tomba                       *!
   ☞ e. tobga                               *


Perceptual factors may contribute to such re-analyses. Placing the labial first in the cluster, at least in related Kui, puts it in the stressed syllable, which may enhance its perceptibility; the weak bursts of labial stops reduce the benefit of being located in the onset (Hume 1999: 296). Experimental evidence indicates that labial place is more perceptible in codas than is velar, and that velars benefit more in perceptibility from being in the onset than do labials (Winters 2001: 238–241). This means that the ordering PK is overall more likely to be heard correctly than KP. Emphasizing historical origin, however, Blevins and Garrett (2004: 136) consider the perceptually based prediction to be PK → KP, as attested in other languages such as Mokilese /apkas/ → [akpas] ‘now’, and claim that the Dravidian pattern favoring PK could arise only by such means as re-analysis of a morphological pattern. Segments undergoing metathesis may originate in different morphemes, as in Pengo, but may also occur inside a single morpheme. Across a morpheme boundary, the offending cluster is created by concatenation; within a morpheme, the context for metathesis may be created directly by a syncope rule that brings the consonants into contact (as in Cebuano (6)), or a triggering context introduced by concatenation but affecting two consonants that are underlyingly adjacent (as in Faroese (3)).

1.1.3 Manner of articulation Classes of consonants defined by manner, such as liquids or sonorants (see chapter 13: the stricture features), are often targeted specifically by metathesis. (One might also include here the sibilants discussed in §1.1.1.) Metathesis involving the class of liquids is found in a number of languages (Blevins and Garrett 2004: 128f.); a historical example from Persian was cited in (1). In Rendille (Cushitic, Kenya), an /r/ and a preceding obstruent or nasal reverse in order after they become adjacent upon deletion of the intervening vowel (Sim 1981: 7, 9f.; Hume 1998: 178; Blevins and Garrett 2004: 129).

(10)  Rendille metathesis of /r/ in clusters (Sim 1981: 7, 9)

      feminine   masculine
      údur-te    úrd-e     ‘s/he slept’
      ágar-te    árg-e     ‘s/he saw’
      pámar-te   párm-e    ‘s/he shivered’

In the framework of Blevins and Garrett (2004: 121–125), this perceptual metathesis arises when the cues for a sequence of sounds are perceived by the listener as reordered relative to the speaker’s intention, which is possible when some feature is realized over a relatively long duration and therefore contains ambiguity of analysis. This is true for consonant clusters as well as vowel–consonant sequences (see §1.2). Besides liquids (chapter 30: the representation of rhotics; chapter 31: lateral consonants), other segment types with elongated cues include pharyngeals (chapter 25: pharyngeals), secondary labialization (chapter 29: secondary and double articulation), palatalization (chapter 71: palatalization), and glottalization or aspiration (Blevins and Garrett 2004: 123).

Hume (2004: 220–227) argues similarly that ambiguity or indeterminacy in the auditory signal sets the stage for a reinterpretation of linear order, but places a special emphasis on the role of the specific attested sequences in the language, as discussed for Hebrew above. In Kambata (East Cushitic, Ethiopia), a suffix-initial nasal transposes with a preceding obstruent, and is also subject to place assimilation in this position, as illustrated by [ŋk] and [mb] resulting from an /n/-initial suffix (Hudson 1980: 105); similarly the related language Sidamo (Vennemann 1988: 55) and several other East Cushitic languages (Garrett and Blevins 2009: 532f.).

(11)  Kambata metathesis of obstruent + nasal clusters (Hudson 1980: 105)

      it-neːmmi     →  inteːmmi     ‘we have eaten’
      t’uːd-naːmmi  →  t’uːndaːmmi  ‘we will see’
      oros-naːmmi   →  oronsaːmmi   ‘we will take’
      sok-neːmmi    →  soŋkeːmmi    ‘we have sent’
      hab-noːmmi    →  hamboːmmi    ‘we forgot’

This metathesis is part of a conspiracy (chapter 70: conspiracies) of changes (including complete assimilation and vowel epenthesis) that avoid ill-formed consonant clusters, in this case obstruent + sonorant; see Hume (1999: 300–302) for a perceptual-optimization account. In a novel strategy that anticipates Optimality Theory, Hudson (1980: 109) proposes that affixation generates two outputs with alternate orderings of juxtaposed consonants (such as [itneːmmi], [inteːmmi]), where the choice between the outputs is made according to conditions on phonotactics. This technique would not, however, generalize to examples such as Faroese and Cebuano, in which the relevant consonants are not juxtaposed across a morpheme boundary.

Kambata assimilation in clusters (Hudson 1980: 105) a.

b.

im-to(?i ful-na(mmi kam-no(mmi mar-ni ub-to(?i dag-tonti t’u(d-tenti oros-ta(nti

→ → → → → → → →

into(?i funna(mmi kanno(mmi manni ubbo(?i daggonti t’u(ddenti orossa(nti

‘she dug’ ‘we will go out’ ‘we forbade’ ‘we, going’ ‘she fell’ ‘you knew’ ‘you have seen’ ‘you will go’

If the learner seeks to generalize to a single process of regressive assimilation, then an intermediate step of metathesis is necessary to create the right outcome: /bt/ → tb → [bb]. Extending this to instances of obstruent + nasal, such as /bn/

9

Eugene Buckley

→ nb → [mb], also has the effect of preserving the features of the root-final consonant and yielding a nasal + obstruent sequence of the sort that is common in other concatenations. Interestingly, whereas Kambata assimilation and metathesis apply to all places of articulation, in Bayso both are restricted to coronals, which reinforces the connection; this correlation is found across the East Cushitic languages (Garrett and Blevins 2009: 536f.). From the perspective of synchronic phonology, an unavoidable conclusion is that metathesis processes are available to the learner, whether the pattern results from a misperception of the phonetic signal or a generalization of an existing pattern.

1.2

CV metathesis

Ordering reversals of a consonant and vowel involve many of the same principles of explanation and analysis as CC reversals – for example, the historical reinterpretation of an ambiguous signal, and a synchronic constraint that dominates Linearity. From the examples in the literature, however, synchronic CV metathesis appears to be strongly associated with specific morphological contexts, and the reordering may be the main exponence of a grammatical category, something that is not typical of CC metathesis. But before considering such cases, we examine a few more strictly phonological examples.

1.2.1 Phonological reorderings A well-known case that has been treated as metathesis is Cayuga (Iroquoian), in which a laryngeal consonant /h ?/ transposes with a preceding vowel when it occurs in an odd-numbered, non-final syllable (Foster 1982: 69f.; Blevins and Garrett 1998: 509–512). The necessary prosodic context can be analyzed as the weak branch of an iambic foot (Hayes 1995: 222f.; see also chapter 44: the iambic–trochaic law). (13)

Cayuga laryngeal metathesis (Foster 1982: 69f.) kahwista?eks ko?nikõha? akekaha? ahanohae?

→ → → →

kh+’wisd?aes g?o’nikhwa? a’gekhaa? a’hanhwae?

‘it strikes, chimes’ ‘her mind’ ‘my eye’ ‘he washed it’

To some degree, however, it is uncertain whether this process is truly a reversal of segment order or instead a spreading of features across the vowel, resulting in overlap rather than reordering (Foster 1982: 70). Somewhat similar metathesis of vowel + /h/ occurs in Cherokee when a stop consonant precedes the vowel; the result is that the laryngeal is realized as aspiration on the stop (Flemming 1996; Blevins and Garrett 1998: 520f.). In the framework of Blevins and Garrett (1998: 509f., 2004: 121–125), Cayuga and Cherokee show the results of perceptual metathesis. Just as with the CC metathesis involving liquids and other segment types, the spread of laryngealization or devoicing through the vowel leads to the possibility of reinterpretation. Diachronic instances of the same phenomenon include liquid metathesis in Slavic, as in *orbota ‘work’ > Polish /robota/; and reordering of /r/ with schwa in Le Havre French, such as [bHrbi] ‘ewe’ compared to standard [brHbi] (Blevins and Garrett

Metathesis

10

1998: 513, 16f.). The Slavic example is somewhat unusual, in that it involves an initial sequence undergoing metathesis; reordering is cross-linguistically disfavored for root-initial segments, since a disruption in that position interferes with effective word recognition more than metathesis of other segments (Hume and Mielke 2001). A clearer example of synchronic CV metathesis is found in the Austronesian language Leti, discussed in detail by Hume (1998) and Blevins and Garrett (1998: 541–547). Alternating stem forms in Leti are phonologically conditioned according to the following context; in particular, morpheme-final VC reverses to CV to avoid an illicit consonant cluster within a phrase. (14)

Leti VC metathesis (Hume 1998: 153) a. ukar lavan b. ukar ppalu ukar muani

→ → →

ukarlavan ukrappallu ukramwani

‘thumb, big toe’ ‘index finger’ ‘middle finger’

There is, additionally, the same reversal in phrase-final position, so that ‘finger’ appears as [ukra]; here, rather than serving general phonotactics, the metathesized form appears to mark the word as phrase-final (Bonthuis 2001: 37f.). Because metathesis in Leti affects all consonant types – compare /ulit/ → [ulti] ‘skin’, /metam/ → [metma] ‘black’ – it cannot be attributed to the elongated phonetic realization of a class such as laryngeals, and is not perceptual metathesis. Instead, Blevins and Garrett (1998: 539–547) identify it as pseudo-metathesis. By this they mean an alternation in ordering that did not arise historically as a direct reinterpretation of segment order. In the case of Leti, two main steps are posited, with evidence from other patterns within Leti and in related languages. First, an epenthetic vowel was inserted after final consonants, /ulit/ → [ulit”]. Although the inserted vowel was not a copy of the preceding vowel, it nevertheless would have been subject to co-articulatory effects of the more palatal or labial quality of a preceding /i/ or /u/, as in [ulit”]. Second, syncope of medial vowels led to loss of that schwa preceding another word beginning CV ([ulit”] → [ulit]), but loss of the original medial vowel in other contexts ([ulit”] > [ult”]); here, however, the vowel quality of the deleted vowel is preserved in the final schwa due to the coarticulation ([ult”] > [ulti]). Words containing the low vowel, such as /ukar/, do not show palatal or labial co-articulation, but result from the fact that schwa more generally became /a/ in the history of Leti (thus /ukarH/ > /ukrH/ > /ukra/). From the point of view of synchronic phonology, the crucial point is that alternations such as [ulit] ~ [ulti] were successfully integrated into the grammar as learners re-analyzed the historical patterns.

1.2.2 Morphological context As noted above, CV metathesis often appears to occur in the presence of a particular morphological trigger, even if the reordering that occurs can be defined phonologically. A famous example, also from Austronesian, is found in Rotuman. In this language, words appear in two different “phases,” called complete and incomplete (Churchward 1940). The incomplete form is derived from the complete by a variety of means, but the default strategy is metathesis of the final CV to VC, often forming a short diphthong with the preceding vowel.

11 (15)

Eugene Buckley Metathesis in Rotuman phase alternations (Churchward 1940: 14) complete ho.sa i.?a pu.re ti.ko se.se.va

incomplete hoas ia? puer tiok se.seav

‘flower’ ‘fish’ ‘to rule, decide’ ‘flesh’ ‘erroneous’

The process applies to loanwords as well, such as /pe.pa/ → [peap] ‘paper’. Consistent with many other languages, Rotuman short diphthongs must consist of two vowels with rising sonority (i.e. movement from a higher to a lower vowel). Where this condition is not met, the incomplete phase is realized in other ways: by dropping a final vowel, as in /to.ki.ri/ → [to.kir] ‘to roll’; by fusing two vowels brought together by metathesis, as in /mo.se/ → [møs] ‘to sleep’; or by directly changing a vowel sequence to a long diphthong, as in /ke.u/ → [keu] ‘to push’. Blevins and Garrett (1998: 527–529) categorize the Rotuman alternation as compensatory metathesis. Historically, this entails an anticipation or perseveration of vowel features across an intervening consonant toward the stressed vowel, leading to “extreme vowel-to-vowel coarticulation.” In Rotuman there would have been anticipation of the final vowel in the direction of the preceding stressed syllable, followed by loss of the final vowel, essentially /hosa/ > /hoasa/ > /hoas/. In some vowel sequences, further changes occurred, such as /mose/ > /moese/ > /moes/ > /møs/. Metathesis of similar origin is also pervasive in the related language Kwara’ae, where final CV changes in most communicative contexts to VC to mark phrasal boundaries (Sohn 1980: 311f.); the details of Kwara’ae vowel realization lend particular support to the proposed historical origin as compensatory metathesis (Blevins and Garrett 1998: 530f.). The phases of Rotuman were originally described by Churchward in complex syntactic and semantic terms, but some recent work has argued that their specific realization depends on prosody, and therefore that they are basically phonologically determined rather than triggered by a morphological or other grammatical context (Hale and Kissock 1998: 120–123). For example, it has been proposed that the desired outcome achieved by metathesis as well as the other processes is a word-final heavy syllable (Blevins and Garrett 1998: 531–534; McCarthy 2000: 159, 73f.). From this point of view, the reversal in order is just one way of satisfying the heavy-syllable constraint; there is no specific rule demanding metathesis. This analysis relies on the claim of Hale and Kissock (1998) that the complete phase occurs before monomoraic morphemes such as /-me/ ‘hither’ in [ho?a-me] ‘to bring’, and the incomplete before bimoraic morphemes such as transitive /-kia/ in [hoa?-kia] ‘to take (trans)’. The essential idea is that right-aligned prosodic structure in [ho(?a-me)] requires the stem-final CV syllable to be grouped with the CV suffix in a proper bimoraic foot, and the pressure for a stem-final heavy syllable is thwarted. But in [(hoa?)-(kia)], the stem and the suffix are footed independently, and the stem undergoes metathesis to ensure a stem-final heavy syllable. The same result is predicted in the absence of a suffix. Kurisu (2001: 187) cites, in addition to certain exceptional suffixes, minimal pairs from Churchward (1940: 15) showing that the two phases can occur in an

Metathesis

12

identical phonological context, such as complete [?epa la hoa?] ‘the mats will be taken’ and incomplete [?eap la hoa?] ‘some mats will be taken’. This overlap indicates that the phase changes must in some way be triggered by the presence of a morphosyntactic category, the Incomplete Phase. For Kurisu, the high-ranked constraint RealizeMorpheme forces the incomplete to be phonologically distinct from the complete phase (chapter 103: phonological sensitivity to morphological structure); the relative ranking of phonological constraints, including Linearity, determines exactly how the base form is modified. The metathesis outcome is favored by the constraint ranking, although particular configurations (such as vowel sequences with falling sonority) lead to other outcomes, including fusion of the vowel features. Notably, in this more morphologically oriented approach, there is still no specific morphological demand for metathesis; rather, the drive for distinctness of word forms interacts with phonological constraints to produce metathesis, among other results. The examples presented so far involve underlying CV changing to VC, especially stem-finally. The converse change at the left edge, where initial VC changes to CV, is attested in some Northern Paman languages such as Ngko}, following the historical loss of initial consonants (Hale 1976: 17f., 23–28; Blevins and Garrett 1998: 537f., 2004: 135f.). (16)

Ngkoth initial VC metathesis (Hale 1976: 23–28)

*njipul-  >  *ipul-  >  pjul-  ‘you (non-sg)’
*njiːna-  >  *ina-   >  nja-   ‘to sit’
*kulan-   >  *ulan-  >  lwan-  ‘possum’
*puŋa-    >  *uŋa-   >  ŋwa-   ‘son’
*ŋali-    >  *ali-   >  laj-   ‘we (dual incl)’
*kami-    >  *ami-   >  maj-   ‘mother’s mother’

Unlike in Rotuman and a number of other Austronesian languages, however, this metathesis appears to be diachronic only. A somewhat similar pattern is found synchronically for a number of verbs in the Nilo-Saharan language Fur (Jakobi 1990: 57f., 64–74; Hume and Mielke 2001: 141f.). These verbs, when preceded by a monoconsonantal person-marking prefix, undergo reversal of the initial CV. (17)

Fur CV metathesis under prefixation (Jakobi 1990: 57f., 64–74)

k-ba-    →  kab-    ‘we drink’
k-teer-  →  keter-  ‘we forge’
k-lat-   →  kald-   ‘we beat, hit’

Some alternations are quite irregular, such as /li-/ → [al-] ‘wash’ and /tii-/ → [ei-] ‘catch’, so that a plausible alternative is that the allomorphs are lexically listed. This account would also address the formal problems in alternations such as /bul-/ → [ulb-] (→ [ulm-]) ‘find’, which involve two apparent metatheses (Hume 2001: 18f.); see §2.3 below. Even if the allomorphs are listed, however, metathesis was a crucial historical source.


1.2.3 Metathesis in templates

Languages with templatic morphology express certain inflectional or derivational categories by changes to the syllable structure of the stem (see chapter 105: tier segregation; chapter 108: semitic templates). If a particular paradigm includes different orderings of C and V elements, then the result is a form of metathesis. Templatically created metathesis generally does not derive from general phonological properties of a language, but rather from potentially arbitrary exponence of a morphological category.

For example, a relatively productive metathesis applies to derived Classical Arabic nominal stems two syllables in length; initial /Ca/ is reversed to [aC] and a glottal stop onset is inserted (McCarthy and Prince 1990: 213f., 279f.). Examples include /kabar/ → [ʔakbar] ‘greater, greatest’ and /ʤanib-at/ → [ʔaʤnib-at] ‘wings’; compare the underived forms [kabīr] ‘great’ and [ʤanāb-at] ‘wing’, without metathesis. The change cannot be treated as a general phonological process, because it is limited to certain morphological categories, and does not occur in verbs such as /katab/ ‘he wrote’ (*[ʔaktab]).

In Mutsun, a Costanoan language of northern California, templatic alternations include the reversal of a stem-final sequence of vowel and consonant; the primary stem is consonant-final and the derived stem is vowel-final (Okrand 1979: 126f.). The choice between alternate verb stem forms depends on what suffix is added; in other cases (18b) the primary stem is a noun, and the derived stem is a verb with related meaning. The derived stem has the uniform shape CVCCV, despite considerable variation in the primary stem shape. (18)

Mutsun stem alternations (Okrand 1979: 126f.)

      primary    derived
a.    pasik-     paski-   ‘to visit’
      liččej-    ličje-   ‘to stand’
      mattal-    matla-   ‘to be face down’
b.    lullup-    lulpu-   ‘flute / to play the flute’
      tooher-    tohre-   ‘a cough / to cough’
      laalak-    lalka-   ‘goose / to get geese’
      posool-    poslo-   ‘posole (stew) / to make posole’
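The uniformity of the derived shape invites a purely templatic statement. Below is a minimal sketch (my own illustration, not Okrand’s formalism) that maps a primary stem’s consonants and vowels, in order, onto CVCCV; since the template provides no extra positions, doubled (long) segments surface short.

```python
VOWELS = set("aeiou")

def derived_stem(primary):
    """Map a primary stem's segments onto the derived template CVCCV."""
    s = primary.rstrip("-")
    # collapse doubled letters: the template has no position for length
    segs = [x for i, x in enumerate(s) if i == 0 or x != s[i - 1]]
    cs = [x for x in segs if x not in VOWELS]  # consonants, in order
    vs = [x for x in segs if x in VOWELS]      # vowels, in order
    c1, c2, c3 = cs[:3]
    v1, v2 = vs[:2]
    return c1 + v1 + c2 + c3 + v2              # C1 V1 C2 C3 V2

assert derived_stem("pasik-") == "paski"   # 'to visit'
assert derived_stem("mattal-") == "matla"  # 'to be face down'
assert derived_stem("posool-") == "poslo"  # 'posole'
```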

Okrand observes that while the vowel-final derived stem is the form used with all suffixes that would create an illicit consonant cluster if added to the primary stem ([ličje-hte] ‘standing’, *[liččej-hte]), it also occurs with some suffixes that would be phonotactically well-formed with a preceding consonant ([matla-nu] ‘put (someone) face down’, alongside [mattal-pu] ‘put oneself face down’). Therefore, this morphologically defined reordering does not merely repair phonological violations, even if it sometimes conspires to avoid phonotactically problematic concatenations. It has been pointed out that for similar alternations in related Sierra Miwok, a representation with V/C segregation makes a specific metathesis rule unnecessary (Smith 1985: 366f.; Goldsmith 1990: 91; Stonham 1994: 157f.); more on this below.

In Tunisian Arabic, a stem-internal alternation is a cleaner example of metathesis than what we find in Classical Arabic (Kilani-Schoch and Dressler 1986; Becker 2000: 579f.). Historical changes to vowels within stems have led to minimal differences


defined by ordering, such as Classical Arabic /malak-a, milk-u/ > Tunisian /mlək, məlk/; this pattern is now productive in relating triliteral surface forms. (19)

Tunisian Arabic stem alternations (Kilani-Schoch and Dressler 1986: 62, 65f.)

mlək  ‘he possessed’      məlk  ‘property’
fhəm  ‘he understood’     fəhm  ‘understanding’
prəm  ‘he forbade’        parm  ‘prohibition’
kfɔr  ‘he blasphemed’     kɔfr  ‘blasphemy’

A similar alternation is found in Alsea (Buckley 2007). In this coastal Oregon language, stems generally show at least two forms; the full stem contains a root vowel, while the short stem lacks it. For stems with a medial sonorant consonant, an additional distinction is found: the full stem occurs in two varieties, light and heavy, according to whether the root vowel follows or precedes the sonorant. The stem choice depends on the presence of particular suffixes as well as an aspectual distinction. (20)

Alsea stems with a medial sonorant (Buckley 2007: 8f.)

light (CV)   heavy (VC)   short (no V)
stlak-       stalk-       stlk-   ‘slide’
twih-        tiwh-        twh-    ‘pour’
tmus-        tums-        tms-    ‘close’

In the analysis of Buckley (2007: 15–18), the light stem is the underlying form; the short stem is created by deletion of the root vowel, and the heavy stem results from VC metathesis. Since only sonorants undergo this potential reordering, they alone are treated as weight-bearing in the coda, and therefore only in that case can metathesis yield satisfaction of the heavy template requirement. The same approach might be applied to Tunisian Arabic, with the difference that all consonant classes are moraic, and therefore metathesis applies to stems regardless of the medial consonant. The larger point is that the requirement for a heavy syllable is morphologically determined, but the effect is generated phonologically.
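A sketch of this analysis follows; it is my own illustration, using plain strings, and the SONORANTS and VOWELS sets are stand-ins for the real Alsea inventories rather than claims about them.

```python
SONORANTS = set("mnlrwj")
VOWELS = set("aeiou")

def stem_forms(light):
    """Derive heavy and short stems from an underlying light (CV) stem."""
    i = next(k for k, seg in enumerate(light) if seg in VOWELS)
    # metathesis is possible only across a weight-bearing (sonorant) coda
    assert light[i - 1] in SONORANTS, "no heavy stem without a sonorant"
    heavy = light[:i - 1] + light[i] + light[i - 1] + light[i + 1:]
    short = light[:i] + light[i + 1:]   # root-vowel deletion
    return heavy, short

assert stem_forms("stlak") == ("stalk", "stlk")  # 'slide'
assert stem_forms("twih") == ("tiwh", "twh")     # 'pour'
assert stem_forms("tmus") == ("tums", "tms")     # 'close'
```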

A similar stem alternation is found in Klallam and other Straits Salish languages, in two forms called the actual and non-actual aspect (Thompson and Thompson 1969: 215–217; Demers 1974: 17f.; Montler 1989: 96f.). (21)

Klallam stem alternations (Thompson and Thompson 1969: 216)

non-actual   actual
čkʷuq-       čukʷq-   ‘shoot’
q’iŋ-        iq’ŋ-    ‘restrain’
q’əm-        əq’m-    ‘swallow’
təqʷ-        ətqʷ-    ‘put in water’

Anderson (2005: 9–11) argues that synchronically, Klallam and similar languages require a processual rule of metathesis to express this morphological category. Montler (1989: 93), however, expresses this effect for Saanich as a CVCC template


that causes metathesis in a form such as /ˈsøət/ ‘push it’ → [ˈsəøt] ‘pushing it’. In roots where the CVCC template cannot be satisfied by metathesis, other strategies are available, such as glottal stop insertion after the vowel in /ˈweqəs/ ‘yawn’ → [ˈweʔqəs] or reduplication to achieve this templatic result, as in /ˈqen’/ ‘steal’ → [ˈqeqn’] (→ [ˈqeqən’] by epenthesis). The same additional strategies occur in Klallam as well. In a prosodic version of the template approach, Stonham (1994: 173f.) proposes insertion of a mora in Klallam and Saanich that forces CCV to surface as CVC, a heavy syllable, and also causes the related effects of coda insertion and reduplication. The templatic and moraic approaches treat metathesis as one possible means of satisfying the morphologically determined, but phonologically expressed, restriction on shape. As with Rotuman, metathesis is one change among several, and not necessarily the direct goal of the morphological category.

In a more strictly phonological approach for closely related Lummi, Demers (1974: 16) proposes a rule that deletes unstressed schwa between obstruents. In this view, the actual and non-actual forms are both based on a CəCə root, but have different stress placement. Schwa deletion yields apparent metathesis in pairs with the surface shapes seen in /ˈCəCə/ → [ˈCəC] and /CəˈCə/ → [ˈCCə]. Although the synchronic evidence for exactly this derivation is missing in Klallam, the Lummi pattern suggests the likely diachronic origin of metathesis as a reinterpretation of vowel deletion.

Such a historical origin can explain why these templatic changes normally involve reorderings of consonants and vowels, but not of consonants with each other. For example, suppose that (similar to Lummi) the Alsea stem ‘to close’ that alternates between [tmus] and [tums] derives from original *tumus, with deletion of the unstressed vowel in forms with distinct stress patterns due to different suffixation: *ˈtumus-a > /ˈtums-a/ ‘door’ and *tuˈmus-ø > /ˈtmus-ø/ ‘is closed’ (Buckley 2007: 22f.). The alternate forms that preserve different vowels are subject to reinterpretation as a stem with a single underlying vowel that is reordered with the adjacent consonant in different suffixal contexts; but vowel deletion by itself will not result in the reordering of consonants. Given the frequency of vowel harmony and syncope, patterns like this can be expected to arise rather often.

Despite the crucial role of morphological context in conditioning these reorderings, phonological techniques can often be used to generate the necessary effects. One important tool has been the segregation of vowels and consonants onto different tiers (see chapter 105: tier segregation), so that they have no underlying ordering and no actual metathesis occurs in the derivation (McCarthy 1989: 5, 22f.). The advent of Optimality Theory, with its emphasis on output constraints rather than restricted input representations, makes V/C segregation “superfluous” (McCarthy 2000: 180f.). Even in an approach that does not treat the consonantal root as a morpheme listed independent of any vowels, derivational and inflectional morphemes often consist of vowels that overwrite the underlying vowels of the stem (Ussishkin 2005). Apparent VC metathesis among surface forms is merely the result of different overwriting patterns, as when the elements /h i i/ are imposed on Modern Hebrew /gadal/ ‘grow’ to form [h-igdil] ‘enlarge’.
Constraints on the realization of affixal material in the stem lead to particular overwriting patterns, but the vowels of the affixes still have no underlying ordering relation to the consonants of the input word.


It is less clear how a vowel-overwriting approach for Semitic can extend to language families such as Miwok-Costanoan and Yokuts, where the vowels and consonants can be reordered, but the vowels do not have the status of separate morphemes (McCarthy 1989: 74, 78). Thus in Mutsun, the verb ‘to visit’ is lexically specified with not only the consonants /psk/ but also the vowels /ai/, combined in different ways, including [paski-] and [pasik-] (18). The overwriting operation would have to be available for subparts of one lexical entry, rather than independent morphemes, in order to account for languages like Mutsun.

1.3 VV metathesis

Although CV and CC metathesis are robustly attested, there is only weak evidence for VV metathesis. Webb (1974: 8) states that “[e]ven as a sporadic change metathesis of vowels appears to be quite uncommon.” Kiparsky and O’Neil (1976: 531, n. 7) believe “there are few if any rules that metathesize contiguous syllabic segments in any language.” McCarthy (2000: 176) observes that the few synchronic analyses that posit VV reversals “involve very abstract analyses, in which the underlying representations and/or the consequences of metathesis are by no means apparent.” The rarity of such reversals may be related to the much longer typical duration of vowel gestures compared to consonants, so that a considerable temporal shift would be required for re-analysis of the ordering of two vowels (Steriade 1990: 390f.; McCarthy 2000: 176).

A classic example is VV reversal in Kasem, to which Chomsky and Halle (1968: 361) first applied the transformational rule format for metathesis. In particular, the vowel sequence /ia/ is reversed to [ai] when followed by the plural suffix /i/; but the first /i/ deletes and then the remaining vowels coalesce, as in /pia-i/, which surfaces as [pe] ‘sheep (pl)’. Needless to say, on first inspection /piai/ → [pe] is not an obvious example of metathesis. Phelps (1975: 303f., 310f., 325, 1979: 56f.) argues against the Chomsky and Halle VV metathesis rule, but in favor of an entirely different CV metathesis, in derivations such as /boaːl-u/ → [bolaː-u] (→ [bolo] ‘valley’). This derivation is again complex, although with different assumptions about underlying forms. Both of Phelps’s general conclusions regarding Kasem metathesis – CV is transposed but not VV – are endorsed, in a more modern framework, by Haas (1988: 241–253, 245f.); see also Burton (1989: 29f.) for an analysis of vowel coalescence without an intermediate reordering.

Similar re-analyses have been proposed for other languages with apparent VV metathesis. Keyser (1975: 404) posits a rule for Old English that reverses vowels in order to feed a vowel elision rule, as in /lufa-i/ → lufia → [lufa] ‘love!’; Kiparsky and O’Neil (1976: 535f.) argue that a revised formulation of vowel elision makes metathesis unnecessary. A rule of VV metathesis has been claimed to play “a central role” in Latvian phonology; it reverses the order of elements in the diphthongs /ai au æi æu/, although under restricted conditions (Halle and Zeps 1966: 108). In a more recent treatment of Latvian vowels, although not focusing on metathesis, Anderson and Durand (1988: 34, n. 7) reject some of the synchronic abstractness assumed by Halle and Zeps; instead they assume raising of a monophthongal vowel that then undergoes breaking to form a diphthong, where no metathesis is required.


A few diachronic examples of VV metathesis can be cited, especially if we include vowel/glide reversals in this category, since the same set of segmental features may serve as a glide or vowel before and after the metathesis (Ultan 1978: 375f.). Two examples from Portuguese are /genukulum/ > /geoʎo/ > /ʒoeʎo/ ‘knee’ and /dehonestāre/ > /deostar/ > /doestar/ ‘to insult’ (Williams 1962: 111); these reversals may have occurred “on the analogy of the more familiar sequence oe” (Ultan 1978: 376).

1.4 Other types of metathesis

Permutations involving something larger than a segment may fall under broader definitions of metathesis (Ultan 1978: 370). These include syllable reversals in language games or ludlings, such as Chasu /i.ku.mi/ → [i.mi.ku] ‘ten’ (Bagemihl 1995: 704). Metathesis has also been proposed for elements such as location in sign language phonology (Sandler 1993: 246). Hyman and Leben (2000: 590) state that there are “sporadic reports of tonal metathesis in the literature”; some examples include Bamileke-Dschang (Pulleyblank 1986: 41, 50), Mixtec (Goldsmith 1990: 25), and Dangme (Holscher et al. 1992: 126). These processes typically involve the movement of a floating tone, originating at the edge of a word or other domain, past a single linked tone. But – like VV metathesis in §1.3 – they are also embedded in complex derivations, and depend on multiple assumptions about how the pieces of the analysis fit together. Under other assumptions, metathesis may not be required. For example, Pulleyblank (1986: 41) proposes that in Bamileke-Dschang a floating L tone moves leftward across a H, and remains floating to represent downstep; Hyman (1985: 71, 73), on the other hand, links the L directly to the H on a second tonal tier as a direct representation of a downstepped H. In essence, the new rule is a merger rather than a reordering, similar to Zoque palatalization in §3 below. It should be kept in mind that “metathesis” of syllabicity, as in French /oj/ > /we/ > modern /wa/ and Proto-Slavic *ew > /ju/ (Ultan 1978: 376), does not involve transposition of segmental features but rather a shift in affiliation relative to the head of the syllable. Thus in French /oj/ > /we/, the round vocoid continues to precede the front vocoid; in Slavic *ew > /ju/, the round element is second, but remains there. The same observation can be made for English /iw/ > /ju/, found in words such as few (Jespersen 1949: 101), which is quite similar to the Proto-Slavic change. None of these represents segmental metathesis.

2 Non-local effects

Often grouped with local metathesis is the exchange of segments that are not adjacent, called long-distance or non-contiguous metathesis (Ultan 1978: 380–383). In fact, the term metathesis, or its equivalent in another language, has been used, especially by earlier writers, specifically for such long-distance effects (Blevins and Garrett 1998: 525). Grammont (1933: 239ff., 339ff.) devotes separate chapters to long-distance métathèse and local interversion, a terminology found more recently, for example, in Pierret (1994: 61); but Wechssler (1900: 496) already uses Metathese for both local and long-distance transpositions. As noted at the beginning of this chapter, metathesis here refers to either type of reordering.

2.1 Diachronic

A famous diachronic example of transposition over intervening segments is the Spanish metathesis of r . . . l > l . . . r, observable in a few modern words (Ultan 1978: 381; Penny 2002: 36). (22)

Spanish liquid metathesis (Penny 2002: 36)

Latin          Spanish
mīrākulum   >  milagro   ‘miracle’
perīkulum   >  peligro   ‘danger’
parabola    >  palabra   ‘word’

These pronunciations were probably influenced by the greater frequency of consonant + r in the lexicon; various sound changes had eliminated many inherited instances of consonant + l (Ultan 1978: 391; Penny 2002: 70–72). The change can also be viewed as two steps: first the change of /l/ to /r/ in a cluster, and then the well-attested dissimilation of identical liquids (Wanner 1989: 444f.).

Comparison of cognate words in the Yuman family of the American Southwest reveals a variety of historical metathesis processes, including root consonants in Walapai /ˈpil/ ‘burns’ ~ Cocopa /ˈlip/ ‘flames up’, or Havasupai /kaˈto/ ~ Walapai /taˈko/ ‘chin’ (Langdon 1976). There are also variant forms within languages, such as Ipai Diegueño /məxəˈtun/ ~ /xəməˈtun/ ‘knee’. These alternations are widespread, but remain lexically specific. Swapping of the consonants in largely CVC roots is also common in the Salish family, as seen in apparent cognate pairs such as Shuswap /xʷej/ ~ Twana /jəxʷ/ ‘disappear’, and Klallam /ts’əq’ʷ/ ~ Upper Chehalis /q’ʷəts’/ ‘dirty’ (Noonan 1997: 482). The pervasiveness of this pattern in Salish is unusual, and is possibly best explained by historical processes of reduplication and consonant deletion rather than direct metathesis (Hume and Mielke 2001: 143, n. 4; Noonan 1997: 513).

Prunet (2006: 57–61) discusses examples of consonant metathesis within Semitic roots. These are said to be particularly common in the Hebrew lexicon, as in synonymous variants such as [keveś] ~ [keśev] ‘lamb’ and related meanings such as [ʔāraz] ‘tie packages’ ~ [ʔāzar] ‘bind, girdle’ (Horowitz 1960: 228–234). More dramatic examples of non-contiguous consonant metatheses are found in language games in Bedouin Hijazi and Moroccan Arabic, in which the root consonants are scrambled (Prunet et al. 2000: 623f.); in Hijazi, /kattab/ ‘caused to write’ can be realized as [battak], [takkab], [kabbat], [tabbak], and [bakkat]. Although this radical permutation is not part of the basic grammar, such language games show an impressive computational capacity for synchronically active metathesis (Bagemihl 1995: 703f.; Anderson 2005: 11f.).

2.2 Synchronic

Typological surveys have claimed that permutation of non-adjacent segments does not occur as a regular synchronic process (Webb 1974: 5; Wanner 1989: 445; Hume and Mielke 2001: 145f.). Certainly, the permutation of non-adjacent segments is common in speech errors, such as classic Spoonerisms, but such errors


also involve strings of segments such as complex onsets and rhymes (Fromkin 1971: 31f.). (23)

Metathesis in speech errors (Fromkin 1971: 31f.)

intended           error
kip ə teɪp      →  tip ə keɪp      ‘keep a tape’
fɑr mɔr         →  mɑr fɔr         ‘far more’
peɪ skeɪl       →  skeɪ peɪl       ‘pay scale’
ˈswɛɾɚ ˈdraɪɪŋ  →  ˈdrɛɾɚ ˈswaɪɪŋ  ‘sweater drying’
hip əv ʤʌŋk     →  hʌŋk əv ʤip     ‘heap of junk’

These form part of a larger phenomenon of anticipation, perseveration, deletion, and so forth. Speech errors may, however, be a source of sporadic metathesis in historical change (Wanner 1989: 445). The outputs of speech errors, like more systematic metathesis, overwhelmingly respect the existing phonotactics of the language (Wells 1951: 26; Fromkin 1971: 40–42; Dell 1995: 200); but transposition of adjacent consonants, so common in regular metathesis, is “exceptionally rare” as a speech error, such as whipser for whisper (Berg 1987: 9). Elements that transpose by error usually occupy parallel syllable positions, which is not the case for adjacent consonants; instead, as discussed above, metathesis of such segments normally arises historically by misperception rather than production or planning errors. An interesting comparison is an optional metathesis reported for a few words in Turkana (Dimmendaal 1983: 48f.; Hume and Mielke 2001: 139f.; Hume 2004: 218). Here two consonants with the same value of [sonorant] that serve as onsets to successive syllables, and are adjacent to identical vowels, are optionally transposed in fast speech. (24)

Turkana onset metathesis (Dimmendaal 1983: 48f.)

preferred          alternate
ŋa-kèmèr-a         ŋa-kèrèm-a         ‘mole’
ŋi-kwaŋGrɔmɔk-à    ŋi-kwaŋGmɔrɔk-à    ‘a kind of tree’
e-sɪkɪn-à          e-kɪsɪn-à          ‘breast’

These alternations have the appearance of a common speech error that has become somewhat conventionalized. In particular, it has been widely observed that exchange (and other) errors are more likely when the sounds in question are found in similar phonological environments, so that for example left hemisphere → heft lemisphere, where the initial consonants are both followed by /e/, is more likely than the parallel error in right hemisphere, where the vowels are different (MacKay 1970: 325–328; Dell 1984: 222). Morphologically restricted metathesis (§1.2.2) can apply synchronically to surface non-adjacent segments. For example, Akkadian has a /t/ infixed in reciprocal verbs; it surfaces there in most cases, exemplified by [pitrus-] (25a), which motivates the stem-internal position as basic. But this stop is transposed to word-initial position when the root has an initial coronal obstruent as in (25b) (Caldwell et al. 1977: 118; McCarthy 1981: 381; Buccellati 1996: 233f.; Huehnergard 2005: 390, 531, 611).

(25)


Akkadian long-distance metathesis (Caldwell et al. 1977; Huehnergard 2005)

      root     infinitive   reciprocal
a.    /prs/    parās-       pitrus-                ‘divide’
      /rkb/    rakāb-       ritkub-                ‘mount, ride’
      /kmr/    kamār-       kitmur-                ‘heap up’
b.    /ṣbt/    ṣabāt-       tiṣbut-    *ṣitbut-    ‘seize’
      /snq/    sanāq-       tisnuq-    *sitnuq-    ‘be near’
      /zkr/    zakār-       tizkur-    *zitkur-    ‘declare’
      /dkš/    dakāš-       tidkuš-    *ditkuš-    ‘swell’
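The descriptive generalization in (25) is simple enough to state mechanically. The following sketch is my own illustration, not Łubowicz’s OCP-based analysis, and CORONAL_OBSTRUENTS is a stand-in for the relevant Akkadian segment class rather than a complete inventory.

```python
CORONAL_OBSTRUENTS = set("tdszṣ")

def reciprocal_stem(root):
    """Form the reciprocal (t-infix) stem of a triliteral root C1C2C3."""
    c1, c2, c3 = root
    if c1 in CORONAL_OBSTRUENTS:
        return "ti" + c1 + c2 + "u" + c3  # /t/ surfaces word-initially
    return c1 + "it" + c2 + "u" + c3      # default stem-internal infix

assert reciprocal_stem("prs") == "pitrus"  # 'divide'
assert reciprocal_stem("zkr") == "tizkur"  # 'declare', not *zitkur
assert reciprocal_stem("ṣbt") == "tiṣbut"  # 'seize', not *ṣitbut
```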

The same metathesis occurs in iterative stems: [pitarrus-] but [tiṣabbut-] (*[ṣitabbut-]). In the analysis of Łubowicz (2009), Akkadian metathesis serves to move the /t/ outside the stem domain, where it would cause a violation of the Obligatory Contour Principle (OCP) penalizing two tier-adjacent coronal consonants; thus [tizkur] does not violate this constraint, whereas *[zitkur] would, because the /t/ is located within the stem. This approach gives a relatively prominent role to phonology (the OCP constraint on coronals) while maintaining a crucial morphological component, due to the role of the stem domain.

2.3 Displacement

A related phenomenon, which is also called metathesis by many authors (Grammont 1933: 339; Ultan 1978: 372), involves the shift or displacement of a segment over more than one intervening segment. A famous example comes from the Occitan dialect of Bagnères-de-Luchon in southwestern France (Grammont 1905–6: 74, 85, 1933: 341; Blevins and Garrett 1998: 526). Among other processes, a liquid following a stop shifts leftward to form a cluster in the preceding syllable. (26)

Bagnères-de-Luchon long-distance shift of liquids (Grammont 1905–6: 74, 85, 1933: 341)

*ˈkabra     >  ˈkrabo     ‘goat’
*ˈbespras   >  ˈbrespes   ‘vespers’
*ˈpawpro    >  ˈprawbe    ‘poor’
*ˈtendro    >  ˈtrende    ‘tender’
*ˈkambra    >  ˈkrambo    ‘room’
*kumˈpra    >  krumˈpa    ‘to buy’
*eˈspingla  >  eˈsplingo  ‘pin’

A shift like this is formally identical to metathesis when just one segment is skipped. However, with intervening material that includes non-constituent strings such as /esp/, it must be movement of /r/ rather than exchange. A similar shift of /r/ to the initial syllable is attested in South Italian Greek (Rohlfs 1930; Blevins and Garrett 2004: 130f., 34f.). If it is to be classified with long-distance metathesis, displacement might be expected to be absent from synchronic grammars. But synchronically active long-distance displacement is attested at least for laryngeal and pharyngeal features (Blevins and Garrett 2004: 132–134; see also chapter 25: pharyngeals). For example, in the Interior Salish language Colville (Nxilxcín), the pharyngeal


consonant of a root is displaced to a stressed suffix, where it lowers the adjacent vowel to [a] (Mattina 1979). (27)

Colville pharyngeal displacement (Mattina 1979: 17f.)

ˈq’ʷʕaj          →  ˈq’ʷʕaj         ‘black’
q’ʷʕaj-ˈus       →  q’ʷəjˈʕas       ‘black man’
q’ʷʕaj-ˈlstsut   →  q’ʷəjlstsˈʕat   ‘his clothes are dirty’

Both pharyngealization and laryngealization can be seen as suprasegmental features in Salish (Mattina 1979: 19f.). Displacement of these features appears to reflect the spread of features over multiple syllables that may then be localized to a salient position (Blevins and Garrett 2004: 122f.). Such displacement resembles the mobility of a tone that shifts from the morpheme with which it is underlyingly affiliated to some phonologically defined position such as the penultimate syllable (Yip 2002: 65f., 89f., 132). A reasonable synchronic analysis is a [pharyngeal] feature affiliated with the root, which is attracted to the stressed syllable and possibly realized there as a segment. By the same token, the displacement of /r/ to the initial syllable in South Italian Greek reflects the salience and perceptual prominence of such syllables due to their location in the word (Blevins and Garrett 2004: 134); in Luchonnais, both stress and initial position favor the first syllable. Patterns of this sort have similar historical origins to simple exchanges of segments – in particular, phonetic cues that are relatively long in the temporal dimension and therefore subject to re-analysis, as discussed above for perceptual metathesis. But since they are displacements rather than exchanges, they are not formally equivalent to true metathesis as the exchange of positions. In particular, if local metathesis is seen as a minimal displacement (across a single segment), then long-distance metathesis would have to involve two simultaneous displacements, one leftward and one rightward, as in /abXcdYef/ → [abYcdXef]. This extra formal complexity may account for the rarity of synchronic non-local metathesis, which seems to be restricted to limited examples such as the optional reversal in Turkana (24) and the morphologically defined environment in Akkadian (25). An ordering alternation in the form of two suffixes is reported for several Costanoan languages, including Mutsun (Okrand 1979; Hume 1998: 170f., 1999: 300f., 2004: 223f.). Both suffixes have the shape CCV after a vowel-final stem, and CVC after a consonant-final stem, which makes phonotactic sense insofar as a three-consonant cluster at the stem boundary would be ill-formed. (28)

Mutsun suffix alternations (Okrand 1979: 127f., n. 17)

CCV                              CVC
pire-tka   ‘on the ground’       ʔurkan-tak   ‘in the mortar’
rukka-kma  ‘houses’              wimmah-mak   ‘wings’

Although the locative [tka] ~ [tak] can be treated as local metathesis, in the plural [kma] ~ [mak], the [k] appears to move across two other segments. If Linearity is gradiently violable, with one violation for each segment over which another is displaced, the minimal change (one ordering reversal) is generally optimal (Hume 1998: 168–171, 2001: 17–19). Gradient violation does still permit Mutsun /mak/ → [kma] when other constraints force multiple violations of


Linearity, and the alternation can be seen as part of the synchronic grammar; but such cases seem to be quite rare and limited in scope.

In the more recent version of OT that incorporates candidate chains (OT-CC), changes to the representation occur by minimal steps, and non-contiguous metathesis is subject to the requirement that each change in linear order increase well-formedness (McCarthy 2007: 87f.). In a suffix alternation requiring the synchronic derivation /mak/ → mka → [kma], the first step might be motivated by a preference for sonorant codas, but that change does not appear to be motivated more generally in Mutsun; in fact, Hume (1998: 170f.) specifically gives the constraint *m]coda. It might therefore be that the architecture of OT-CC forces the Mutsun alternation to be treated as listed allomorphy (chapter 99: phonologically conditioned allomorph selection).
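Gradient Linearity can be given a concrete interpretation as a count of reversed precedence pairs. The sketch below is my own illustration of the violation counting described above, assuming for simplicity that segments in a form are distinct:

```python
def linearity_violations(inp, out):
    """Count input precedence pairs whose order is reversed in the output."""
    pos = {seg: i for i, seg in enumerate(out)}  # assumes distinct segments
    return sum(
        1
        for i in range(len(inp))
        for j in range(i + 1, len(inp))
        if pos[inp[i]] > pos[inp[j]]
    )

assert linearity_violations("tak", "tka") == 1  # locative: one local reversal
assert linearity_violations("mak", "kma") == 2  # plural: /k/ crosses two segments
```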

3 Related processes

In some cases, a pattern that was originally considered to be metathesis was later seen as non-metathesis – and occasionally vice versa. A good example is the Zoque 3rd person singular prefix /j-/, which never surfaces as a strict prefix, but has been described as permuting with the stem-initial consonant, as in /j-pata/ → [pʲata] ‘his mat’ (Wonderly 1951: 117f.; Dell 1973: 110). Sagey (1986: 105–111) argues that the glide /j/ actually merges featurally with the following consonant to produce a palatalized segment (chapter 71: palatalization), which may be realized with an offglide, as implied by transcriptions such as [pʲ]. This position is supported by the independent need for a non-metathesis source of palatalized segments found at compound boundaries; cases such as /kuj-tʌm/ → [kujtʲʌm] ‘avocado’ show that the glide spreads, rather than reversing in order. A similar pattern is found in several languages of Nigeria and Cameroon: prefixes that can be reconstructed as the high vowels *i and *u are realized as a glide – or a secondary articulation on the consonant – after the stem-initial consonant, as in Noni /k-w-en/ ‘firewood’ from the base /ken/ (Blevins and Garrett 1998: 514–516). See also the cases in Webb (1974: 12f.).

Another phenomenon that has a certain affinity to metathesis is infixation, since it likewise requires a reordering from the expected position. In particular, “infixation and metathesis commonly show the potential mobility of full segments” rather than just subsegments such as features or nodes (Zoll 2001: 51). The closest analogy can be found in the infixation of a single consonant across one other consonant, as in the active neutral infix /-m-/ of Atayal /t-m-apeh/ ‘beckon’ (Egerod 1965: 265f.); this is formally similar to the metathesis of adjacent consonants. But infixation encompasses a broader set of phenomena that can include multiple segments in the item that undergoes reordering, as well as multiple segments in the span over which the infix is displaced; both are illustrated by the Tagalog actor focus /-um-/ that (optionally) moves over complex onsets in borrowed words such as /gr-um-adwet/ ~ /g-um-radwet/ ‘graduate’ (Orgun and Sprouse 1999: 204). On the other hand, Halle (2001) argues that the apparent Tagalog infixes appearing as /-um-/ and /-in-/ are actually CV underlyingly, with non-local metathesis of the two leftmost onsets, as /mu-tawag/ → [tu-mawag]. Theoretical and empirical problems with this approach are discussed by Klein (2005: 989–991), who advocates an infixation analysis within Optimality Theory.

23

Eugene Buckley

Another phenomenon that might be seen as involving either metathesis or infixation (or even other possibilities) is imbrication in Bantu languages such as Cibemba (Hyman 1995). In this process, the perfective suffix /-il/ combines with a polysyllabic stem, such that the /l/ of the suffix disappears and the /i/ combines with the rightmost vowel in the stem according to the usual coalescence rules of the language, as in /sákat-il-e/ → [sákeete] ‘seize’. The striking fact is that the suffixal vowel appears to skip over the stem-final consonant; in principle, this could be handled a variety of ways, including either CV metathesis, /sákat-il-e/ → sákaitle → [sákeete]; or infixation of the suffix inside the final consonant, /sáka-il-t-/ → sákailt-e → [sákeete]. These approaches assume subsequent simplification of the consonant cluster, as well as vowel coalescence. Hyman (1995: 11–16) argues in favor of infixation, which he relates to the positioning of the perfective (and the applicative) before the passive and causative suffixes. Diachronically, metathesis is the origin of some instances of infixation (Ultan 1975: 178f.; Yu 2007: 139–148). Another point of comparison is found in Horwood (2002: 170, 2004: 11), who uses Linearity to control the displacement of prefixes and suffixes to infixed positions. A crucial difference is that infixation of this sort (that is, excluding infixation tied to prosodically prominent constituents) is inherently edge-oriented; the infixed material remains as close to the left or right edge of the stem as possible, subject to the phonotactic constraints or other pressures that force deviation from simple prefixation or suffixation (McCarthy and Prince 1993; Prince and Smolensky 2004: 40–43; Yu 2007: 67–71). Metathesis, on the other hand, often occurs at stem edges as the result of morpheme concatenation, but in principle can occur anywhere in a word – recall the stem-medial cases in Cebuano and Rendille (§1.1). In addition, the infix has the status of a morpheme, which may happen to consist of a single segment; but in metathesis the single-segment status is fundamental, and not necessarily correlated with a particular morpheme. It can be noted finally that metathesis as a phenomenon is important evidence in favor of the category segment, however it may be formalized (chapter 54: the skeleton). Whether one considers the category of segment to be innate in the language faculty or something that emerges from the coordination of phonological gestures (Bybee 2001: 85f.), it is impossible to describe reorderings coherently in terms of disparate features or phonetic cues: the essential property of metathesis is that it moves all features associated with a segment, and the cues that instantiate these features are affected as a group. Indeed, the features may be implemented by rather different cues in the new position. For example, the Alsea alternation [stlak] ~ [stalk] affects just two of the five segments in this root. Even if /la/ were to be described as a core syllable, which is then reversed in some sense, the notion of “reversal” makes covert reference to the segments within the CV syllable. Otherwise there must be a claim that the prevocalic /l/ has the same phonetic realization as when it occurs in the coda, and that the release of the /t/ into the /a/ is no different from that into the /l/ in the nonmetathesized form. 
The need to refer to discrete segments even to characterize metathesis, and even more so to provide a theoretical analysis, presents particularly good evidence against suggestions that segments have no psychological reality, and are a mere artifact of an alphabetic writing system (Ladefoged 2005: 191; Silverman 2006: 6, 203).

Metathesis

24

REFERENCES

Anderson, John M. & Jacques Durand. 1988. Underspecification and dependency phonology. In Pier Marco Bertinetto & Michele Loporcaro (eds.) Certamen phonologicum: Papers from the 1987 Cortona Phonology Meeting, 3–36. Turin: Rosenberg & Sellier.
Anderson, Stephen R. 2005. Morphological universals and diachrony. Yearbook of Morphology 2004. 1–17.
Bagemihl, Bruce. 1995. Language games and related areas. In John A. Goldsmith (ed.) The handbook of phonological theory, 697–712. Cambridge, MA & Oxford: Blackwell.
Bailey, Charles-James N. 1970. Toward specifying constraints on phonological metathesis. Linguistic Inquiry 1. 347–349.
Becker, Thomas. 2000. Metathesis. In Booij et al. (2000), 576–581.
Berg, Thomas. 1987. A cross-linguistic comparison of slips of the tongue. Bloomington: Indiana University Linguistics Club.
Blevins, Juliette & Andrew Garrett. 1998. The origins of consonant–vowel metathesis. Language 74. 508–556.
Blevins, Juliette & Andrew Garrett. 2004. The evolution of metathesis. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 117–156. Cambridge: Cambridge University Press.
Blust, Robert. 1971. A Tagalog consonant cluster conspiracy. Philippine Journal of Linguistics 2. 85–91.
Blust, Robert. 1979. Coronal–noncoronal consonant clusters: New evidence for markedness. Lingua 47. 101–117.
Bonthuis, Fiorieneke. 2001. Metathesis in Leti. In Hume et al. (2001), 26–52.
Booij, Geert, Christian Lehmann & Joachim Mugdan (eds.) 2000. Morphologie: Ein internationales Handbuch zur Flexion und Wortbildung. Berlin & New York: Mouton de Gruyter.
Buccellati, Giorgio. 1996. A structural grammar of Babylonian. Wiesbaden: Harrassowitz.
Buckley, Eugene. 2007. Vowel–sonorant metathesis in Alsea. International Journal of American Linguistics 73. 1–39.
Burrow, Thomas & Sudhibhushan Bhattacharya. 1970. The Pengo language: Grammar, texts, and vocabulary. Oxford: Clarendon Press.
Burton, Strang. 1989. Kasem coalescence and metathesis: A particle analysis. Toronto Working Papers in Linguistics 10. 21–32.
Bybee, Joan. 2001. Phonology and language use. Cambridge: Cambridge University Press.
Caldwell, Thomas A., John N. Oswalt & John F. X. Sheehan. 1977. An Akkadian grammar: A translation of Riemschneider’s Lehrbuch des Akkadischen. 3rd edn. Milwaukee: Marquette University Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Churchward, C. Maxwell. 1940. Rotuman grammar and dictionary, comprising Rotuman phonetics and grammar and a Rotuman–English dictionary. Sydney: Australasian Medical Publishing Company.
Coetzee, Andries W. 1999. Metathesis in Tiberian Hebrew: A perspective from Optimality Theory. Theoretical Linguistics 25. 99–131.
Crowhurst, Megan J. 1998. Um infixation and prefixation in Toba Batak. Language 74. 590–604.
Dell, François. 1973. Les règles et les sons. Paris: Hermann.
Dell, Gary S. 1984. The representation of serial order in speech: Evidence from the repeated phoneme effect in speech errors. Journal of Experimental Psychology: Learning, Memory and Cognition 10. 222–233.
Dell, Gary S. 1995. Speaking and misspeaking. In Lila R. Gleitman & Mark Liberman (eds.) An invitation to cognitive science, vol. 1: Language, 183–208. 2nd edn. Cambridge, MA: MIT Press.


Demers, Richard. 1974. Alternating roots in Lummi. International Journal of American Linguistics 40. 15–21.
Dimmendaal, Gerrit Jan. 1983. The Turkana language. Dordrecht: Foris.
Egerod, Søren. 1965. Verb inflexion in Atayal. Lingua 15. 251–282.
Flemming, Edward. 1996. Laryngeal metathesis and vowel deletion in Cherokee. UCLA Occasional Papers in Linguistics 16. 23–44.
Foster, Michael. 1982. Alternating weak and strong syllables in Cayuga words. International Journal of American Linguistics 48. 59–72.
Fromkin, Victoria A. 1971. The non-anomalous nature of anomalous utterances. Language 47. 27–52.
Garrett, Andrew & Juliette Blevins. 2009. Analogical morphophonology. In Kristin Hanson & Sharon Inkelas (eds.) The nature of the word: Essays in honor of Paul Kiparsky, 527–545. Cambridge, MA: MIT Press.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Grammont, Maurice. 1905–6. La métathèse dans le parler de Bagnères-de-Luchon. Mémoires de la société de linguistique de Paris 13. 73–90.
Grammont, Maurice. 1933. Traité de phonétique. Paris: Delagrave.
Haas, Wim G. de. 1988. Phonological implications of skeleton and feature underspecification in Kasem. Phonology 5. 237–254.
Hale, Kenneth. 1976. Phonological developments in particular Northern Paman languages. In Peter Sutton (ed.) Languages of Cape York, 7–40. Canberra: Australian Institute of Aboriginal Studies.
Hale, Mark & Madelyn Kissock. 1998. The phonology–syntax interface in Rotuman. In Matthew Pearson (ed.) Recent papers in Austronesian linguistics, 115–128. Los Angeles: Department of Linguistics, University of California, Los Angeles.
Halle, Morris. 2001. Infixation versus onset metathesis in Tagalog, Chamorro, and Toba Batak. In Michael Kenstowicz (ed.) Ken Hale: A life in language, 153–168. Cambridge, MA: MIT Press.
Halle, Morris & Valdis Zeps. 1966. A survey of Latvian morphophonemics. MIT Research Laboratory of Electronics Quarterly Progress Report 83. 104–113.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hock, Hans Henrich. 1985. Regular metathesis. Linguistics 23. 529–546.
Holscher, Dan, Monica Macaulay & Marnie Jo Petray. 1992. Tone metathesis in the Dangme imperative. Proceedings of the Annual Meeting, Berkeley Linguistics Society 17. 20–33.
Horowitz, Edward. 1960. How the Hebrew language grew. New York: Jewish Educational Committee Press.
Horwood, Graham. 2002. Precedence faithfulness governs morpheme position. Proceedings of the West Coast Conference on Formal Linguistics 21. 166–179.
Horwood, Graham. 2004. Order without chaos: Relational faithfulness and position of exponence in Optimality Theory. Ph.D. dissertation, Rutgers University.
Hudson, Grover. 1980. Automatic alternations in nontransformational phonology. Language 56. 94–125.
Huehnergard, John. 2005. A grammar of Akkadian. 2nd edn. Winona Lake: Eisenbrauns.
Hume, Elizabeth. 1998. Metathesis in phonological theory: The case of Leti. Lingua 104. 147–186.
Hume, Elizabeth. 1999. The role of perceptibility in consonant/consonant metathesis. Proceedings of the West Coast Conference on Formal Linguistics 17. 293–307.
Hume, Elizabeth. 2001. Metathesis: Formal and functional considerations. In Hume et al. (2001), 1–25.


Hume, Elizabeth. 2004. The indeterminacy/attestation model of metathesis. Language 80. 203–237.
Hume, Elizabeth & Jeff Mielke. 2001. Consequences of word recognition for metathesis. In Hume et al. (2001), 135–158.
Hume, Elizabeth & Misun Seo. 2004. Metathesis in Faroese and Lithuanian: From speech perception to Optimality Theory. Nordic Journal of Linguistics 27. 35–60.
Hume, Elizabeth, Norval Smith & Jeroen van de Weijer (eds.) 2001. Surface syllable structure and segment sequencing. Leiden: Holland Institute of Generative Linguistics.
Hyman, Larry M. 1985. Word domains and downstep in Bamileke-Dschang. Phonology Yearbook 2. 47–83.
Hyman, Larry M. 1995. Minimality and the prosodic morphology of Cibemba imbrication. Journal of African Languages and Linguistics 16. 3–39.
Hyman, Larry M. & William R. Leben. 2000. Suprasegmental processes. In Booij et al. (2000), 587–594.
Jakobi, Angelika. 1990. A Fur grammar: Phonology, morphophonology and morphology. Hamburg: Buske.
Jespersen, Otto. 1949. A Modern English grammar on historical principles. Part I: Sounds and spellings. London: George Allen & Unwin.
Keyser, Samuel J. 1975. Metathesis and Old English phonology. Linguistic Inquiry 6. 377–411.
Kilani-Schoch, Marianne & Wolfgang Dressler. 1986. Métathèse et conversion morphologiques en arabe tunisien. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung 39. 61–75.
Kiparsky, Paul & Wayne O’Neil. 1976. The phonology of Old English inflections. Linguistic Inquiry 7. 527–557.
Klein, Thomas B. 2005. Infixation and segmental constraint effects: UM and IN in Tagalog, Chamorro, and Toba Batak. Lingua 115. 959–995.
Kurisu, Kazutaka. 2001. The phonology of morpheme realization. Ph.D. dissertation, University of California, Santa Cruz.
Ladefoged, Peter. 2005. Vowels and consonants: An introduction to the sounds of languages. 2nd edn. Malden, MA & Oxford: Blackwell.
Langdon, Margaret. 1976. Metathesis in Yuman languages. Language 52. 866–883.
Lockwood, W. B. 1955. An introduction to modern Faroese. Copenhagen: Munksgaard.
Łubowicz, Anna. 2009. Infixation as morpheme absorption. In Steve Parker (ed.) Phonological argumentation: Essays on evidence and motivation, 261–284. London: Equinox.
MacKay, Donald G. 1970. Spoonerisms: The structure of errors in the serial order of speech. Neuropsychologia 8. 323–350.
Malone, Joseph L. 1971. Systematic metathesis in Mandaic. Language 47. 394–415.
Malone, Joseph L. 1993. Tiberian Hebrew phonology. Winona Lake, IN: Eisenbrauns.
Mattina, Anthony. 1979. Pharyngeal movement in Colville and related phenomena in the interior Salishan languages. International Journal of American Linguistics 45. 17–24.
McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12. 373–418.
McCarthy, John J. 1989. Linear order in phonological representation. Linguistic Inquiry 20. 71–99.
McCarthy, John J. 2000. The prosody of phase in Rotuman. Natural Language and Linguistic Theory 18. 147–197.
McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox.
McCarthy, John J. & Alan Prince. 1990. Foot and word in prosodic morphology: The Arabic broken plural. Natural Language and Linguistic Theory 8. 209–283.


McCarthy, John J. & Alan Prince. 1993. Generalized alignment. Yearbook of Morphology 1993. 79–153.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
Montler, Timothy. 1989. Infixation, reduplication, and metathesis in the Saanich actual aspect. Southwest Journal of Linguistics 9. 92–107.
Noonan, Michael. 1997. Inverted roots in Salish. International Journal of American Linguistics 63. 475–515.
Okrand, Marc. 1979. Metathesis in Costanoan grammar. International Journal of American Linguistics 45. 123–130.
Orgun, Cemil Orhan & Ronald L. Sprouse. 1999. From MParse to Control: Deriving ungrammaticality. Phonology 16. 191–224.
Osthoff, Hermann & Karl Brugmann. 1878. Morphologische Untersuchungen auf dem Gebiete der indogermanischen Sprachen. Leipzig: Hirzel.
Penny, Ralph J. 2002. A history of the Spanish language. 2nd edn. Cambridge: Cambridge University Press.
Phelps, Elaine. 1975. Simplicity criteria in generative phonology: Kasem nominals. Linguistic Analysis 4. 297–332.
Phelps, Elaine. 1979. Abstractness and rule ordering in Kasem: A refutation of Halle’s maximizing principle. Linguistic Analysis 5. 29–69.
Pierret, Jean-Marie. 1994. Phonétique historique du français et notions de phonétique générale. Louvain-la-Neuve: Peeters.
Prince, Alan & Paul Smolensky. 2004. Optimality Theory: Constraint interaction in generative grammar. Malden, MA & Oxford: Blackwell.
Prunet, Jean-François. 2006. External evidence and the Semitic root. Morphology 16. 41–67.
Prunet, Jean-François, Renée Béland & Ali Idrissi. 2000. The mental representation of Semitic words. Linguistic Inquiry 31. 609–648.
Pulleyblank, Douglas. 1986. Tone in Lexical Phonology. Dordrecht: Reidel.
Rohlfs, Gerhard. 1930. Etymologisches Wörterbuch der unteritalienischen Gräzität. Halle, Saale: Niemeyer.
Rubin, Dominic. 2001. Place-licensing: Opposite orderings are cognitive, not phonetic. In Hume et al. (2001), 189–209.
Sagey, Elizabeth. 1986. The representation of features and relations in nonlinear phonology. Ph.D. dissertation, MIT.
Sandler, Wendy. 1993. A sonority cycle in American Sign Language. Phonology 10. 243–279.
Schachter, Paul & Fe Otanes. 1972. Tagalog reference grammar. Berkeley: University of California Press.
Seo, Misun & Elizabeth Hume. 2001. A comparative account of metathesis in Faroese and Lithuanian. In Hume et al. (2001), 210–229.
Silva, Clare M. 1973. Metathesis of obstruent clusters. OSU Working Papers in Linguistics 14. 77–84.
Silverman, Daniel. 2006. A critical introduction to phonology: Of sound, mind, and body. London & New York: Continuum.
Sim, Ronald J. 1981. Morphophonemics of the verb in Rendille. Afroasiatic Linguistics 8. 1–33.
Smith, Norval. 1985. Spreading, reduplication and the default option in Miwok nonconcatenative morphology. In Harry van der Hulst & Norval Smith (eds.) Advances in nonlinear phonology, 363–380. Dordrecht: Foris.
Sohn, Ho-min. 1980. Metathesis in Kwara’ae. Lingua 52. 305–323.
Steriade, Donca. 1990. Gestures and autosegments: Comments on Browman and Goldstein’s paper. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 382–397. Cambridge: Cambridge University Press.


Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual account. In Elizabeth Hume & Keith Johnson (eds.) The role of speech perception in phonology, 219–250. San Diego: Academic Press.
Stonham, John T. 1994. Combinatorial morphology. Amsterdam & Philadelphia: John Benjamins.
Thompson, Laurence C. & M. Terry Thompson. 1969. Metathesis as a grammatical device. International Journal of American Linguistics 35. 213–219.
Ultan, Russell. 1975. Infixes and their origins. In Hansjakob Seiler (ed.) Linguistic workshop III, 157–205. Munich: Fink.
Ultan, Russell. 1978. A typological view of metathesis. In Joseph H. Greenberg, Charles A. Ferguson & Edith A. Moravcsik (eds.) Universals of human language, vol. 2: Phonology, 367–402. Stanford: Stanford University Press.
Ussishkin, Adam. 2005. A fixed prosodic theory of nonconcatenative templatic morphology. Natural Language and Linguistic Theory 23. 169–218.
Vennemann, Theo. 1988. Preference laws for syllable structure and the explanation of sound change: With special reference to German, Germanic, Italian, and Latin. Berlin: Mouton de Gruyter.
Wanner, Dieter. 1989. On metathesis in diachrony. Papers from the Annual Regional Meeting, Chicago Linguistic Society 25. 434–450.
Webb, Charlotte. 1974. Metathesis. Ph.D. dissertation, University of Texas, Austin.
Wechssler, E. 1900. Giebt es Lautgesetze? Forschungen zur romanischen Philologie: Festgabe für Hermann Suchier, 349–538. Halle: Niemeyer.
Wells, Rulon. 1951. Predicting slips of the tongue. Yale Scientific Magazine, December, 9–30.
Williams, Edwin B. 1962. From Latin to Portuguese. 2nd edn. Philadelphia: University of Pennsylvania Press.
Winters, Steve. 2001. VCCV perception: Putting place in its place. In Hume et al. (2001), 230–247.
Wonderly, William L. 1951. Zoque II: Phonemes and morphophonemes. International Journal of American Linguistics 17. 105–123.
Yip, Moira. 2002. Tone. Cambridge: Cambridge University Press.
Yu, Alan C. L. 2007. A natural history of infixation. Oxford: Oxford University Press.
Zoll, Cheryl. 2001. Constraint and representations in subsegmental phonology. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 46–78. Cambridge: Cambridge University Press.

60 Dissimilation

Patrik Bye

1 Introduction

Dissimilation prototypically refers to a situation in which a segment becomes less similar to a nearby segment with respect to a given feature. As a synchronic alternation, it can be exemplified by liquid dissimilation in Georgian, where the ethnonym-forming suffix {-uri} becomes [uli] when an /r/ precedes it anywhere within the word (Fallon 1993; Odden 1994). The resulting pattern of alternation is shown in (1). (1)

Georgian r-dissimilation

a.  p’olon-uri    ‘Polish’
    somx-uri     ‘Armenian’
b.  sur-uli      ‘Assyrian’
    p’rusi-uli   ‘Prussian’
c.  avst’ral-uri  ‘Australian’
    kartl-uri     ‘Kartvelian’
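A minimal sketch of the generalization behind (1) follows; it is my own illustration, not Fallon’s or Odden’s analysis. The suffix dissimilates when an /r/ precedes it anywhere in the word, unless a lateral intervenes, as in (1c), discussed just below.

```python
def ethnonym(stem):
    """Attach {-uri}, applying unbounded r-dissimilation to [uli]."""
    last_r = stem.rfind("r")
    # dissimilate only if an /r/ precedes and no /l/ intervenes
    dissimilate = last_r != -1 and "l" not in stem[last_r + 1:]
    return stem + ("-uli" if dissimilate else "-uri")

assert ethnonym("p’olon") == "p’olon-uri"      # no trigger
assert ethnonym("p’rusi") == "p’rusi-uli"      # unbounded dissimilation
assert ethnonym("avst’ral") == "avst’ral-uri"  # blocked by intervening /l/
```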

In (1a), the suffix surfaces in its basic (non-dissimilated) form. The forms in (1b) illustrate the result of unbounded dissimilation within the word. In the word meaning ‘Prussian’ it takes place despite the presence of the intervening consonant. If a lateral /l/ intervenes between the two rhotics, however, dissimilation does not apply. This is shown in (1c).

A very similar example of liquid dissimilation comes from Latin (e.g. Kent 1936, 1945; Steriade 1987), where the alternation is reversed. The adjectival suffix -alis, as in navalis ‘naval’, dissimilates to -aris whenever another /l/ precedes it in the word, e.g. lūnāris ‘lunar’. Dissimilation is similarly blocked whenever /r/ intervenes between the trigger and the target, e.g. flōrālis ‘floral’, *flōrāris.

As a diachronic change, dissimilation is most often sporadic, applying to random lexical items (Posner 1961). The historical development of Latin and the Romance languages furnishes several examples of sporadic liquid dissimilation, e.g. Latin arbor > Spanish arbol ‘tree’, peregrīnus > Late Latin pelegrīnus ‘pilgrim’. Regular synchronic alternations involving dissimilatory processes are far more rare and,


as a result, dissimilation has been afforded somewhat less systematic attention than other more common segmental patterns like assimilation. Nevertheless, the study of dissimilation phenomena offers a valuable source of insights into the fundamental questions phonologists ask. These questions include (a) the nature of rules and representations, and the relation between the two, (b) the division of labor between the grammar and the lexicon, and (c) whether phonological patterns reflect possibly innate cognitive biases or extralinguistic factors operating during acquisition. The organization of the remainder of this chapter is as follows. §2 sets out the major parameters of dissimilation, explaining which features participate, along with restrictions on the interaction of context and focus determined by locality and domain of application. §3 addresses the contribution that the study of dissimilation phenomena has made to phonological theory, assessing how it has shaped our understanding of both representations and rules. §4 provides an overview of the motivations for dissimilatory patterns proposed in the literature. Conclusions and questions for future research are given in §5.

2 Dissimilatory patterns and their parameters

2.1 Participating features Suzuki (1998) presents a comprehensive survey of cross-linguistic dissimilatory patterns. His survey includes 39 dissimilatory alternations. Table 60.1 provides a somewhat revised summary with a total of 46 alternations, adducing a few additional cases not covered in Suzuki’s original survey, and suppressing cases that on closer inspection turn out not to be true dissimilation.1 The second column of the table specifies the locality condition on the process (for illustration

1

For example, Suzuki’s cases 54 –57 are grouped under “polarity”, but on closer inspection they appear to have little in common. The reasons for reclassifying or not including these cases here are, briefly, as follows. In Russian jakan′e (54) a pre-tonic non-high vowel reduces to [i] or [a], depending on the quality of the following stressed vowel. The high or low quality of the reduced vowel gives the impression of maximizing the contrast in vowel height, e.g. /s j e’mj ju/ → [s j a’m j ju] ‘seven (inst)’, but /dj i’sjatka/ → /d j i’sjatka/ ‘tenfold’. Crosswhite (1999: 79–83), however, argues that the dissimilatory effect is only epiphenomenal, and actually has nothing to do with dissimilation or enhancement of vowel height contrast. Based on work by Alderete (1995), she argues that what is at issue is actually a difference in foot structure. Syllables with prominent nuclei may constitute feet on their own. In [dji(’s j a[[t)ka], the stressed syllable forms a foot on its own, whereas in [(s j a’m j ju)] the pre-tonic syllable must also be incorporated because the stressed nucleus is not sufficiently prominent. The choice of raising or lowering thus comes to depend on whether the focus is parsed into a foot or not. Dinka (55) represents a morphological exchange rule, which is highly controversial in linguistic theory. See Wolf (2007) for an alternative analysis of exchange rules in terms of featural affix allomorphs. Huamelultec Chontal (56) appears to be a case of [spread glottis] dissimilation, not polarity. Margi (57) represents a case of allomorphy. Also not included is Thurneysen’s Law in Gothic, which recent research by Woodhouse (1998) shows to be a case of analogical relexicalization rather than a phonological rule. Suzuki also includes Finnish consonant gradation (Keyser and Kiparsky 1984; Alderete 1997), but it is excluded here, since it is neither synchronically properly phonological nor obviously a dissimilation rule. One language cited as evincing low vowel dissimilation is the Chadic language Kera (Ebert 1974; Kenstowicz and Kisseberth 1979), where /a/ is claimed to dissimilate to [H] preceding another /a/. Recent work by Pearce (2008), however, shows that the effect is due to reduction in unstressed syllables.


Table 60.1 Dissimilatory alternations (Rt = root adjacency; σ = syllable adjacency; unbounded = unbounded within the word; P = progressive; R = regressive)

Feature | Locality | Example
labial | Rt | Tashlhiyt Berber (P: Elmedlaoui 1985, 1995; Jebbour 1985; Boukous 1987; Larsi 1991; Odden 1994)
labial | σ | Cantonese (P: Yip 1982, 1988), Palauan (R: Josephs 1975, 1990; Finer 1986)
labial | unbounded | Akkadian (R: Soden 1969; McCarthy 1979; Yip 1988; Hume 1992; Odden 1994), Tashlhiyt Berber (2×R; see references above), Palauan (R; see references above)
coronal | Rt | Dakota (R: Shaw 1976, 1985)
lateral | unbounded | Kuman (R: Trefry 1969; Walsh Dickey 1997), Latin (P: Kent 1936, 1945; Posner 1961; Johnson 1973; Steriade 1987, 1995; Odden 1994; Walsh Dickey 1997), Yidiɲ (R: Dixon 1977; Steriade 1995; Walsh Dickey 1997), Yimas (P: Foley 1991; Odden 1994; Walsh Dickey 1997)
rhotic | Rt | Ainu (R: Maddieson 1984; Shibatani 1990)
rhotic | unbounded | Georgian (P: Fallon 1993; Odden 1994), Modern Greek (R: Newton 1971; Walsh Dickey 1997), Sundanese (R: Cohn 1992; Holton 1995), Yindjibarndi (P: Wordick 1982)
voice | σ | Bantu (R: Bennett 1967; Davy & Nurse 1982; Odden 1994; Lombardi 1995), Japanese (Itô & Mester 1986; Alderete 1997)
spread glottis | σ | Ancient Greek (R: Grassmann 1863), Huamelultec Chontal (P: Waterhouse 1949, 1962; Kenstowicz & Kisseberth 1979), Inari Saami (P: Itkonen 1986–91)
constricted glottis | σ | Seri (R: Marlett & Stemberger 1983; Yip 1988), Cuzco Quechua (R: Parker 1997)
nasal | Rt | Chukchi (R: Odden 1987)
NC | σ | Gooniyandi (P: McGregor 1990; Odden 1994; Evans 1995)
NC | unbounded | Gurindji (P: McConvell 1988; Odden 1994; Evans 1995), Yindjibarndi (P: Wordick 1982; Odden 1994)
continuant | Rt | Modern Greek (P: Kaisse 1988), Northern Greek (R: Newton 1971), Tsou (P: Szakos 1994), Osage (P: Quintero 2004), North Central Spanish (R: González 2008)
high | σ | Guere (R: Paradis & Prunet 1989)
high | Rt | Arusa (R: Levergood 1987), Wintu (Pitkin 1984)
low | σ | Marshallese (R: Bender 1968, 1969; Kenstowicz & Kisseberth 1977), Woleaian (R: Sohn 1975; Sohn & Tawerilmang 1976; Poser 1982)
length | σ | Gidabal (P: Geytenbeek & Geytenbeek 1971), Latin (R: Leumann 1977; Sihler 1995), Oromo (P: Gragg 1976; Lloret 1988; Alderete 1997), Slovak (P: Kenstowicz & Kisseberth 1977, 1979; Rubach 1993)
H (high tone) | Rt | Bantu (P: Goldsmith 1984; Odden 1994)
H (high tone) | σ | Bantu (P: Goldsmith 1984; Odden 1994)
H (high tone) | unbounded | Arusa (P: Levergood 1987; Odden 1994)
L (low tone) | unbounded | Peñoles Mixtec (P: Daly 1993; Odden 1994)


Also indicated is the direction of dissimilation and, in case the language has more than one, the number of dissimilative patterns. The numbers of progressive and regressive dissimilations are more or less evenly split, with 21 and 24 cases respectively.

Major class features such as [consonantal], [sonorant], and [approximant] do not appear to participate in dissimilation. The existence of some of these features is indeed contested in the literature. While there is something of a consensus that [sonorant] is necessary,2 Hume and Odden (1996) propose that [consonantal] may be dispensed with. There is also a widespread assumption that the feature [approximant] is not contrastive in language, although see Levi (2008) for evidence to the contrary. Beyond the major class features, all classes of feature may be involved in dissimilation, including place of articulation, laryngeal state, manner (continuancy, liquid, nasality), vowel height, and suprasegmental properties such as length and tone. We shall illustrate some of these in this section; other alternations that raise particular theoretical issues will be illustrated in the relevant sections. Thus, see §3 for examples of nasal dissimilation, and §4.3 for continuant dissimilation.

Of the place of articulation features, only [labial] dissimilation is common. Labial dissimilation is illustrated in (2) with data from Tashlhiyt Berber (Odden 1994). The labial nasal /m/ in a prefix dissimilates to [n] if the stem contains a labial consonant anywhere within it.

(2) Labial dissimilation in Tashlhiyt Berber
   a. las     'shear'            am-las     'shearer'
      agur    'remain'           am-agur    'abandoned'
   b. #rmi    'be tired'         an-#rmi    'tired person'
      bur     'remain celibate'  an-bur     'bachelor'
      #azum   'fast'             an-#azum   'faster'

Dissimilation of [coronal] is only attested in a single case, Dakota (Shaw 1976, 1985). Underlying coronal non-continuants /t č n d/ are all neutralized to [k] (or, with regressive voicing assimilation, [g]) before another coronal consonant. The examples in (3) are from Shaw (1985: 184); see also Shaw (1976: 337).

(3) Coronal dissimilation in Dakota reduplication
   a. /ček/    ček-ˈčeka      'stagger'
      /čap/    čap-ˈčapa      'trot'
      /t'əs/   t'əs-ˈt'əza    'draw tight'
      /kʰuš/   kʰuš-ˈkʰuža    'lazy'
   b. /sut/    suk-ˈsuta      'strong'
      /žat/    žag-ˈžata      'curved'
      /tʰeč/   tʰek-ˈtʰeča    'be new'
      /čʰeč/   čʰek-ˈčʰeča    'to look like'
      /nin/    nig-ˈnina      'very'

2 This consensus naturally does not extend to those representational theories like Government Phonology, where the elements of representation must have autonomous interpretations (see especially Kaye et al. 1985). Obviously, [sonorant] has no phonetic interpretation independent of the place and manner features with which it is associated.


There are apparently no attested examples of dissimilation involving the feature [dorsal]. Alternations involving laterals and rhotics are relatively common. We shall provide examples of liquid dissimilation in §2.2 in connection with the discussion of locality parameters.

Several Australian languages show dissimilation of prenasalized stops or nasal + stop clusters (NC). In Gurindji (Pama-Nyungan, Northern Territory; McConvell 1988; Odden 1994), this process is unbounded, e.g. /lutcu-ŋka/ 'ridge-loc' (no dissimilation), /pinka-ŋka/ → [pinka-ka] 'river-loc', /kankula-mpa/ → [kankula-pa] 'high ground-loc'.

Several languages have restrictions on consecutive heavy nuclei that do not appear reducible to prosodic structure. In Slovak, for example, a long nucleus becomes short following a long nucleus, according to a rule known as the Rhythmic Law (Rubach 1993: 172–175). Thus, the suffixes {-aː} (fem sg) and {-eːmu} (dat sg) shorten their first vowel following a long vowel, e.g. [mal-aː] 'small-fem sg' vs. [mlːkv-a] 'silent-fem sg'; [mal-eːmu] 'small-masc dat sg' vs. [mlːkv-emu] 'silent-masc dat sg'. The alternation is apparently unrelated to stress (Rubach 1993: 41–42). At least in Western Slovak, the main stress falls on the initial syllable of the word, and sources report a binary stress pattern, some with the possibility of ternary alternation. The Rhythmic Law nevertheless applies in odd-numbered syllables, where we would expect resumption of secondary stress on a binary alternating pattern. This is shown by derivations with the agentive suffix {-niːk} and the diminutive {-iːk}, e.g. [hutniːk] 'steelworker' vs. [čaluːnnik] 'wallpaperer', [xlebiːk] 'bread' vs. [džbaːnik] 'pot'.

Several languages of Vanuatu have productive Low Vowel Dissimilation (Lynch 2003). In Maskelynes (Malayo-Polynesian), the nominalizer prefix is realized as [nə-] when the following vowel is low /e a o/, and [na-] when the following vowel is high /i ə u/.

(4) a. na-vis     'banana'
       na-xəmar   'men's house'
       na-xut     'louse'
    b. nə-matu    'right (hand)'
       nə-gor     'green coconut'

Dissimilation is occasionally also used to refer to the deletion of one of a pair of similar neighboring sounds. Hall (2009), for example, describes this phenomenon with reference to /r/ in American English, in principle giving alternations like [fɑɹm] farm vs. [fɑmɚ] farmer, and [istɚn] eastern vs. [istənɚ] easterner.

All the cases we have looked at so far involve the elimination of sequences of similar sounds. Preventive dissimilation is when the creation of new sequences of similar sounds is blocked. One example is provided by Inari Saami (Itkonen 1986–91), which has a morphologically conditioned process of consonant gradation. An overlong obstruent in the "strong" grade generally alternates with the corresponding singleton in the "weak" grade, as shown in (5). In each of the examples below, the strong grade form on the left represents the nominative singular, the weak grade form on the right the accusative-genitive singular. Examples have been adapted from Finno-Ugric transcription into IPA, according to the conventions set out in Bye et al. (2009) ([ɐ] is a somewhat low central vowel; [ə̆] is "ultrashort").


(5) Inari Saami consonant gradation (obstruents)
   tsuopːpʰə̆   tsuopʰɐ   'meat of fish'
   fɑtːtə̆      fɑɑtɐ     'yard'
   ɲešːši      ɲeeši     'mud, slush'

Under normal circumstances, the overlong aspirated velar stop /kːkʰ/ alternates with /h/, as shown in (6). This may be taken to reflect a general process that debuccalizes /kʰ/, leaving bare [h].

(6) Inari Saami consonant gradation: Debuccalization of aspirated velar stop
   kakːkʰu    kaahu, *kaakʰu    'unleavened rye-bread'
   čokːkʰi    čohii, *čokʰii    'peak, summit'
   kɑlːkkʰə̆   kɑɑlhɐ, *kɑɑlkʰɐ  'chalk'

However, there is one situation where debuccalization to [h] fails to take place. As (7) shows, this is when the onset of the preceding syllable is also /h/.

(7) Inari Saami consonant gradation: Debuccalization blocked
   hikːkʰi    hiikʰi, *hiihi    'hay-basket'
   hɑkːkʰə̆    hɑɑkʰɐ, *hɑɑhɐ    'cannon'
   hulːkkʰə̆   huulkʰɐ, *huulhɐ  'knife-sheath'
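The interaction of gradation, debuccalization, and blocking in (5)–(7) can be sketched as a toy rule system. The following is my own simplification: it tracks only the consonantism, ignores the concomitant vowel-length alternations, and uses the word-initial consonant to stand in for the onset of the preceding syllable.

    def weak_grade(stem: str) -> str:
        # Ordinary gradation: the overlong aspirated velar kkʰ would
        # debuccalize to h in the weak grade ...
        if "kkʰ" not in stem:
            return stem
        # ... but debuccalization is blocked (preventive dissimilation)
        # when the preceding onset is already h; plain degemination to
        # kʰ applies instead.
        reflex = "kʰ" if stem.startswith("h") else "h"
        return stem.replace("kkʰ", reflex, 1)

    print(weak_grade("kakkʰu"))  # kahu (cf. kaahu; vowel length omitted)
    print(weak_grade("hikkʰi"))  # hikʰi: *hiihi is avoided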

2.2 Locality and domains

Dissimilation may be associated with one of three locality conditions, listed in (8). This parameter was first addressed in detail by Odden (1994). Suzuki's (1998) survey largely confirms this picture.

(8) Locality conditions
   a. Root adjacency
   b. Syllable adjacency
   c. Unbounded

Liquid dissimilation may be used to exemplify all three locality conditions. Ainu, a language isolate of Japan, illustrates the root-adjacency condition (Shibatani 1990: 13). Given an underlying cluster /rr/, the first /r/ dissimilates to [n], as shown in (9).

(9) Ainu r-dissimilation
   kukor kur  'my husband'    kukon rusuj   'I want to have (something)'
   kor mat    'his wife'      kon rametok   'his bravery'

Yimas (Foley 1991: 54), a Sepik-Ramu language of Papua New Guinea, illustrates liquid dissimilation operating under syllable adjacency. An /r/ dissimilates to [t] if there is an /r/ in the immediately preceding syllable. The examples in (10) show variation in the shape of the inchoative suffix {-ara} (Foley 1991: 290).

(10) Yimas r-dissimilation
   tuak-ara-     'break open'
   kamprak-ara-  'snap'
   apr-ata-      'open, spread'

Dissimilation may also be unbounded within the word, as we have already seen in the Georgian example in (1) with which we opened this chapter.

3 Dissimilation in the grammar

One of the fundamental issues in generative phonology has always been whether linguistically significant generalizations should be assigned to particular designated levels of representation, such as the underlying or surface level, or to the rules that map one representation onto another (McCarthy 2007; Bye, forthcoming). In earlier approaches in the style of SPE (Chomsky and Halle 1968), dissimilations were described in terms of feature-changing rules of the general form shown in (11).

(11) X → [−F] / __ [+F]

Rules of this kind were criticized because the pairing of structural change and environment was arbitrary. Because of this, they were unable to distinguish between natural assimilations like (12a) and arbitrary rules like (12b) (example from Odden 1987).

(12) a. [+consonantal] → [+voice] / __ [+voice]
     b. [+consonantal] → [+voice] / __ [+continuant]
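The force of this criticism is easy to demonstrate. In the following sketch (my own illustration, not Odden's formalism), segments are modeled as feature dictionaries, and one generic rule applier states the natural rule (12a) and the arbitrary rule (12b) at exactly the same formal cost.

    def apply_rule(segments, target, change, context):
        # Feature-changing rule of the SPE schema A -> B / __ C: a segment
        # matching `target` acquires the features in `change` when the
        # immediately following segment matches `context`.
        out = [dict(seg) for seg in segments]
        for i in range(len(out) - 1):
            if all(out[i].get(f) == v for f, v in target.items()) and \
               all(out[i + 1].get(f) == v for f, v in context.items()):
                out[i].update(change)
        return out

    # (12a) [+cons] -> [+voice] / __ [+voice]       (natural assimilation)
    # (12b) [+cons] -> [+voice] / __ [+continuant]  (arbitrary)
    rule_a = ({"cons": True}, {"voice": True}, {"voice": True})
    rule_b = ({"cons": True}, {"voice": True}, {"continuant": True})

    word = [{"cons": True, "voice": False, "continuant": False},  # /t/
            {"cons": True, "voice": True, "continuant": True}]    # /z/
    print(apply_rule(word, *rule_a))  # /t/ voices before voiced /z/
    print(apply_rule(word, *rule_b))  # same output, arbitrary conditioning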

Concerns about the generative power of feature-changing rules thus motivated the development in the mid-1970s and 1980s of non-linear approaches to phonological representation (Goldsmith 1976) that permitted greater elegance and simplicity in the statement of natural rules. Assimilation rules were remodeled as feature-filling spreading (see chapter 81: local assimilation and chapter 77: long-distance assimilation of consonants). In non-linear terms, dissimilation is simply the deletion or delinking of a feature and, in accounts that retain a view of features as binary, independently motivated insertion of a default value (Odden 1987, 1994; chapter 27: the organization of features). For example, Chukchi has a process changing underlying /ŋ/ to [ɣ] before another nasal, shown in (13).

(13) taraŋ-ək    'build a dwelling'    nə-taraɣ-mori   'we built a dwelling'
     inawrəŋ-ək  'to give as a gift'   inawrəɣ-nin     'he gave it'
     pitʔiŋ      'cold'                pitʔiɣ-ŋinqij   'boy with a cold'

Odden (1987: 242) provides an analysis of this alternation as delinking of [+nasal] before another [+nasal], as shown in (14). Subsequently, redundancy rules fill in the feature [−nasal] by default.

(14) /inawrəŋ-nin/, with two adjacent [+nasal] specifications
     → denasalization: delinking of the first [+nasal]
     → default: insertion of [−nasal]
     → [inawrəɣ-nin], with [−nasal] [+nasal]
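A toy rendering of this two-step analysis may be helpful. The representation below is my own deliberately minimal one: each root node carries a nasal specification, which delinking leaves blank until the default rule applies.

    def denasalize(segs):
        # Delinking: remove [+nasal] from a segment standing immediately
        # before another [+nasal] segment.
        out = [dict(s) for s in segs]
        for i in range(len(out) - 1):
            if out[i]["nasal"] and out[i + 1]["nasal"]:
                out[i]["nasal"] = None  # delinked, not yet respecified
        return out

    def fill_default(segs):
        # Redundancy rule: unspecified nasality is filled in as [-nasal].
        for s in segs:
            if s["nasal"] is None:
                s["nasal"] = False
        return segs

    def spell_out(s):
        # ŋ with default [-nasal] surfaces as its oral congener ɣ.
        return "ɣ" if s["seg"] == "ŋ" and not s["nasal"] else s["seg"]

    word = [{"seg": "ŋ", "nasal": True}, {"seg": "n", "nasal": True}]
    word = fill_default(denasalize(word))
    print("".join(spell_out(s) for s in word))  # ɣn, as in inawrəɣ-nin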

Accompanying the development of non-linear representations was a return to the idea that at least certain phonological generalizations are best stated as constraints on surface forms. This conception made it feasible to explain how it was possible that certain rules seemed to share the same functional teleology (the "conspiracy" problem; see Kisseberth 1970 and chapter 70: conspiracies). Such constraints could trigger the application of repairs, such as the deletion of the first of the two [+nasal] features, or block the application of rules that would otherwise apply (see the examples of preventive dissimilation in §2). The first such constraint on output representations was the Obligatory Contour Principle (Leben 1973; Goldsmith 1976; McCarthy 1979, 1981, 1986, 1988; Odden 1988; Yip 1988, 1989), one formulation of which is provided in (15).

(15) Obligatory Contour Principle (OCP) (McCarthy 1986, 1988)
     At the melodic level, adjacent identical elements are prohibited.

The OCP was originally used in accounting for tonal phenomena, especially adjacency of high tones, but it was subsequently extended to include other features. The OCP specifies a negative output target, and dissimilation represents only one strategy for satisfying it. Other repair strategies include merger of adjacent identical nodes, blocking of syncope (McCarthy 1986), and the insertion of epenthetic segments (Yip 1988). The OCP was incorporated into work couched in the framework of Optimality Theory (OT: McCarthy and Prince 1993; Prince and Smolensky 1993), where it became a violable constraint. Alderete (1997) and Itô and Mester (2003) propose that the OCP may represent a local self-conjunction of more primitive markedness constraints (Smolensky 1995, 1997): OCP[F] is violated precisely when *[F] is violated more than once within some local domain.

Another major theoretical concern, during the 1980s especially, was locality conditions on the application of rules, and dissimilation played a major part in this debate. The autosegmentalization of representations into tiers permitted the elimination of many kinds of apparent long-distance effects. Sounds that are non-adjacent on the level of the segmental root may nevertheless dissimilate, provided that the relevant features are adjacent on the same autosegmental tier. Steriade (1987) argues that the Latin facts mentioned at the beginning of this chapter may be accounted for by a version of the OCP with jurisdiction over the [lateral] tier, over which interactions between tier-adjacent liquids may be described. The diagrams in (16) show how the liquids in the words lūnāris and flōrālis are projected onto a separate tier specifying values of [lateral]. This allows us to explain the ungrammaticality of the counterfactual form *lūnālis as a result of the two occurrences of [+lateral] being adjacent on the [lateral] tier, in violation of OCP[lateral]. In flōrālis, on the other hand, there is an intervening [−lateral] between the two occurrences of [+lateral], so the OCP is not violated.

(16) Lateral dissimilation
     lun-aris     [+lat] [−lat]           (OCP obeyed)
     *lun-alis    [+lat] [+lat]           (OCP violated)
     flor-alis    [+lat] [−lat] [+lat]    (OCP obeyed)
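The tier projection in (16) lends itself to a compact sketch. The following is my own illustration of the mechanics rather than Steriade's formalism; only the liquids of each string are specified for [lateral].

    LATERAL = {"l": True, "r": False}  # [lateral] values of the Latin liquids

    def lateral_tier(word):
        # Project the [lateral] tier: only liquids carry a specification.
        return [LATERAL[c] for c in word if c in LATERAL]

    def violates_ocp_lateral(word):
        # OCP[lateral]: adjacent identical [+lateral] specifications on
        # the tier are prohibited.
        tier = lateral_tier(word)
        return any(a and b for a, b in zip(tier, tier[1:]))

    for w in ["lunaris", "lunalis", "floralis"]:
        print(w, violates_ocp_lateral(w))
    # lunaris  False  ([+lat][-lat])
    # lunalis  True   ([+lat][+lat]); hence *lunalis
    # floralis False  ([+lat][-lat][+lat])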

Even with the possibility of factoring the representation into tiers, though, there is still an empirical residue that poses a problem for a strict interpretation of locality. In some theories, vocalic place (V-Place) and consonantal place (C-Place) are represented on separate planes (Clements 1991; Clements and Hume 1995; Morén 2003; see also chapter 19: vowel place and chapter 22: consonantal place of articulation). This organization implies that non-adjacent consonants and vowels should not display any interaction, but this expectation is not borne out. Akkadian (Soden 1969: 64ff.), for example, has a nominalizer prefix {ma-} that dissimilates to {na-} if followed by a labial consonant in the stem, e.g. /ma-šʔal-t-u/ 'question' but /ma-rkab-t/ → [na-rkab-t] 'chariot'. If a labial vowel or glide intervenes between the trigger and the target, however, dissimilation is blocked, e.g. /ma-wmii-t-um/ → [ma-amii-t-um], not *[na-amii-t-um], 'oath'. Odden (1994: 319) argues for an additional adjacency parameter, transplanar locality, to cover these cases, but it is unclear how this is to be formalized.

4 Motivations for dissimilation

There are a number of theories of what causes dissimilation. The purpose of this section is to review the major proposals, as well as some others of more limited applicability. Our point of departure will be Ohala's Co-articulation-Hypercorrection Theory (CHT; Ohala 1981, 1993, 2003), which is presented in §4.1. According to the CHT, dissimilation results when the listener reverses a perceived co-articulation. The central prediction of the CHT is that dissimilation should only occur with features whose cues are significantly extended in time. Other theories assume a processing motivation. Frisch et al. (2004) argue that similarity avoidance effects are due to the difficulties associated with processing the sequencing of similar segments. This bias is reflected in the statistical structure of the lexicon and is described in §4.2. Following on from this, §4.3 considers the possibility that dissimilation in manner between pairs of adjacent fricatives or stops may be understood in terms of the enhancement of place cues. In §4.4 we look at dissimilation-like phenomena in certain kinds of reduplication ("echo" reduplication) and language games, which exploit non-identity for aesthetic, ludic, or secret purposes.

4.1 Dissimilation as listener reversal of co-articulation

The phonetic realization of certain features may extend over long temporal domains. Long-domain features are interesting because they create an ambiguity for the listener faced with the task of reconstructing the feature's place in phonological structure. This ambiguity creates conditions favorable to reanalysis,


which – in the case of temporally extended features – may take one of three forms: assimilation, metathesis, or dissimilation (Blevins 2004). In dissimilation, when one instance of a distinctive feature occurs within the phonetic domain of another instance of the same distinctive feature, there is an ambiguity as to whether the phonetic effects should be ascribed to the first or second instance of the feature, or both. In a series of highly influential publications, Ohala (1981, 1993, 2003) argues that dissimilation as a sound change is the result of reversal by the listener of perceived co-articulation (see chapter 98: speech perception and phonology). The driving force of the change on this view is the overzealous application of reconstructive rules, with the result that long-domain effects that are actually intended by the speaker become reversed. This is known as hypercorrection. The mechanism of Ohala's Co-articulation-Hypercorrection Theory is schematized in (17).

(17) Dissimilation as sound change by the listener (after Ohala 1981: 187)

     Speaker                           Listener
      /yt/                              /ut/
        |                                 ^
        | produced as                     | reconstructed as
        v                                 |
      [yt]  ------- heard as ------->   [yt]

     Listener-turned-Speaker: /ut/, produced as [ut]

In this example, the speaker intends to say [yt], which is also the form that the listener actually hears. However, the listener is in possession of tacit phonetic knowledge that coronal consonants raise the value of F2 on neighboring vowels. Drawing on this knowledge, he concludes that the intended quality of the vowel has been distorted due to its proximity to the coronal consonant. The perceived distortion is then eliminated by reconstructing the intended form as /ut/: the /y/ dissimilates.

There are three important entailments of the CHT. The first follows from the assumption that dissimilation involves co-articulation. Segments are only expected to dissimilate to the extent that they entail overlapping articulations. Many dissimilations involve segments that are not phonologically adjacent on the level of the segmental root node. A well-known example is Grassmann's Law in Indo-European (Grassmann 1863). In Ancient Greek, which provides one instantiation of the sound law, there cannot be more than one aspirated stop in a pair of adjacent syllables (Smyth 1956 [1920]: 31). Thus, earlier /tʰrikʰ-os/ 'hair (gen sg)' became [trikʰos] τριχός (cf. [tʰriks] θρίξ (nom sg)). Grassmann's Law apparently represents an interaction between two non-adjacent consonants. Once we take into account the co-articulatory effect of the aspiration on the following vowel, however, the apparent action-at-a-distance effect evaporates, because the aspiration overlaps phonetically with the dissimilation target. Following release of the stop closure, aspiration persists into the following vowel for 60 msecs or so, presenting the listener with an ambiguity as to whether the aspiration represents post-aspiration of the first stop or pre-aspiration of the second. Segments that are outside of each other's co-articulatory range are not expected to dissimilate according to the CHT.


The second consequence of the CHT is that dissimilation cannot take the form of a quantitative change within the same category. Dissimilation should always be limited to phonologically contrastive features (cf. Grammont 1895; Kiparsky 2003). This follows directly from the assumption that what listeners are doing, when they hypercorrect, is reconstructing what they believe is the intended form, which must be a distinctive segment of the language. Assimilative changes, on the other hand, may give rise to novel structures or segments. The third consequence of the CHT is that it should not matter in which direction the perceived distortion is resolved. The CHT is neutral with respect to whether the dissimilation is progressive or regressive. In (17), an equally valid outcome would have been dissimilation of the consonant, e.g. to /yk/.

The empirical substance of Ohala's proposal consists of the following predictions.

(18) Ohala's predictions
   a. The likelihood that a given consonantal feature participates in dissimilation depends on whether the associated perceptual cues have a short or long domain.
   b. The domain of dissimilation should be linked to the temporal extension of the perceptual cues.
   c. Features whose cues are localized on the segment should not show dissimilatory behavior.

On this basis, an up-to-date list of the features shown to have temporal extension, and therefore likely to dissimilate according to the CHT, is given in Table 60.2, adapted from a corresponding table in a paper on the evolution of metathesis by Blevins and Garrett (2004: 123). Examples are incorporated from the surrounding discussion in their text. Features not likely to dissimilate according to the CHT are fricative, affricate, stop, and voice. The phonetic cues for each of these segment types are localized on the segment itself. For example, stops are cued by high-amplitude bursts on release of the closure. These bursts are very short, of the order of 5 msecs to 10 msecs. The temporal extent of voicing and fricative noise is limited by the extent of the segment's articulation phase. Examples of continuancy and voicing dissimilation nevertheless exist. Examples of continuancy dissimilation are discussed in §4.3, along with a possible phonetic motivation.

When Ohala (1981) initially framed his CHT, the existence of liquid dissimilation appeared to present a problem, since at that time no work had been done on temporally extended cues for liquids. Far from being occasional, liquids are, after labials, the most likely to dissimilate. Moreover, they show pronounced action-at-a-distance, as the Georgian example at the beginning of this chapter shows. This is surprising if it is all down to the formant transitions onto neighboring vowels. Starting with Kelly and Local (1986) and Kelly (1989), however, much research has shown that liquids have temporally extended acoustic-perceptual cues. Tunley (1999) demonstrated experimentally that /l/ causes raising in F2 and F3 on neighboring high vowels, while /r/ results in lowering. These effects are moreover observable up to five syllables away from the lateral segment itself (Hawkins and Smith 2001; see also chapter 30: the representation of rhotics). West (1999b) found that when the liquid and its phonetic context were masked with white noise, listeners were nonetheless able to reconstruct the intended liquid from the resonances in vowels up to three syllable nuclei away.


Table 60.2 Temporally extended features. References to acoustic properties are from Ladefoged (1993; L), Ladefoged et al. (1988; LMJ) and Ladefoged and Maddieson (1996; LM)

Feature | Acoustic property | Examples
rounding | lowering of all formants (LM 356–358) | French, English (Benguerel & Cowan 1974; Lubker & Gay 1982)
velarization | lowered F2 (LM 361–362) | Arabic (Ghazali 1977; Card 1983)
pharyngealization | lowered F3, raised F1 (LM 307) | Interior Salish (Bessell 1998a, 1998b)
palatalization | raised F2 (LM 364) | Catalan (Recasens 1984, 1987), English (Hawkins & Slater 1994), Japanese (Magen 1984), Marshallese (Choi 1992), Russian (Keating 1988), Bantu (Manuel 1987)
retroflection | lowered F3, F4, clustering of F2, F3, F4 (L 203, LM 28) | Gooniyandi (McGregor 1990), Gujarati (Dave 1977), Hindi (Stevens & Blumstein 1975), Malayalam (Dart 1991), Tiwi (Anderson & Maddieson 1994)
laryngealization | more energy in F1, F2, more jitter (LMJ) | Cayuga (Dougherty 1993)
aspiration | more energy in F0, more noise (LMJ) | Cayuga (Dougherty 1993)
nasalization | spectral zero, nasal anti-resonance (LM 116) | English (Cohn 1990)
jaw lowering | raised F1 | English (Amerman et al. 1970)
rhoticity | lowered F3 (LM 244, 313) | English (Kelly & Local 1986; Kelly 1989; Tunley 1999; West 1999a, 1999b, 2000; Hawkins & Smith 2001)
laterality | lateral formants (LM 193–197), raised/lowered F2/F3 | English (see rhoticity)

These recent findings on the phonetics of liquids thus square well with the prediction that the phonological domain of the dissimilating feature should mirror the temporal extension of the corresponding cues.

Ohala does not consider dissimilation between vowels. Öhman (1966) showed that vowels may co-articulate across intervening consonants. Dissimilation of vowels across syllables is thus consistent with Ohala's broader claims. Interestingly, though, all of the known examples of vowel dissimilation involve vowel height (chapter 21: vowel height). Vowel height dissimilation is certainly consistent with the experimental finding that lowering of the jaw co-articulates (Amerman et al. 1970), but the existence of co-articulation is apparently not a sufficient predictor of dissimilation. Indeed, there is a striking complementarity of phonological patterning between vowel height, on the one hand, and vowel color (roundness


and backness), on the other. To date, no examples of dissimilation involving the labial or front–back dimensions have come to light. Conversely, systems of vowel harmony in which backness or rounding (or both) are active are richly attested in the literature, but the feature [low] is not frequent in vowel harmony (see Krause 1979 for a possible example from Chukchee). Further discussion of this problem may be found in Alderete and Frisch (2007).

There is also still a residue of dissimilatory patterns for which the CHT does not seem to offer an explanation, including NC, long vowel, continuancy (see §4.3 below), and voicing dissimilation. In the next section we will consider an alternative theory of the origins of dissimilation that does not seem to make any prediction about which features participate.

4.2 Similarity avoidance in the lexicon

Several recent studies have examined statistical asymmetries in the lexicon, pointing to a preference for phonetic dissimilarity between neighboring consonants in roots (see also chapter 86: morpheme structure constraints). Berkley (2000) studied English monosyllabic words and found evidence of gradient similarity avoidance effects. Focusing on words of the shape C1VC2, she found that there are significantly fewer such words containing homorganic consonants than would be expected if consonants combined randomly, i.e. had the same probability of occurring as two independent events. Under- or overrepresentation is the ratio of observed to expected frequency (see chapter 90: frequency effects). A pair is underrepresented if the observed-to-expected (or O/E) ratio is less than 1, overrepresented if greater than 1. Words in which C1 and C2 are homorganic – such as mop, lull, and king – are underrepresented in the English lexicon. For CVVC words with a long vowel intervening between C1 and C2, the homorganic similarity avoidance effect is also present but weaker. These results, adapted from Berkley (2000), are shown in Table 60.3; the underrepresented combinations, with O/E ratios below 1, are the homorganic ones, which run along the diagonal.

Table 60.3 Similarity avoidance in English monosyllabic roots (O/E ratios; classes: labial = p b f v m w; coronal obstruent = t d θ ð s z š ž č ǰ; coronal sonorant = n l r j; dorso-guttural = k g h ŋ w)

CVC | labial | coronal obstruent | coronal sonorant | dorso-guttural
labial | 0.52 | | |
coronal obstruent | 1.15 | 0.73 | |
coronal sonorant | 1.33 | 1.03 | 0.59 |
dorso-guttural | 0.85 | 1.09 | 1.21 | 0.72

CVVC | labial | coronal obstruent | coronal sonorant | dorso-guttural
labial | 0.61 | | |
coronal obstruent | 1.11 | 0.75 | |
coronal sonorant | 1.11 | 1.17 | 0.70 |
dorso-guttural | 1.14 | 1.03 | 1.01 | 0.71
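The O/E computation underlying Table 60.3 can be made explicit in a few lines. The sketch below is my own illustration over an invented toy lexicon (Berkley's word list and exact class definitions are not reproduced); expected counts assume that C1 and C2 are chosen independently.

    from collections import Counter

    CLASS = {}
    for seg in "pbfvmw": CLASS[seg] = "labial"
    for seg in "tdsz":   CLASS[seg] = "coronal obstruent"
    for seg in "nlrj":   CLASS[seg] = "coronal sonorant"
    for seg in "kg":     CLASS[seg] = "dorsal"

    # Toy CVC lexicon, represented as (C1, C2) pairs.
    lexicon = [("m", "p"), ("l", "t"), ("k", "t"), ("p", "t"), ("s", "n"),
               ("t", "k"), ("b", "g"), ("r", "d"), ("d", "r"), ("f", "m")]

    observed = Counter((CLASS[c1], CLASS[c2]) for c1, c2 in lexicon)
    first = Counter(CLASS[c1] for c1, _ in lexicon)
    second = Counter(CLASS[c2] for _, c2 in lexicon)
    n = len(lexicon)

    # O/E < 1: the class pair occurs less often than independent
    # combination of C1 and C2 would predict (underrepresentation).
    for (k1, k2), obs in sorted(observed.items()):
        expected = first[k1] * second[k2] / n
        print(f"{k1} + {k2}: O/E = {obs / expected:.2f}")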


Frisch et al. (2004) studied similarity avoidance in the lexicon of Arabic triradical verb roots. They found a very strong effect, tending to the categorical, for adjacent pairs of consonants (C1 and C2, C2 and C3), as shown in Table 60.4. For non-adjacent C1 and C3 the effect was still strong, although somewhat weaker. Under both adjacency and non-adjacency the similarity avoidance effect is far stronger than the one observed by Berkley for English. In Table 60.4, the underrepresented pairs are those that are homorganic in terms of major class (Labial, Coronal obstruent, Dorsal, Guttural, Coronal sonorant). Frisch et al. also found that the avoidance was stronger the more similar the consonants. Within the major class of coronals, for example, an adjacent pair of coronals was significantly more frequent if the two had different values for [continuant]. Frisch et al. argue that when observed co-occurrence deviates from expected co-occurrence, the learner posits a gradient phonological constraint, which they dub the gradient Obligatory Contour Principle, encoding the generalization "roots with repeated homorganic consonants are unusual." Statistical generalizations like these form the basis of metalinguistic judgments of the relative acceptability of novel words ("word-likeness"), influencing which words are actually used, and the phonological forms of novel and borrowed words (cf. Frisch 2004: 346).

Table 60.4 Similarity avoidance in Arabic verb roots (O/E ratios; subclasses: Labial = b f m; Coronal obstruent = t d tˤ dˤ and θ ð s z sˤ zˤ š; Dorsal = k g q; Guttural = ħ ʕ and x ɣ h ʔ; Coronal sonorant = l r n)

Adjacent | b f m | t d tˤ dˤ | θ ð s z sˤ zˤ š | k g q | ħ ʕ | x ɣ h ʔ | l r n
b f m | 0.00 | | | | | |
t d tˤ dˤ | 1.37 | 0.14 | | | | |
θ ð s z sˤ zˤ š | 1.31 | 0.52 | 0.04 | | | |
k g q | 1.15 | 0.80 | 1.16 | 0.02 | | |
ħ ʕ | 1.35 | 1.43 | 1.41 | 0.07 | 0.00 | |
x ɣ h ʔ | 1.17 | 1.25 | 1.26 | 1.04 | 0.07 | 0.06 |
l r n | 1.18 | 1.23 | 1.21 | 1.48 | 1.39 | 1.26 | 0.06

Non-adjacent | b f m | t d tˤ dˤ | θ ð s z sˤ zˤ š | k g q | ħ ʕ | x ɣ h ʔ | l r n
b f m | 0.30 | | | | | |
t d tˤ dˤ | 1.08 | 0.38 | | | | |
θ ð s z sˤ zˤ š | 1.02 | 1.06 | 0.24 | | | |
k g q | 1.26 | 1.24 | 1.16 | 0.07 | | |
ħ ʕ | 1.25 | 1.05 | 1.35 | 0.68 | 0.25 | |
x ɣ h ʔ | 1.28 | 1.02 | 1.14 | 1.19 | 0.12 | 0.34 |
l r n | 1.11 | 0.97 | 1.23 | 1.03 | 1.10 | 1.13 | 0.67


For Frisch et al., the gradient OCP represents a statistical generalization over a static lexicon; it does not encode tacit phonetic knowledge directly. Despite this, Frisch et al. do propose a functional explanation for the distributional asymmetries in the lexicon. Repetition of similar consonants is difficult to process (Frisch 2004). This finds the beginnings of an explanation in neural network models that encode the linearization of segments. Nodes in the network must be excited and inhibited so as to fire in the right sequence. If there is a sequence of similar segments, the periods of excitation and inhibition may overlap, whether or not there is a corresponding overlap in the acoustic signal. Given two segments, C1 and C2, in linear sequence that activate the same distinctive feature node, if the node encoding C1 is still firing when C2 is perceived, this may result in simultaneous perception of C1 and C2. The resulting blend of the two percepts may result in the same kind of ambiguity that results from co-articulation in the CHT, and is presumably consistent with the same re-analytic strategies. An alternative source of dissimilation in processing may be a refractory period during which the node must be reset in order to detect a second stimulus of the same type. Unlike the CHT and the blending scenario we have just sketched, the refractory period would seem to predict asymmetries in the direction of the resulting pattern. If C2 occurs within the refractory period of a node that has just fired for C1, the relevant feature will be perceived on C1 but may not be perceived on C2. The effective result is progressive dissimilation. This processing bias may further help explain the pronounced difference in the strength of the effect in English and Arabic. Arabic has non-concatenative morphology and psychologically real abstract consonant roots like /ktb/ 'write'. In phonological analysis, this has motivated analyses in which consonants and vowels occupy separate tiers in the representation. Roots are vulnerable to speech errors involving the misordering of radical consonants.

These functional mechanisms and their implications for linguistic patterning provide much fertile ground for further research. First, a greater range of languages must be studied to determine to what extent root co-occurrence constraints have a gradient character or not. Suzuki (1998) presents 16 examples of root co-occurrence restriction, which are summarized in Table 60.5, along with a couple of additional cases. A second challenge concerns the difference between similarity and identity. Arabic shows avoidance of identical radicals. In English, however, segmental identity provides an escape hatch to the OCP. CVC roots where both C1 and C2 are labial are generally dispreferred, but not if C1 and C2 are identical. For example, selecting /p/ for C1 and C2 and permuting the possible nuclei, almost every cell of the paradigm corresponds to an actually occurring word of English: pip, pep, pap, pop, pup, peep, poop, parp, pipe, Pape, pope. See Idsardi and Raimy (2008) for a relevant proposal that segmental identity may be represented in a data structure they call a "linked list." Finally, the similarity avoidance approach raises anew the question of which features are expected to participate in dissimilation. Are certain features associated with longer periods of excitation in perception than others? And if that turns out to be so, is there a systematic correlation between the length of a feature's temporal domain in the speech signal and the duration of excitation? To date, place and laryngeal features (MacEachern 1997) have been studied. These studies must therefore be extended to short-domain features, such as stops, fricatives, and voicing.

Table 60.5 Root co-occurrence restrictions

Feature | Case
place | Amharic (Bender & Fulass 1978), Arabic (Greenberg 1950; McCarthy 1979, 1981; Mester 1986; Yip 1989; Padgett 1991; Pierrehumbert 1993; Frisch et al. 2004), Cambodian (Yip 1989), French (Plénat 1996), Hawai'ian (McKay 1970), Hebrew (Koskinen 1964), Javanese (Uhlenbeck 1949; Mester 1986; Padgett 1991), Russian (Padgett 1991), Serbian (McKay 1970), Yucatec Mayan (Yip 1989; Lombardi 1991)
labial | Cantonese (Yip 1982, 1988), Ponapean (Rehg & Sohl 1981; Goodman 1995), Yao (Purnell 1965; Ohala 1981), Zulu (Doke 1926)
coronal | Akan (Welmers 1946; McCarthy & Prince 1995)
pharyngeal | Moses-Columbia Salish (Czaykowska-Higgins 1993)
liquid | Javanese (Uhlenbeck 1949; Mester 1986)
rhotic | American English (Hall 2009)
voice | Japanese (Itô & Mester 1986, 2003; Steriade 1987, 1995; Ishihara 1991; Archangeli & Pulleyblank 1994; Alderete 1997; Pater 1999), Bakairi (Gussenhoven & Jacobs 2005)
spread glottis | Sanskrit (Grassmann 1863; Langendoen 1966; Anderson 1970; Phelps & Brame 1973; Sag 1974, 1976; Phelps 1975; Schindler 1976; Borowsky & Mester 1983; Kaye & Lowenstamm 1985; Lombardi 1991)
high | Ngbaka (Thomas 1963; Chomsky & Halle 1968; Mester 1986)
back | Ainu (Itô 1984; Mester 1986; Archangeli & Pulleyblank 1994), Tzeltzal (Slocum 1948; Itô 1984)
length | Japanese

4.3 Dissimilation and cue robustness

Manner dissimilation is predicted not to occur by the CHT. Despite this, a small number of languages display dissimilation of pairs of adjacent stops or fricatives. For example, Osage (Quintero 2004), a Siouan language spoken in Oklahoma, has a rule dissimilating /ð/ to [t] following /s/, e.g. /škǫšða/ → [škǫšta] 'you want'. Tsou (Szakos 1994), an Austronesian language of Taiwan, has a rule that hardens /h/ to [k] following /s/, giving alternations such as [s-in-uhnu] 'send someone to do something (actor voice)' ~ [skuna] (patient voice), [s-m-ohpici] 'pinch (actor voice)' ~ [skopica] (patient voice).

(actor voice)’ ~ [skopica] (patient voice). Non-sibilant fricatives such as [. x] have diffuse spectra. Harris (1958) shows that the F2 transition is required for reliable identification of the fricative. In a cluster of fricatives, one of the transitions, C–V or V–C, is missing. Dissimilating one of the fricatives to the corresponding stop has the effect of sharpening the F2 transition and adding a stop release burst, rendering the place of articulation more easily identifiable.3 In Chontal (Waterhouse 1949, 1962; Kenstowicz and Kisseberth 1979), a Hokan language of Mexico, the imperative suffix is {-la?} after voiceless segments, and {-xa?} after voiced ones, e.g. [fuœ-l j a?] ‘blow it!’, [panx-la?] ‘sit down!’, [ko-xa?] ‘say it!’, [kan-xa?] ‘leave it!’. The pattern seems to involve deleting the second [spread glottis] feature in a 3

One of the apparent cases of continuant assimilation mentioned by Ohala as a potential counterexample to the CHT may in fact turn out to be best understood as an instance of it. Dyen (1972) shows that Proto-Austronesian */s . . . s . . ./ was dissimilated across an intervening vowel in Ngaju-Dayak to /t . . . s . . ./. The evidence, however, only consists of the two words PA *sisik > ND [tisik] ‘fish-scale’ and PA *susu > ND [tuso] ‘breast’. It is perhaps relevant that the vowel immediately following the initial *s is a high vowel. High vowels are known to increase the degree of post-aspiration of a preceding voiceless stop and affrication of a preceding /t/. The initial sibilant may thus have been interpreted as the co-articulatory affrication of an intended /t/.


cluster of voiceless fricatives – assuming the claim of Vaux (1998) that voiceless fricatives are universally [spread glottis] – allowing the lateral to be more clearly identified as such. Dissimilation between two stops is far rarer, but González (2008) supplies an example from North-Central Spanish, which shows dissimilation of coda /k/ to [θ] (and other realizations) before another stop (generally /t/), e.g. [doθˈtor] 'doctor'. González also proposes an explanation in terms of cue robustness, noting that the cues for the first stop are not as salient before another stop as before other segments, due to a weaker, or absent, stop release burst and formant transitions. Similar considerations have been argued to condition metathesis in other languages, e.g. Faroese, where final /skt/ metathesizes to [kst] (Hume and Seo 2004; see chapter 59: metathesis).

4.4 The dissimilation game

In a different vein, people also apply tacit knowledge of similarity to a variety of ludic and poetic ends. Indeed, the term "dissimilation" entered the field in the 19th century from rhetoric, where it had been in use to describe the variation in style required for good public speaking (cf. Brugmann 1909). The criterion of a perfect rhyme in English, such as pet – bet, is not only that the material following the onset of each stressed syllable is identical, but that the onset of each stressed syllable is different. In considering rhyme in English, we do not appear to count features; we are merely interested in contrastive segments. The pair pet – bet is thus as good a rhyme as the pair pet – set. The same requirement of non-identity turns up in echo reduplication (Alderete et al. 1999; Nevins 2005; chapter 100: reduplication), where the base is reduplicated with an onset determined by convention (fixed segmentism). In Hindi, this kind of reduplication gives a meaning 'X and the like' (Singh 1969; Nevins 2005). The fixed segment is /v/ unless the base also begins with a /v/, in which case the echo reduplicant begins with /ʃ/. Examples from Nevins (2005: 280) are shown in (19).4

(19) Hindi echo reduplication with fixed segmentism
   paanii-vaani            'water and the like'
   aam-vaam                'mangoes and the like'
   tras-vras               'grief and the like'
   yaar-vaar               'friends and the like'
   vakil-ʃakil, *vakil-vakil  'lawyers and the like'

Similar facts are observed in English shm-reduplication, e.g. potato–shmotato, but shmaltz–shpaltz (Nevins and Vaux 2003), Kannada (Lidz 2001), and Javanese (Yip 1995). Yip (1995, 1998) proposes that these are due to a constraint against the repetition of identical elements, *Repeat, ultimately due to Menn and MacWhinney (1984). Similar facts also turn up in secret languages. In the Kunshan secret language Mo-pa (Yip 1982: 652ff.), a base of the shape C1V1(C2) is mapped to a template C1[o]-GV1C2, where G is a consonant whose value for the feature [continuant] is the opposite of that of C1.

4 The forms in (19) represent Nevins's own fieldwork. The original source on Hindi echo words is Singh (1969).


Examples, not glossed in the source, are given in (20).

(20) Kunshan secret language Mo-pa
   təw  → to ləw       vã   → vo pã
   k'e  → k'o ɣe       sja  → so tsja
   d'oŋ → d'o loŋ      nəw  → no təw
   tsa  → tso za       ɲən  → ɲə tuən

Oral stops are replaced by voiced continuants, while nasals and continuants are replaced by the corresponding voiceless unaspirated stop. This continuant dissimilation is covered neither by the CHT nor by cue robustness; the reason for the alternation seems to be purely ludic.

5 Conclusions

There are a number of theories of the origin of dissimilation, and dissimilation may apparently have one of several motivations. According to the Co-articulation-Hypercorrection Theory (Ohala 1981, 1993, 2003), dissimilation results when the listener reverses a perceived co-articulation. The central prediction of the CHT is that dissimilation should only occur with features that have temporally extended cues. Other theories assume a functional motivation. Frisch et al. (2004) argue that similarity avoidance effects are due to the difficulties associated with processing the sequencing of similar segments. This bias is reflected in the statistical structure of the lexicon in many languages. However, the predictions of processing-based accounts with respect to the observed featural asymmetries are not yet clear. It was suggested here that manner dissimilation in pairs of adjacent fricatives or stops is best understood as maximizing cues for place of articulation, while dissimilatory phenomena in language games fulfill an aesthetic role. Future research will hopefully extend the empirical base for the study of dissimilation phenomena, and determine more precisely what the division of labor and synergies between the factors discussed here should be.

ACKNOWLEDGMENTS I would like to thank Beth Hume, Marc van Oostendorp, and two anonymous reviewers for helpful feedback on this chapter.

REFERENCES Alderete, John. 1995. Faithfulness to prosodic heads. Unpublished ms., University of Massachusetts, Amherst (ROA-94).


Alderete, John. 1997. Dissimilation as local conjunction. Papers from the Annual Meeting of the North East Linguistic Society 27. 17–32. Alderete, John & Stefan A. Frisch. 2007. Dissimilation in grammar and the lexicon. In de Lacy (2007), 379–398. Alderete, John, Jill Beckman, Laura Benua, Amalia Gnanadesikan, John J. McCarthy & Suzanne Urbanczyk. 1999. Reduplication with fixed segmentism. Linguistic Inquiry 30. 327–364. Amerman, James, Raymond Daniloff & Kenneth Moll. 1970. Lip and jaw coarticulation for the phoneme. Journal of Speech and Hearing 13. 147–161. Anderson, Stephen R. 1970. On Grassmann’s Law in Sanskrit. Linguistic Inquiry 1. 387–396. Anderson, Victoria B. & Ian Maddieson. 1994. Acoustic characteristics of Tiwi coronal stops. UCLA Working Papers in Phonetics 87. 131–162. Archangeli, Diana & Douglas Pulleyblank. 1994. Grounded phonology. Cambridge, MA: MIT Press. Aronoff, Mark & Richard T. Oehrle (eds.) 1984. Language sound structure. Cambridge, MA: MIT Press. Beckman, Jill N., Laura Walsh Dickey & Suzanne Urbanczyk (eds.) 1995. Papers in Optimality Theory. Amherst: GLSA. Bender, Byron W. 1968. Marshallese phonology. Oceanic Linguistics 7. 16 –35. Bender, Byron W. 1969. Vowel dissimilation in Marshallese. University of Hawai’i Working Papers in Linguistics 1. 88–95. Bender, M. Lionel & Hailu Fulass. 1978. Amharic verb morphology: A generative approach. East Lansing: Michigan State University. Benguerel, A.-P. & H. Cowan. 1974. Coarticulation of upper lip protrusion in French. Phonetica 30. 41– 55. Bennett, Patrick R. 1967. Dahl’s Law and Thagice. African Language Studies 8. 127–159. Berkley, Deborah M. 2000. Gradient Obligatory Contour Principle effects. Ph.D. dissertation, Northwestern University. Bessell, Nicola J. 1998a. Local and non-local consonant-vowel interaction in Interior Salish. Phonology 15. 1– 40. Bessell, Nicola J. 1998b. Phonetic aspects of retraction in Interior Salish. In Ewa Czaykowska-Higgins & M. Dale Kinkade (eds.) Salish languages and linguistics: Theoretical and descriptive perspectives, 125–152. Berlin & New York: Mouton de Gruyter. Blevins, Juliette. 2004. Evolutionary phonology: The emergence of sound patterns. Cambridge: Cambridge University Press. Blevins, Juliette & Andrew Garrett. 2004. The evolution of metathesis. In Hayes et al. (2004), 117–156. Borowsky, Toni & Armin Mester. 1983. Aspiration to roots: Remarks on the Sanskrit diaspirates. In Papers from the Annual Regional Meeting, Chicago Linguistic Society 19. 52–63. Boukous, A. 1987. Phonotactiques et domaines prosodiques en berbère (parler tachelhit d’Agadi, Maroc). Thèse d’état, Université de Paris VIII. Brugmann, Karl. 1909. Das Wesen der lautlichen Dissimilation. Abhandlungen der Königlich Sächsischen Gesellschaft der Wissenschaften 57. 139 –178. Bye, Patrik. Forthcoming. Derivations. In Nancy C. Kula, Bert Botma & Kuniya Nasukawa (eds.) The Continuum handbook of phonology. London: Continuum. Bye, Patrik, Elin Sagulin & Ida Toivonen. 2009. Phonetic duration, phonological quantity and prosodic structure in Inari Saami. Phonetica 66. 199–221. Card, Elizabeth. 1983. A phonetic and phonological study of Arabic emphasis. Ph.D. dissertation, Cornell University. Choi, John D. 1992. Phonetic underspecification and target interpolation: An acoustic study of Marshallese vowel allophony. Ph.D. dissertation, University of California, Los Angeles. Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.


Clements, G. N. 1991. Place of articulation in consonants and vowels: A unified theory. Working Papers of the Cornell Phonetics Laboratory 5. 77–123. Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In Goldsmith (1995), 245 –306. Cohn, Abigail C. 1990. Phonetic and phonological rules of nasalization. Ph.D. dissertation, University of California, Los Angeles. Cohn, Abigail C. 1992. The consequences of dissimilation in Sundanese. Phonology 9. 199–220. Crosswhite, Katherine. 1999. Vowel reduction in Optimality Theory. Ph.D. dissertation, University of California, Los Angeles. Czaykowska-Higgins, Ewa. 1993. Cyclicity and stress in Moses-Columbia Salish (Nxa’amxcin). Natural Language and Linguistic Theory 11. 197–278. Daly, John P. 1993. The role of tone sandhi in tone analysis. Notes on Linguistics 61. 5 –24. Dart, Sarah N. 1991. Articulatory and acoustic properties of apical and laminal articulation. Ph.D. dissertation, University of California, Los Angeles. Dave, Radhekant. 1977. Retroflex and dental consonants in Gujarati: A palatographic and acoustic study. Annual Report of the Institute of Phonetics, University of Copenhagen 11. 27–156. Davy, J. I. M. & Derek Nurse. 1982. The synchronic forms of Dahl’s Law, a dissimilation process, in Central and Lacustrine Bantu languages of Kenya. Journal of African Languages and Linguistics 4. 159–195. de Lacy, Paul (ed.) 2007. The Cambridge handbook of phonology. Cambridge: Cambridge University Press. Dixon, R. M. W. 1977. A grammar of YidiJ. Cambridge: Cambridge University Press. Doke, Clement M. 1926. The phonetics of the Zulu language. Johannesburg: University of the Witwatersrand Press. Dougherty, Brian. 1993. The acoustic-phonetic correlates of Cayuga word stress. Ph.D. dissertation, Harvard University. Dyen, Isidore. 1972. Non-gradual regular phonetic changes involving sibilants. In Jacques Barrau, Lucien Bernot, George Condominas, Mariel Jean Brunhes Delamare, Francis Leroy, Alexic Rygaloff & Jacqueline M. C. Thomas (eds.) Langues et techniques natures et société, vol. 1: Approche linguistique, 95 –99. Paris: Klincksieck. Ebert, Karen. 1974. Partial vowel harmony in Kera. Studies in African Linguistics. Supplement 5. 75–80. Elmedlaoui, Mohammed. 1985. Le parler berbère chleuh d’Imdlawn: Segments et syllabation. Doctorat de troisième cycle, Université de Paris VIII, Saint-Denis. Elmedlaoui, Mohammed. 1995. Aspect des representations phonologiques dans certaines langues chamitosemitiques. Rabat: Faculté des lettres et des sciences humaines. Evans, Nicholas D. 1995. Current issues in the phonology of Australian languages. In Goldsmith (1995), 723 –761. Fallon, Paul D. 1993. Liquid dissimilation in Georgian. Proceedings of the 10th Eastern States Conference on Linguistics (ESCOL). 105 –116. Finer, Daniel. 1986. Reduplication and verbal morphology in Palauan. The Linguistic Review 6. 99 –130. Foley, William A. 1991. The Yimas language of New Guinea. Stanford: Stanford University Press. Frisch, Stefan A. 2004. Language processing and segmental OCP effects. In Hayes et al. (2004), 346–371. Frisch, Stefan A., Janet B. Pierrehumbert & Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22. 179 –228. Geytenbeek, Brian B. & Helen Geytenbeek. 1971. Gidabal grammar and dictionary. Canberra: Australian Institute of Aboriginal Studies. Ghazali, Salem. 1977. Back consonants and backing coarticulation in Arabic. Ph.D. dissertation, University of Texas, Austin.


Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT. Goldsmith, John A. 1984. Meeussen’s Rule. In Aronoff & Oehrle (1984), 245–259. Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell. González, Carolina. 2008. Assimilation and dissimilation of syllable-final /k/ in North-Central Spanish. In Joyce Bruhn de Garavito & Elena Valenzuela (eds.) Selected Proceedings of the 10th Hispanic Linguistics Symposium, 170 –183. Somerville, MA: Cascadilla Press. Goodman, Beverley. 1995. Features in Ponapean phonology. Ph.D. dissertation, Cornell University. Gragg, Gene. 1976. Oromo of Wellegga. In M. Lionel Bender (ed.) The non-Semitic languages of Ethiopia, 166–195. East Lansing: Michigan State University Press. Grammont, Maurice. 1895. La dissimilation consonantique dans les langues indo-européennes et dans les langues romanes. Dijon: Darantière. Grassmann, Hermann. 1863. Ueber die Aspiraten und ihr gleichzeitiges Vorhandensein im An- und Auslaute der Wurzeln. Zeitschrift für Vergleichende Sprachforschung auf dem Gebiete des Deutschen, Griechischen und Lateinischen 12. 81–138. Greenberg, Joseph. 1950. The patterning of root morphemes in Semitic. Word 6. 162–181. Gussenhoven, Carlos & Haike Jacobs. 2005. Understanding phonology. London: Hodder Arnold. Hall, Nancy. 2009. R-dissimilation in American English. Unpublished ms., California State University, Long Beach. Harris, K. S. 1958. Cues for the discrimination of American English fricatives in spoken syllables. Language and Speech 1. 1–7. Hawkins, Sarah & Andrew Slater. 1994. Spread of CV and V-to-V coarticulation in British English: Implications for the intelligibility of synthetic speech. In Proceedings of the 3rd International Conference on Spoken Language Processing, 57–60. Yokohama: Acoustical Society of Japan. Hawkins, Sarah & R. Smith. 2001. An acoustical study of long-domain /r/ and /l/ coarticulation. In Sebastian Heid & Sarah Hawkins (eds.) Proceedings of the 5th Seminar on Speech Production: Models and Data, 77–80. Bavaria: Kloster Seeon. Hayes, Bruce, Robert Kirchner & Donca Steriade (eds.) 2004. Phonetically based phonology. Cambridge: Cambridge University Press. Holton, David. 1995. Assimilation and dissimilation of Sundanese liquids. In Beckman et al. (1995), 167–180. Hume, Elizabeth. 1992. Front vowels, coronal consonants and their interaction in nonlinear phonology. Ph.D. dissertation, Cornell University. Hume, Elizabeth & David Odden. 1996. Reconsidering [consonantal]. Phonology 13. 345–376. Hume, Elizabeth & Misun Seo. 2004. Metathesis in Faroese and Lithuanian: From speech perception to Optimality Theory. Nordic Journal of Linguistics 27. 35 –60. Idsardi, William J. & Eric Raimy. 2008. Reduplicative economy. In Bert Vaux & Andrew Nevins (eds.) Rules, constraints, and phonological phenomena, 149–184. Oxford: Oxford University Press. Ishihara, Masahide. 1991. A lexical prosodic phonology of Japanese verbs. Ph.D. dissertation, University of Arizona. Itkonen, Erkki. 1986 –91. Inarilappisches Wörterbuch. 4 vols. Helsinki: Suomalais-ugrilainen Seura. Itô, Junko. 1984. Melodic dissimilation in Ainu. Linguistic Inquiry 15. 505 –513. Itô, Junko & Armin Mester. 1986. The phonology of voicing in Japanese: Theoretical consequences for morphological accessibility. Linguistic Inquiry 17. 49 –73. Itô, Junko & Armin Mester. 2003. Japanese morphophonemics: Markedness and word structure. Cambridge, MA: MIT Press. Jebbour, Abdelkrim. 1985. 
La labio-vélarisation en berbère, dialecte tachelhit (parler de Tiznit). Mémoire de phonologie. Rabat: Université Mohammed V, Faculté des lettres et des sciences humaines.


Johnson, Lawrence. 1973. Dissimilation as a natural process in phonology. Stanford Occasional Papers in Linguistics 3. 45 –56. Joseph, Brian D. & Richard Janda (eds.) 2003. The handbook of historical linguistics. Malden, MA & Oxford: Blackwell. Josephs, Lewis S. 1975. Palauan reference grammar. Honolulu: University of Hawai’i Press. Josephs, Lewis S. 1990. New Palauan–English dictionary. Honolulu: University of Hawai’i Press. Kaisse, Ellen M. 1988. Modern Greek continuant dissimilation and the OCP. Unpublished ms., University of Washington. Kaye, Jonathan & Jean Lowenstamm. 1985. A metrical treatment of Grassmann’s Law. Papers from the Annual Meeting of the North East Linguistic Society 15. 220 –233. Kaye, Jonathan, Jean Lowenstamm & Jean-Roger Vergnaud. 1985. The internal structure of phonological elements: A theory of charm and government. Phonology Yearbook 2. 305–328. Keating, Patricia. 1988. Underspecification in phonetics. Phonology 5. 275 –292. Kelly, John. 1989. On the phonological relevance of some non-phonological elements. In Tamás Szende (ed.) Proceedings of the Speech Research ’89 International Conference, 56–59. Budapest: Linguistic Institute of the Hungarian Academy of Sciences. Kelly, John & John Local. 1986. Long-domain resonance patterns in English. In Proceedings of the International Conference on Speech Input/Output: Techniques and Applications, 304–309. London: Institution of Electrical Engineers. Kenstowicz, Michael & Charles W. Kisseberth. 1977. Topics in phonological theory. New York: Academic Press. Kenstowicz, Michael & Charles W. Kisseberth. 1979. Generative phonology: Description and theory. New York: Academic Press. Kent, Ronald G. 1936. Assimilation and dissimilation. Language 12. 245 –258. Kent, Ronald G. 1945. The sounds of Latin: A descriptive and historical phonology. Baltimore: Waverley Press. Keyser, Samuel J. & Paul Kiparsky. 1984. Syllable structure in Finnish phonology. In Aronoff & Oehrle (1984), 7–31. Kiparsky, Paul. 2003. The phonological basis of sound change. In Joseph & Janda (2003), 313–342. Kisseberth, Charles W. 1970. On the functional utility of phonological rules. Linguistic Inquiry 1. 291–306. Koskinen, K. 1964. Kompatibilität in den dreikonsonantigen hebräischen Wurzeln. Zeitschrift der Deutschen Morgenländischen Gesellschaft 144. 16 –58. Krause, Scott. 1979. Topics in Chukchee phonology and morphology. Ph.D. dissertation, University of Illinois, Urbana-Champaign. Ladefoged, Peter. 1993. A course in phonetics. 3rd edn. New York: Harcourt Brace Jovanovich. Ladefoged, Peter & Ian Maddieson. 1996. The sounds of the world’s languages. Oxford & Malden, MA: Blackwell. Ladefoged, Peter, Ian Maddieson & Michel T. T. Jackson. 1988. Investigating phonation types in different languages. In Osamu Fujimura (ed.) Vocal physiology: Voice production, mechanisms and functions, 292–317. New York: Raven. Langendoen, D. Terence. 1966. A restriction on Grassmann’s Law in Greek. Language 42. 7– 9. Larsi, Ahmed. 1991. Aspects de la phonologie non-lindaire du parler berbère chleuh de Tidli. Thèse de doctorat, Université de la Sorbonne Nouvelle, Paris III. Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT. Leumann, Manu. 1977. Lateinische Grammatik von Leumann-Hofmann-Szantyr, vol. 1: Lateinische Laut- und Pormenlehre. Munich: Beck. Levergood, Barbara. 1987. Topics in Arusa phonology and morphology. Ph.D. dissertation, University of Texas, Austin.

23

Patrik Bye

Levi, Susannah V. 2008. Phonemic vs. derived glides. Lingua 118. 1956–1978. Lidz, Jeffrey. 2001. Echo reduplication in Kannada and the theory of word-formation. The Linguistic Review 18. 375 –394. Lloret, Maria-Rosa. 1988. Gemination and vowel length in Oromo morphophonology. Ph.D. dissertation, Indiana University. Lombardi, Linda. 1991. Laryngeal features and laryngeal neutralization. Ph.D. dissertation, University of Massachusetts, Amherst. Lombardi, Linda. 1995. Dahl’s law and privative [voice]. Linguistic Inquiry 26. 365 –372. Lubker, J. & T. Gay. 1982. Anticipatory labial coarticulation: Experimental, biological and linguistic variables. Journal of the Acoustical Society of America 71. 437 –448. Lynch, John. 2003. Low vowel dissimilation in Vanuatu languages. Oceanic Linguistics 42. 359–406. MacEachern, Margaret R. 1997. Laryngeal co-occurrence restrictions. Ph.D. dissertation, University of California, Los Angeles. Published 1999, New York: Garland. Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University Press. Magen, Harriet S. 1984. Vowel-to-vowel coarticulation in English and Japanese. Journal of the Acoustical Society of America 75. Supplement S41. Manuel, Sharon Y. 1987. Acoustic and perceptual consequences of vowel-to-vowel coarticulation in three Bantu languages. Ph.D. dissertation, Yale University. Marlett, Stephen A. & Joseph Paul Stemberger. 1983. Empty consonants in Seri. Linguistic Inquiry 14. 617–639. McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT. McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12. 373 –418. McCarthy, John J. 1986. OCP effects: Gemination and antigemination. Linguistic Inquiry 17. 207–263. McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108. McCarthy, John J. 2007. Derivations and levels of representation. In de Lacy (2007), 99–117. McCarthy, John J. & Alan Prince. 1993. Prosodic morphology I: Constraint interaction and satisfaction. Unpublished ms., University of Massachusetts, Amherst & Rutgers University. McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Beckman et al. (1995), 249–384. McConvell, Patrick. 1988. Nasal cluster dissimilation and constraints on phonological variables in Gurindji and related languages. In Nicholas D. Evans & Steve Johnson (eds.) Aboriginal Linguistics 1, 135–165. Armidale: Department of Linguistics, University of New England. McGregor, William. 1990. A functional grammar of Gooniyandi. Amsterdam: John Benjamins. McKay, D. G. 1970. Phoneme repetition in the structure of languages. Language and Speech 13. 199 –213. Menn, Lise & Brian MacWhinney. 1984. The repeated morph constraint: Toward an explanation. Language 60. 519 –541. Mester, Armin. 1986. Studies in tier structure. Ph.D. dissertation, University of Massachusetts, Amherst. Morén, Bruce. 2003. The parallel structures model of feature geometry. Working Papers of the Cornell Phonetics Laboratory 15. 194 –270. Nevins, Andrew. 2005. Overwriting does not optimize in nonconcatenative morphology. Linguistic Inquiry 36. 275 –287. Nevins, Andrew & Bert Vaux. 2003. Metalinguistic, shmetalinguistic: The phonology and morphology of shm- reduplication. Papers from the Annual Regional Meeting, Chicago Linguistic Society 39. 702–722.

Dissimilation

24

Newton, Brian. 1971. Sibilant loss in Northern Greek. Canadian Journal of Linguistics 17. 1–15. Odden, David. 1987. Dissimilation as deletion in Chuckchi. Proceedings of the 3rd Eastern States Conference on Linguistics (ESCOL). 235 –246. Odden, David. 1988. Anti antigemination and the OCP. Linguistic Inquiry 19. 451–475. Odden, David. 1994. Adjacency parameters in phonology. Language 70. 289 –330. Ohala, John J. 1981. The listener as a source of sound change. Papers from the Annual Regional Meeting, Chicago Linguistic Society 17. 178 –203. Ohala, John J. 1993. The phonetics of sound change. In Charles Jones (ed.) Historical linguistics: Problems and perspectives, 237–278. London: Longman. Ohala, John J. 2003. Phonetics and historical phonology. In Joseph & Janda (2003), 669–686. Öhman, Sven E. G. 1966. Coarticulation in VCV utterances: Spectrographic measurements. Journal of the Acoustical Society of America 39. 151–168. Padgett, Jaye. 1991. Stricture in feature geometry. Ph.D. dissertation, University of Massachusetts, Amherst. Paradis, Carole & Jean-François Prunet. 1989. On coronal transparency. Phonology 6. 317–348. Parker, Steve 1997. An OT account of laryngealization in Cuzco Quechua. Work Papers of the Summer Institute of Linguistics, University of North Dakota 41. 47–57. Pater, Joe. 1999. Austronesian nasal substitution and other NY effects. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 310–343. Cambridge: Cambridge University Press. Pearce, Mary. 2008. Vowel harmony domains and vowel undershoot. UCL Working Papers in Linguistics 20. 115 –140. Phelps, Elaine. 1975. Sanskrit diaspirates. Linguistic Inquiry 6. 447 –464. Phelps, Elaine & Michael Brame. 1973. On local ordering of rules in Sanskrit. Linguistic Inquiry 4. 387 –400. Pierrehumbert, Janet B. 1993. Dissimilarity in the Arabic verbal roots. Papers from the Annual Meeting of the North East Linguistic Society 23. 367–381. Pitkin, Harvey. 1984. Wintu grammar. Berkeley: University of California Press. Plénat, Marc. 1996. De l’interaction des contraints: Une étude de case. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, 585–617. Salford: ESRI. Poser, William J. 1982. Phonological representations and action-at-a-distance. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 121–158. Dordrecht: Foris. Posner, Rebecca R. 1961. Consonantal dissimilation in the Romance languages. Publications of the Philological Society 18. Oxford: Blackwell. Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell. Purnell, Herbert C. 1965. Phonology of a Yai dialect spoken in the province of Chiengrai, Thailand. Hartford, CT: Hartford Seminary Foundation. Quintero, Carolyn. 2004. Osage grammar. Lincoln, NE: University of Nebraska Press. Recasens, Daniel. 1984. Vowel-to-vowel coarticulation in Catalan VCV sequences. Journal of the Acoustical Society of America 76. 1624–1635. Recasens, Daniel. 1987. An acoustic analysis of V-to-C and V-to-V coarticulatory effects in Catalan and Spanish VCV sequences. Journal of Phonetics 15. 299 –312. Rehg, Kenneth L. & Damian G. Sohl. 1981. Ponapean reference grammar. Honolulu: University of Hawai’i Press. Rubach, Jerzy. 1993. The Lexical Phonology of Slovak. Oxford: Clarendon Press. Sag, Ivan. 1974. 
The Grassmann’s Law ordering pseudoparadox. Linguistic Inquiry 5. 591– 607.

25

Patrik Bye

Sag, Ivan. 1976. Pseudosolutions to the pseudoparadox: Sanskrit diaspirates revisited. Linguistic Inquiry 7. 609 –622. Schindler, Jochem. 1976. Diachronic and synchronic remarks on Bartholomae’s and Grassmann’s Laws. Linguistic Inquiry 7. 622–637. Shaw, Patricia A. 1976. Theoretical issues in Dakota phonology and morphology. Ph.D. dissertation, University of Toronto. Shaw, Patricia A. 1985. Modularisation and substantive constraints in Dakota Lexical Phonology. Phonology Yearbook 2. 173 –202. Shibatani, Masayoshi. 1990. The languages of Japan. Cambridge: Cambridge University Press. Sihler, Andrew L. 1995. New comparative grammar of Greek and Latin. Oxford: Oxford University Press. Singh, Amar Bahadur. 1969. On echo words in Hindi. Indian Linguistics 30. 185 –195. Slocum, Marianna C. 1948. Tzeltzal (Mayan) noun and verb morphology. International Journal of American Linguistics 14. 77–86. Smolensky, Paul. 1995. On the structure of the constraint component Con of UG (ROA-86). Smolensky, Paul. 1997. Constraint Conjunction II. Handout from paper presented at the Hopkins Optimality Theory Workshop/University of Maryland Mayfest 1997. Smyth, Herbert Weir. 1920. Greek grammar. Cambridge, MA: Harvard University Press. Revised 1956 by Gordon M. Messing. Soden, Wolfram von. 1969. Grundriß der akkadischen Grammatik. Rome: Pontificum Institutum Biblicum. Sohn, Ho-min. 1975. Woleaian reference grammar. Honolulu: University of Hawai’i Press. Sohn, Ho-min & Anthony P. Tawerilmang. 1976. Woleaian–English dictionary. Honolulu: University of Hawai’i Press. Steriade, Donca. 1987. Redundant values. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23(2). 339–362. Steriade, Donca. 1995. Underspecification and markedness. In Goldsmith (1995), 114–174. Stevens, Kenneth N. & Sheila Blumstein. 1975. Quantal aspects of consonant production and perception: A study of retroflex consonants. Journal of Phonetics 3. 215 –233. Suzuki, Keiichiro. 1998. A typological investigation of dissimilation. Ph.D. dissertation, University of Arizona. Szakos, József. 1994. Die Sprache der Cou: Untersuchungen zur Synchronie einer Austronesischen Sprache auf Taiwan. Ph.D. dissertation, University of Bonn. Thomas, Jacqueline. 1963. Le parler ngbaka de Bokanga. The Hague: Mouton. Trefry, David. 1969. A comparative study of Kuman and Pawaian: Non-Austronesian languages of New Guinea. (Pacific Linguistics B13.) Canberra: Australian National University. Tunley, Alison. 1999. Coarticulatory influences of liquids on vowels in English. Ph.D. dissertation, University of Cambridge. Uhlenbeck, Eugenius M. 1949. De structuur van het javaanse morpheem. Bandoeng: Nix. Vaux, Bert. 1998. The laryngeal specifications of fricatives. Linguistic Inquiry 29. 497–511. Walsh Dickey, Laura. 1997. The phonology of liquids. Ph.D. dissertation, University of Massachusetts, Amherst. Waterhouse, Viola. 1949. Learning a second language first. International Journal of American Linguistics 15. 106 –109. Waterhouse, Viola. 1962. The grammatical structure of Oaxaca Chontal. Bloomington: Indiana University Research Center in Anthropology. Welmers, William E. 1946. A descriptive grammar of Fanti. Baltimore: Linguistic Society of America. West, Paula. 1999a. The extent of coarticulation in English liquids: An acoustic and articulatory study. In John J. Ohala, Yoko Hasegawa, Manjari Ohala, Daniel Granville & Ashlee Bailey (eds.) Proceedings of the 14th International Congress of the Phonetic Sciences, 1901–1904. 
Berkeley: Department of Linguistics, University of California.

Dissimilation

26

West, Paula. 1999b. Perception of distributed coarticulatory properties of English /l/ and /P/. Journal of Phonetics 27. 405 –426. West, Paula. 2000. Long-distance coarticulatory effects of English /l/ and /r/. Ph.D. dissertation, University of Oxford. Wolf, Matthew. 2007. For an autosegmental theory of mutation. In Leah Bateman, Michael O’Keefe, Ehren Reilly & Adam Werle (eds.) Papers in Optimality Theory III, 315–404. Amherst: GLSA. Woodhouse, Robert. 1998. Verner’s and Thurneysen’s Laws in Gothic as evidence for obstruent development in Early Germanic. Beiträge zur Geschichte der deutschen Sprache und Literatur 120. 194–222. Wordick, Frank. 1982. The Yindjibarndi language. (Pacific Linguistics C71.) Canberra: Australian National University. Yip, Moira. 1982. Reduplication and CV skeleta in Chinese secret languages. Linguistic Inquiry 13. 637–661. Yip, Moira. 1988. The Obligatory Contour Principle and phonological rules: A loss of identity. Linguistic Inquiry 19. 65 –100. Yip, Moira. 1989. Feature geometry and cooccurrence restrictions. Phonology 6. 349 –374. Yip, Moira. 1995. Repetition and its avoidance: The case of Javanese. In Keiichiro Suzuki & Dirk Elzinga (eds.) Proceedings of the 1995 Southwestern Workshop on Optimality Theory (SWOT). Coyote Papers: Working Papers in Linguistics. Conference Proceedings vol. 5, 238–262. Tucson: University of Arizona Linguistics Circle. Yip, Moira. 1998. Identity avoidance in phonology and morphology. In Steven Lapointe, Diane Brentari & Patrick Farrell (eds.) Morphology and its relation to phonology and syntax, 216 –246. Stanford: CSLI.

61 Hiatus Resolution

Roderic F. Casali

1 Overview of hiatus resolution

The term vowel hiatus is commonly used to refer to a sequence of adjacent vowels belonging to separate syllables, as in the following Hawaiian examples from Senturia (1998: 26). (Periods indicate syllable boundaries.)

(1)  [ko.a.na]   ‘space’
     [li.le.a]   (name of a shell)
     [ku.a]      ‘back’
     [hu.e.lo]   ‘tail’
     [hu.i.na]   ‘sum’
     [ko.e.na]   ‘remainder’

In some languages, vowel hiatus is permitted quite freely. Other languages place much stricter limits on the contexts in which heterosyllabic vowel sequences can occur, while some disallow them entirely. Languages that do not permit vowel hiatus may employ any of several processes that eliminate it in cases where it would otherwise arise (e.g. where an underlying vowel-final morpheme directly precedes a vowel-initial morpheme). One of the most common forms of hiatus resolution involves the elision of one of the two vowels. (See chapter 68: deletion.) Vowel elision is illustrated below with examples from Yoruba, adapted from Pulleyblank (1988).

(2)  /bu ata/   →  [ba.ta]    ‘pour ground pepper’
     /gé olú/   →  [gó.lú]    ‘cut mushrooms’
     /ta epo/   →  [te.po]    ‘sell palm oil’
     /kF èkF/   →  [ké.kE]    ‘learn’
     /7a DwG/   →  [7D.wG]    ‘buy a broom’

In all of these examples, it is the first of the two adjacent vowels (V1) that deletes. Though this is the more common pattern cross-linguistically, cases in which the second vowel (V2) deletes are also attested (and indeed, some instances of V2 deletion are found in Yoruba itself).

In another very common hiatus resolution process, glide formation, V1 is converted to a semivowel (see also chapter 15: glides). One well-known case, illustrated in (3), is Ganda (Tucker 1962; Katamba 1985; Clements 1986).1

(3)  /mu-iko/    →  [mwiː.ko]     ‘trowel’    cf. [mu-leːnzi] ‘boy’
     /li-ato/    →  [ljaː.to]     ‘boats’     cf. [li-ggwa] ‘thorn’
     /mu-ezi/    →  [mweː.zi]     ‘moon’
     /mu-ogezi/  →  [mwoː.ge.zi]  ‘talker’
     /mi-ezi/    →  [mjeː.zi]     ‘moons’
     /mu-ana/    →  [mwaː.na]     ‘child’

In general (we will look at an exception in §2.4 below), glide formation in Ganda applies only where V1 is high. Non-high V1’s are elided before another vowel, with compensatory lengthening of V2 (e.g. /ka-oto/ ‘small fireplace’ > [koːto]). A third common pattern, coalescence, involves the merger of V1 and V2 to form a third vowel that combines features of both. This is illustrated in the Attic Greek examples below (de Haas 1988: 126). In these examples, various underlying sequences that combine a non-high [−ATR] vowel /a e ɔ/ with a mid [+ATR] vowel /e o/ are realized phonetically as a long mid [−ATR] vowel that retains the backness and roundness of the original [+ATR] vowel.

(4)  /gene-a/      →  [gé.neː]       (/ea/ > [eː])    ‘race (nom acc pl)’
     /tiːma-omen/  →  [tiː.mɔː.men]  (/ao/ > [ɔː])    ‘honor (1pl pres ind)’
     /ajdo-a/      →  [aj.dɔː]       (/oa/ > [ɔː])    ‘shame (acc sg)’
     /deːlo-eːte/  →  [de.lɔː.te]    (/oeː/ > [ɔː])   ‘manifest (2pl pres subj)’
     /zdeː-omen/   →  [zdɔː.men]     (/eːo/ > [ɔː])   ‘live (1pl pres subj)’

Note that for the pairs /a o/ and /eː o/, coalescence in Attic Greek is symmetric; the phonetic result is the same for both orders of input vowels.2 Other languages with symmetric coalescence include Quebec French, Korean, Rotuman, Old Portuguese, and Classical Sanskrit (all discussed in de Haas 1988), and Afar (Bliese 1981). Symmetric coalescence is relatively uncommon, however. Much more frequently, coalescence applies only when the vowels occur in one of the two possible orders (see §2.3 below). Other languages avoid hiatus by retaining both vowels but syllabifying them into the nucleus of a single syllable, a process generally known as diphthong formation or diphthongization. This occurs in Ngiti, as illustrated in the following examples, adapted from Kutsch Lojenga (1994: 90–91).

(5)  /abvo àji/    →  [a.bvoà.ji]     ‘widow’
     /tÃtG akpà/   →  [tÃ.tGa.kpà]    ‘liar’
     /opi àji/     →  [o.pià.ji]      ‘Lendu woman’
     /ÃnÕÈÃ akpà/  →  [Ãn.ÕÈÃa.kpà]   ‘male goat’
     /fà FJÈ/      →  [fàF.JÈ]        ‘our food’
     /fÈkË o+i/    →  [fÈ.kËó.+i]     ‘your (pl) knives’

1 It is also common to find cases in which the second of two vowels becomes non-syllabic, e.g. /gene-i/ ‘race (dat sg)’ > [.ge.nej.] in Attic Greek (de Haas 1988: 126). Generally, such cases are potentially analyzable as diphthong formation. See Senturia (1998: 12–15) for examples and related discussion.
2 This is not the case in Attic for the pair /a e/: /a+e/ yields [eː], while /e+a/ yields [ea].

Kutsch Lojenga states that “both vowels must be realised as a short complex vowel nucleus on one V timing slot.” She further notes (personal communication) that the first vowel in each sequence is shorter in duration than the second vowel, though not to the point where any auditory distinctions among vowels in V1 position are neutralized. This argues against an analysis (i.e. glide formation) in which V1 is syllabified as a consonantal onset. Other languages that exhibit diphthong formation include Haitian Creole (Picard 2003), Indonesian (Rosenthall 1997), Attic Greek (Senturia 1998 and references therein), Obolo (Faraclas 1982), Bakossi (Hedinger and Hedinger 1977), Eastern Ojibwa (Howard 1973), Margi (Tranel 1992), and Larike (Rosenthall 1997). Finally, an obvious means of eliminating vowel hiatus is to epenthesize a consonant between the two vowels. One language in which this occurs is Washo, as illustrated in the examples below, adapted from Midtlyng (2005); in each example a semivowel [j] is inserted between the initial vowel of a suffix and the final vowel of a preceding morpheme.

(6)  a. /ˈlaːdu-a/       →  [ˈlaː.du.ja]       ‘in my hand’ (my hand-loc)
     b. /leˈguʔu-iʔ/     →  [le.ˈgu.ʔu.ji]     ‘my daughter’s child’ ((1sg obj) mother’s mother-attrib-ag)
     c. /ˈlemts’iha-i/   →  [ˈlém.ts’i.ha.ji]  ‘I am waking him up’ (I cause to awake-imp)
     d. /ˈlemlu-ˈeːs-i/  →  [ˈlém.lu.ˈjeː.si]  ‘I am not eating’ (I eat-neg-imp)

Though these hiatus resolution strategies have been presented independently using data from different languages, it is common to find two or more different strategies at work in the same language (see §2.5). It is also common to find that languages tolerate hiatus in some contexts but not others. A number of factors are capable of blocking or influencing hiatus resolution, including the nature of the prosodic or morphosyntactic boundary at which hiatus arises (Kaisse 1977; Baltazani 2006), prominence factors such as stress (Senturia 1998), vowel length and tone (Casali 1998: 73), minimal word length or weight conditions, the lexical or functional status of particular morphemes, rate of speech, and sensitivity to particular lexical items. Hiatus resolution also sometimes shows derived environment effects (see chapter 88: derived environment effects), in which hiatus is tolerated in vowel sequences internal to a morpheme, but is eliminated in cases where two vowels come together across a morpheme boundary. Finally, morphemes consisting of just a single vowel are sometimes resistant to loss through elision, presumably due to the loss of semantic content that could result (Casali 1997).

Hiatus resolution can also arise in cases where three (or more) underlying vowels occur in sequence. Such cases are considerably less common, and it is difficult to make many strong generalizations about the resolution of /V1V2V3/ sequences. Attested outcomes include gliding of V2 (e.g. Eastern Ojibwa [Howard 1973]; Ganda [Clements 1986: 75]), and elision of both V1 and V2 (Baka [Parker 1985]).

The remainder of this paper is organized as follows. In §2, I describe some major respects in which hiatus resolution processes vary across languages. §3 discusses the treatment of hiatus resolution within various theoretical models and some associated challenges and issues. The paper concludes with a brief summary in §4.

2 Typological variation

Hiatus resolution patterns show considerable variation across languages, and any survey of this variation in a work of the present paper’s scope will necessarily be selective.3 Here we will look at certain aspects of variation involving consonant epenthesis (§2.1), vowel elision (§2.2), coalescence (§2.3), and glide formation (§2.4), as well as the co-occurrence of multiple processes within a single language (§2.5).

2.1 Consonant epenthesis

A question that naturally arises in looking at hiatus resolution by consonant epenthesis is which consonants can function epenthetically as hiatus interrupters. Three possibilities seem reasonably well attested:

(i)   A semivowel, usually one that is homorganic with (i.e. shares the same frontness or roundness as) V1 or V2.
(ii)  A glottal stop ([ʔ]) or fricative ([h]).
(iii) A coronal consonant, generally [t] or a rhotic.

By far the most common pattern (Picard 2003; Uffmann 2007) is the first one. This is sometimes explained (see for example Uffmann 2007) by assuming that homorganic glide epenthesis is in some sense different from (and less costly than) epenthesis of an entirely new segment, since the glide might be interpreted as a prolongation of phonological content that is already present. However, there are also languages – e.g. Ait Seghrouchen Berber (Senturia 1998), Galician (Picard 2003), and Washo (Midtlyng 2005) – that consistently epenthesize [j], regardless of the featural content of adjacent vowels, and at least one language, Chamicuro (Parker 1989; de Lacy 2006), with consistent [w]-epenthesis. An example of a language with glottal stop epenthesis is Malay (Ahmad 2001). The examples below show insertion of a glottal stop between a CV prefix and a vowel-initial root:

(7)  /di-ubah/   →  [diʔubah]   ‘to change (pass)’
     /sə-indah/  →  [səʔindah]  ‘to be as beautiful as’
     /sə-elok/   →  [səʔeloʔ]   ‘to be as pretty as’
     /di-olah/   →  [diʔolah]   ‘to beguile (pass)’
     /di-aŋkat/  →  [diʔaŋkat]  ‘to lift (pass)’

Other languages that epenthesize [ʔ] in at least some hiatus contexts include Ilokano, Selayarese, Tunica, and Indonesian (see Lombardi 2002 and references therein).

3 One topic that is not treated, for reasons of space, is the typology of diphthong formation. See Schane (1987), Sohn (1987), Rosenthall (1994), and Senturia (1998) for some discussion.

A well-known case of epenthesis of a coronal consonant in hiatus contexts is Axininca Campa (Payne 1981; Lombardi 2002; Baković 2003), illustrated in the examples below (Payne 1981):

(8)  /i-N-koma-i/     →  [iŋkomati]     ‘he will paddle’
     /i-N-koma-aa-i/  →  [iŋkomataati]  ‘he will paddle again’

These examples show an epenthetic [t] interrupting vowel hiatus in suffixal contexts; hiatus in prefixal contexts is resolved in Axininca Campa by eliding one of the vowels instead. The problem of predicting the range of possible epenthetic consonants has received significant attention in recent theoretical work. This is discussed further in §3.3.3 below.

2.2 Vowel elision

A natural question that arises in connection with vowel elision is which of two adjacent vowels elides. Cross-linguistically, elision of V1 is far more common than elision of V2 (Bergman 1968; Lamontagne and Rosenthall 1996; Casali 1997, 1998). Interestingly, it turns out that the contexts in which V2 elision is well attested are not random. Clear cases of V2 elision are largely confined to two contexts: (i) the boundary between a lexical (content) word and a following function word, and (ii) stem–suffix boundaries.4 Examples of the former type, from Etsako (Elimelech 1976), are shown in (9). Note that the latter also display V1 elision of the final vowel of a preceding function word, suggesting rather strongly that it is lexical or non-lexical status, and not simple linear order, that is relevant in this case (see also chapter 104: root–affix asymmetries).

(9)  /Dna aru Dli/  →  [Dnaruli]   ‘that louse’
     the louse that
     /Dna e:i Dna/  →  [Dne:ina]   ‘this tortoise’
     the tortoise this

Examples of the latter type, adapted from Okpe (Pulleyblank 1986), are shown in (10).

(10)  /è-sé-ó/  (inf-fall-inf)  →  [èsé]  ‘to fall’
      /è-dé-F/  (inf-buy-inf)   →  [èdé]  ‘to buy’

Compare these forms with the additional Okpe words in (11), where the final V suffix is retained following an underlying high vowel, which undergoes glide formation.

4 In addition to the more common cases in which the elided vowel is one that occupies a particular position, there are also cases (see Casali 1996, 1998; Causley 1999b) in which the vowel targeted depends on the featural makeup of the two vowels.

(11)  /è-tí-ó/  (inf-pull-inf)  →  [ètjó]   ‘to pull’
      /è-sË-F/  (inf-sing-inf)  →  [èswF]   ‘to sing’

At other kinds of morphosyntactic boundaries, such as that between a prefix and following root or between two content words, elision regularly targets V1. The cross-linguistically well-attested possibilities are summarized below. (See Casali 1997 for more discussion.)

(12)  Vowel elision and morphosyntactic position

      Context                              Robustly attested possibilities
      Between two content words            V1 elision
      Content word before function word    V1 elision or V2 elision
      Prefix + root                        V1 elision
      Root + suffix                        V1 elision or V2 elision
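The generalizations in (12) amount to a simple lookup from morphosyntactic context to attested elision targets. The following Python sketch (an illustration of mine, not part of the original survey; the context labels are invented) encodes them directly:

    # A minimal sketch (mine; the context labels are invented) encoding the
    # typology in (12) as a lookup from context to robustly attested targets.
    ATTESTED_ELISION = {
        "content + content":       {"V1"},
        "content + function word": {"V1", "V2"},
        "prefix + root":           {"V1"},
        "root + suffix":           {"V1", "V2"},
    }

    def attested_targets(context):
        """Which vowel(s) may robustly elide at this kind of boundary."""
        return ATTESTED_ELISION[context]

    assert attested_targets("prefix + root") == {"V1"}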

2.3 Coalescence

As noted previously, symmetric coalescence, as in the Attic Greek data in (4), is relatively rare. By far the most common form of coalescence is a directionally asymmetric pattern, termed height coalescence in Casali (1998) (see also Lamontagne and Rosenthall 1996; Parkinson 1996), in which a non-high V1 and a high V2 coalesce to form a non-high vowel otherwise identical to V2, e.g. /a+i/ > [e], /a+u/ > [o], as in the Xhosa examples below (Aoki 1974).

(13)  /wa-inkosi/  →  [wenkosi]  ‘of the chiefs’
      /wa-umfazi/  →  [womfazi]  ‘of the woman’

The reverse sequences /i+a/ and /u+a/ are not subject to coalescence in Xhosa, but are resolved instead by vowel elision and glide formation, respectively (see §2.5 below). Languages in which the feature [ATR] is contrastive sometimes show a slightly more elaborate form of asymmetric height coalescence, in which the [ATR] value of a non-high V1 is preserved in some cases as well. Such languages divide into two types: those in which [−ATR] is systematically preserved (e.g. /a+i/ > [e], /e+o/ > [D]), and those in which [+ATR] is preserved (e.g. /a+i/ > [e], /o+// > [e]). Languages of the former type include Owon Afa (Awobuluyi 1972) and Anufo (Adjekum et al. 1993). Languages of the latter type include several North Guang languages and Southern Sotho (Casali 1998, 2003 and references therein).5 Though asymmetric height coalescence most commonly applies to sequences in which V1 is lower than V2, cases of “reverse height coalescence” also exist in which a higher V1 followed by a lower V2 yields a lowered version of V1 (e.g. /i+a/ > [e], /u+a/ > [o]), while the opposite sequences do not trigger coalescence. This occurs in Foodo (Kwa; Ghana; Plunkett 1991: 68), as shown below. (The initial and final /a/’s are noun class affixes. The tonal changes are due to independent processes discussed in Plunkett.)

(14)  a. /i+a/ > [eː]  /á-bì-á/  →  [ábêː]  ‘seeds’   cf. [dí-bí-lì]  ‘seed’
      b. /Á+a/ > [Dː]  /á-sË-á/  →  [àsFː]  ‘ears’    cf. [kÈ-sË]     ‘ear’
      c. /u+a/ > [oː]  /á-jù-á/  →  [ájôː]  ‘millet’  cf. [dú-jú-lì]  ‘millet’

5 The particular [ATR] value that is preserved under height coalescence shows a strong correlation with a language’s vowel inventory structure; see Casali (1998, 2003) and Causley (1999a) for discussion.

Sequences in which V2 is high and V1 is non-high do not undergo coalescence; compare the /Á+a/ sequence in (14b) with the /a+Á/ sequence in /kÈ-tá-Ë/ ‘bow’, which is retained in the surface form, [kÈtáË]. Other languages with reverse height coalescence patterns include Tem (Tchagbale 1976; de Craene 1986), Chagga (Nurse and Philippson 1977; Saloné 1980), Ewe (Westermann 1930), Bakossi (Hedinger and Hedinger 1977), and Nkengo (Hulstaert 1970). Interestingly, such patterns seem to occur predominantly at root–suffix boundaries, a restriction that partly parallels some limitations on the distribution of V2 elision (§2.2).

A further coalescence pattern that should presumably be expected to occur is one in which front unrounded and back rounded vowels coalesce to form a front rounded vowel, e.g. /i+u/ > [y], /e+o/ > [ø], etc. Patterns of this type appear to be considerably less common than height coalescence. Two possible cases, Rotuman and Korean, are discussed in de Haas (1988) (see also Sohn 1987; Rice 1995; Causley 1999a). Coalescence of /e+o/ to [ø] is also described in Obolo (Faraclas 1982).

2.4 Glide formation

In Ganda (cf. (3) above) and quite a few other languages, both front and back V1’s are subject to glide formation. It is also quite common, however, to find that only back round vowels glide and that front V1’s trigger a different resolution strategy, most commonly elision. This is the case for example in Xhosa (see §2.5 below) and Chumburung (Snider 1985). Though they are seemingly less common, there are also languages (e.g. Polish; Rubach 2000) in which only front vowels glide.

A second point of variation involves the height of V1. Generally, if a language has glide formation at all, high V1’s will undergo the process (Rosenthall 1994, 1997; Casali 1995). In some languages (e.g. Ebira; Adive 1989), only high V1’s glide. In quite a large number of other languages, however, mid V1’s also glide.6 One such case, Chicano Spanish, is illustrated in the examples below (from Baković 2007, with phonemic forms substituted for orthographic ones):

(15)  a. /mi ultima/      →  [mjultima]     ‘my last one (fem)’
         /mi obɾa/        →  [mjoβɾa]       ‘my deed’
         /tu epoka/       →  [twepoka]      ‘your time’
         /tu alma/        →  [twalma]       ‘your soul’
      b. /me uɾxe/        →  [mjuɾxe]       ‘it is urgent to me’
         /poɾke a beses/  →  [poɾkjaβeses]  ‘because sometimes’
         /komo eba/       →  [komweβa]      ‘like Eva’
         /lo abla/        →  [lwaβla]       ‘speaks it’

6 In rare cases, e.g. Aghem (Hyman 1979), languages may glide the low vowel /a/ as well.

Further variation exists as well. In some languages, glide formation does not apply to sequences in which V1 and V2 share the same frontness and roundness. In Gichode (Casali 1998: 168–169), for example, glide formation of a round vowel occurs only before non-round vowels, e.g. /u+i/ > [wi] but /Á+o/ > [o] (*[wo]). (Contrast this with realization of /u+o/ as [wo] in Ganda, as in (3) above.) Glide formation is also blocked in some languages (e.g. Ganda; Clements 1986) following certain consonants.7 Typically, both sorts of restrictions can be attributed to constraints that are effective quite generally in the language (e.g. languages that fail to glide /u/ or /o/ before a round vowel typically lack [Cw] before round vowels in general). Finally, some languages impose less stringent restrictions on glide formation when V1 occurs in absolute word-initial position. In Ganda, for example, only high V1’s generally glide in word-internal /CV1-V2/ sequences. Word-initially, however, mid and even low V1’s undergo glide formation (in this case without compensatory lengthening), as in the examples below (Clements 1986: 75, n. 1):8

(16)  /o-a-gula/  →  [wagula]  ‘you (sg) bought’
      /a-a-gula/  →  [jagula]  ‘he/she bought’
      /e-a-laba/  →  [jalaba]  ‘it (cl. 9) saw’

7 In some languages, glide formation following certain consonants triggers further changes, e.g. /siV/ and /ziV/ are realized as [œV] and [ÚV] respectively in Ebira (Adive 1989).
8 Intervocalic gliding of non-high vowels also occurs in some three-vowel sequences discussed by Clements, as in /te-a-a-gula/ ‘he/she didn’t buy’, realized as [tejagula].

Rather similar patterns are reported in Nyarwanda (Kimenyi 1979). Notwithstanding the considerable variation that exists in its patterning, there is one very significant respect in which the behavior of glide formation is surprisingly regular across languages. Quite consistently, (non-word-initial) sequences in which V1 and V2 are identical regularly fail to undergo glide formation. We can illustrate this restriction with additional examples from Ganda (Clements 1986):

(17)  /mi-iko/  →  [miːko]  (*[mjiːko])  ‘trowels’
      /lu-uji/  →  [luːji]  (*[lwuːji])  ‘side’

Moreover, sequences such as /o+u/ and /e+i/, in which V1 and V2 are both front or both round and V1 is lower than V2, rarely if ever trigger glide formation, but are resolved instead by vowel elision or coalescence (Casali 1995, 1998: 172, n. 5). Exceptions to these generalizations clearly arise in absolute word-initial position in some languages, as in the Ganda example in (16) above. I am not aware of any languages that consistently violate these restrictions word-internally, however.

2.5 Multiple hiatus resolution strategies in the same language

It is quite common to find two or more different hiatus resolution processes at work in the same language. In some such cases, different processes are operative in different morphosyntactic contexts. In Axininca Campa (Baković 2003) and Washo (Midtlyng 2005), for example, hiatus is resolved by vowel elision at a prefix–stem boundary but by epenthesis at a stem–suffix boundary. In Lugisu (Brown 1970), a sequence /a+i/ is resolved by coalescence (to [e]) across a word boundary, but by eliding /a/ word-internally. There are also many cases, however, in which multiple strategies apply in exactly the same morphosyntactic context, targeting different vowel sequences. Especially common are cases (see Casali 1998: 83–84) in which vowel elision occurs along with glide formation, coalescence, or both. Languages with both vowel elision and glide formation (but not coalescence) include Ganda, Etsako (Elimelech 1976), Igede (Bergman 1968), and Chicano Spanish (Baković 2007). Languages with coalescence and vowel elision (but not glide formation) include Afar (Bliese 1981) and Owon Afa (Awobuluyi 1972). Particularly intricate patterns are found in a considerable number of languages (32 cases are listed in Casali 1998: 83–84) that manifest all three processes. One such language is Xhosa (McLaren 1955; Aoki 1974), whose hiatus resolution alternations conform to the following generalizations:

(18)  Hiatus resolution in Xhosa
      a. Where V1 is non-high and V2 is high, the outcome is a [−high] version of V2.
      b. A round V1 undergoes glide formation before a following non-round vowel.9
      c. Elsewhere, V1 elision applies.

The overall pattern corresponding to these generalizations is shown below in Table 61.1. (In the original table, coalescent realizations are underlined and those involving glide formation are italicized; note that in the case of the input /o+i/, both coalescence and glide formation apply.)

Table 61.1  Glide formation, coalescence, and vowel elision (in Xhosa)

                    V2
              i    e    a    o    u
      V1  i   i    e    a    o    u
          e   e    e    a    o    o
          a   e    e    a    o    o
          o   we   we   wa   o    o
          u   wi   we   wa   o    u

9 Aoki’s description implies that glide formation should apply before round vowels as well, e.g. /u+o/ > [wo], but he gives no examples of such realizations. In contrast, McLaren’s data and explicit statements (1955: 10) strongly suggest that gliding of /u/, /o/ occurs only before non-round vowels. I follow McLaren’s account here.
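Since the generalizations in (18) apply in a fixed order, they can be read as a small decision procedure. The Python sketch below (an illustration of mine, not Aoki’s or McLaren’s formalism) implements (18a–c) and reproduces every cell of Table 61.1:

    # A minimal sketch treating (18) as an ordered decision procedure.
    HIGH, ROUND = {"i", "u"}, {"o", "u"}
    LOWERED = {"i": "e", "u": "o"}

    def xhosa(v1, v2):
        out = v2
        if v1 not in HIGH and v2 in HIGH:     # (18a) height coalescence
            out = LOWERED[v2]
        if v1 in ROUND and v2 not in ROUND:   # (18b) glide formation
            out = "w" + out
        return out                            # (18c) otherwise V1 simply elides

    for v1 in "ieaou":
        print(v1, [xhosa(v1, v2) for v2 in "ieaou"])
    # e.g. xhosa("o", "i") == "we": coalescence and glide formation both apply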

Examples illustrating some of these realizations in Xhosa (Aoki 1974) are shown below:

(19)  /esisu-ini/10  →  [esiswini]  ‘stomach (loc)’
      /ni-o–a/       →  [no–a]      ‘you roast’
      /ndi-akha/     →  [ndakha]    ‘I build’
      /ni-enza/      →  [nenza]     ‘you make’
      /wa-ejele/     →  [wejele]    ‘he fell in’
      /a+a-oni/      →  [a+oni]     ‘wrong doers’
      /a+a-akhi/     →  [a+akhi]    ‘builders’
      /wa-inkosi/    →  [wenkosi]   ‘of the chiefs’
      /wa-umfazi/    →  [womfazi]   ‘of the woman’
      /esilo-ini/    →  [esilweni]  ‘animal’

The descriptive summary of the Xhosa patterns in (18) illustrates something that is quite typical of languages that combine vowel elision with glide formation and/or coalescence, which is that it is generally possible to regard vowel elision as a kind of default process. That is, the simplest way of describing the relevant generalizations is often to specify the conditions under which glide formation and/or coalescence apply, with a statement that vowel elision applies elsewhere.

All three processes – vowel elision, glide formation, and coalescence – can occur either with or without compensatory lengthening, depending on the language (see chapter 64: compensatory lengthening). Typically, if compensatory lengthening applies with one process it will apply with the others as well. Thus, Ganda shows compensatory lengthening with both vowel elision and glide formation, while Xhosa does not show compensatory lengthening with either of these, nor with coalescence. It also appears generally true that languages (e.g. Ganda) with contrastive vowel length manifest compensatory lengthening while those with no phonemic length do not, but it remains to be seen how universal this correlation is.

3 Theoretical treatments and issues

3.1 Early generative phonology

Many analyses of hiatus resolution patterns in particular languages (e.g. Brown 1970; Aoki 1974; Phelps 1975, 1979; Elimelech 1976; Halle 1978; Shaw 1980; Snider 1985) were carried out within early generative phonological frameworks conforming roughly to the model proposed in Chomsky and Halle (1968) or its offshoots. In such models, hiatus resolution processes are due to the operation of language-specific phonological rules. To account for the Xhosa hiatus resolution patterns in (19), for example, Aoki (1974: 239) posits three ordered rules of Vowel Lowering, Glide Formation, and Vowel Deletion, which (with minor notational adjustments) are essentially those in (20):

10 Aoki (1974: 238) displays the underlying forms of the first and last forms in (19) as /isisu-ini/ and /isilo-ini/, respectively, but describes the lowering of the word-initial vowel from /i/ to [e] as a morphosyntactic replacement, suggesting that initial /e/ is present underlyingly.

(20)  a. Vowel Lowering
         V → [−high] / [V, −high] __

      b. Glide Formation
         [V, +round] → [−vocalic] / __ V

      c. Vowel Deletion
         V → Ø / __ V

Derivations illustrating the operation of these rules are shown below (Aoki 1974: 240):11

(21)  Underlying Form   /wa-umfazi/   /esisu-ini/   /esilo-ini/
      Vowel Lowering    wa-omfazi     —             esilo-eni
      Glide Formation   —             esisw-ini     esilw-eni
      Vowel Deletion    w-omfazi      —             —
      Output            [womfazi]     [esiswini]    [esilweni]

11 The derivations in (21) differ slightly from those shown in Aoki due to an apparent typo in his derivation of [esiswini] and a minor (and irrelevant) difference in choice of underlying forms (see note 10).
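The effect of ordered rule application can be made concrete with a small simulation. The following Python sketch (my own recoding, not Aoki’s formalism; the segment classes are simplified to the five plain vowels) applies rough analogues of the rules in (20) in Aoki’s order and reproduces the derivations in (21):

    # A minimal sketch: rough analogues of the rules in (20) apply in a
    # fixed order to a hyphen-free string.
    VOWELS = set("aeiou")
    LOWERED = {"i": "e", "u": "o"}     # high vowel -> its [-high] counterpart

    def vowel_lowering(form):          # (20a): V -> [-high] after a non-high vowel
        out = list(form)
        for i in range(1, len(out)):
            if out[i] in LOWERED and out[i - 1] in VOWELS - {"i", "u"}:
                out[i] = LOWERED[out[i]]
        return "".join(out)

    def glide_formation(form):         # (20b): a [+round] vowel -> glide before V
        out = list(form)
        for i in range(len(out) - 1):
            if out[i] in ("u", "o") and out[i + 1] in VOWELS:
                out[i] = "w"
        return "".join(out)

    def vowel_deletion(form):          # (20c): V -> zero before another V
        return "".join(seg for i, seg in enumerate(form)
                       if not (seg in VOWELS and i + 1 < len(form)
                               and form[i + 1] in VOWELS))

    def derive(underlying):
        form = underlying.replace("-", "")
        for rule in (vowel_lowering, glide_formation, vowel_deletion):
            form = rule(form)
        return form

    # Reproduces the derivations in (21):
    assert derive("wa-umfazi") == "womfazi"
    assert derive("esisu-ini") == "esiswini"
    assert derive("esilo-ini") == "esilweni"

Note that the ordering matters: if Vowel Deletion applied before Vowel Lowering, /wa-umfazi/ would surface without the lowered [o].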

The formal apparatus of early generative phonology frequently offered multiple possibilities for analyzing a given pattern. For example, in contrast to Aoki’s analysis of Xhosa coalescence using separate vowel lowering and elision rules, other researchers (e.g. Phelps 1975, 1979; Halle 1978) treated very similar patterns in other languages using a type of rule, known as a transformational rule, which is capable of simultaneously affecting (and, in the case of coalescence, merging the features of) two different segments. Perhaps not surprisingly, much of the literature on hiatus resolution patterns of this period focused on issues of rule formulation and the related question of when the rules for two potentially related processes might appropriately be collapsed into a single rule. Aoki’s paper, which provides extensive arguments against a transformational rule analysis of coalescence (on the grounds that it is arbitrary and unrevealing and that it leads to an unnecessary increase in the complexity and power of the theory), is itself an interesting case in point. Other relevant work includes Brown (1970), Harms (1973), Hyman (1973), Shaw (1980), Snider (1985), and an extended debate (Chomsky and Halle 1968; Phelps 1975, 1979; Halle 1978) over some particularly intricate patterns in Kasem.

3.2 Autosegmental and non-linear generative phonology

The late 1970s and 1980s saw the development of alternative and greatly elaborated autosegmental or non-linear conceptions of phonological structure in which some or all phonological features are assumed to occur on separate structural tiers (see chapter 14: autosegments). A number of studies of hiatus resolution phenomena (e.g. Katamba 1985; Clements 1986; Pulleyblank 1986, 1988; Sohn 1987; de Haas 1988; Snider 1989) were carried out using such models. We will look at one representative (and influential) case in some detail, Clements’ (1986) treatment of glide formation and elision in Ganda (see (3) above). Clements’ analysis employs the rules in (22).12

(22)  a. Glide Formation

         C  V  V
            |
         [+high, −cons]

         (the [+high, −cons] segment is delinked from its V slot and surfaces linked to the preceding C as a glide)

      b. Non-high Vowel Deletion

         V  V
         |
         [−high, −cons]

         (the [−high, −cons] segment is delinked from its V slot before another vowel)

An appealing feature of Clements’ analysis is that it provides a very straightforward account of the compensatory lengthening that accompanies both elision and glide formation in Ganda. Both rules in (22) have the effect of delinking a V element from its associated vowel features. This is illustrated below for the case of vowel elision. (23a) shows the underlying form corresponding to /ka-oto/ ([koːto]) ‘small fireplace’ within Clements’ model, while (23b) shows the representation that results when this form is subjected to rule (22b) (Non-high Vowel Deletion), which delinks /a/ from its associated V element, in conjunction with a further (universal) convention that is assumed to delete unassociated segments (in this case the delinked /a/).

(23)  a.  C V V C V        b.  C V V C V
          | | | | |            |   | | |
          k a o t o            k   o t o

The parallel forms in (24) illustrate the application of the Glide Formation rule (22a). (24a) shows the underlying form of /mu-ana/ ([mwaːna]) ‘child’, and (24b) shows the result of applying Glide Formation to this form.

(24)  a.  C V V C V        b.  C  V V C V
          | | | | |            |\   | | |
          m u a n a            m u  a n a

12 Glide Formation as formulated in (22a) does not account for the cases where non-high vowels glide word-initially in (16). Clements proposes an additional rule to account for these, which we will not treat here.

Following the application of these rules, the forms in (23b) and (24b) both contain an unassociated V element. Clements assumes that there is a universal Linking Convention that has the effect of automatically reassociating such an unassociated V element to an accessible vowel segment (subject to a general prohibition on crossing of association lines). Applied to the representations in (23b) and (24b), this convention yields the representations in (25a) and (b), respectively.

(25)  a.  C V V C V        b.  C  V V C V
          | \ / | |            |\ \ / | |
          k  o  t o            m u a  n a

In these surface representations, V2 emerges as a long vowel, since it is linked to two V elements (see chapter 54: the skeleton). This account encodes quite directly the intuition that compensatory lengthening involves the transfer of duration from one segment to another.
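The delink-and-relink mechanics just illustrated can be compressed into a few lines of code. The Python sketch below (my own simplification, which flattens Clements’ representations into strings; “ː” stands in for a doubly linked, i.e. long, vowel) derives the Ganda outcomes for elision and glide formation, as well as, anticipating the Twin Vowel Deletion rule introduced directly below, identical-vowel sequences:

    # A minimal sketch: the vowel that keeps its V slot also inherits the
    # freed slot, so it surfaces doubly linked, i.e. long.
    HIGH = {"i", "u"}
    GLIDE = {"i": "j", "u": "w"}

    def resolve_vv(v1, v2):
        if v1 == v2:                   # Twin Vowel Deletion (26), ordered first
            return v2 + "ː"            # [miːko], not *[mjiːko]
        if v1 in HIGH:                 # Glide Formation (22a)
            return GLIDE[v1] + v2 + "ː"
        return v2 + "ː"                # Non-high Vowel Deletion (22b)

    assert "k" + resolve_vv("a", "o") + "to" == "koːto"    # /ka-oto/
    assert "m" + resolve_vv("u", "a") + "na" == "mwaːna"   # /mu-ana/
    assert "m" + resolve_vv("i", "i") + "ko" == "miːko"    # /mi-iko/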

There is a further wrinkle to the analysis. As noted in §2.4 above, glide formation does not apply in Ganda to the sequences /i+i/ and /u+u/, in which V1 and V2 are identical. To prevent the Glide Formation rule (22a) from applying to these sequences, Clements posits an additional rule of Twin Vowel Deletion that is ordered before Glide Formation and functions to remove sequences of identical high vowels as possible inputs to the latter:

(26)  Twin Vowel Deletion

       V    V
       |    |
      [αF] [αF]

      (the first of two adjacent vowels bearing identical feature specifications is delinked)

This rule is applicable to words like /mi-iko/ [miːko] ‘trowels’, whose underlying form is shown in (27):

(27)  C V V C V
      | | | | |
      m i i k o

Application of Twin Vowel Deletion, along with the universal convention requiring deletion of unassociated segments and the Linking Convention that accomplishes reassociation of a free V element, will convert this to (28).

(28)  C V V C V
      | \ / | |
      m  i  k o

Since the form in (28) does not meet the structural description for Glide Formation to apply, the analysis correctly predicts [miːko] and not *[mjiːko] as the surface form. While the analysis derives the correct forms, however, the need to posit the language-specific rule (26) implies that immunity of sequences of identical vowels from glide formation is an idiosyncratic characteristic of the language. As noted in §2.4, such sequences appear to be regularly exempt from glide formation in other languages as well, suggesting that something more universal than a language-specific rule (26) is at work. (Potentially, this presents an interesting challenge not only for autosegmental models like Clements’ but for other approaches as well.)

A strong interest of many autosegmental theories is the specification of phonological features. Much research has been done in particular on the possibility of accounting for certain phonological patterns based on the assumption that only one value of a feature is phonologically specified (see chapter 7: feature specification and underspecification). Underspecification models of this type have potential implications for the analysis of vowel coalescence. One of the analytical questions that arises in connection with coalescence is what determines which features of the two merged vowels are preserved in the output. An interesting general answer to this question, pursued in a study by de Haas (1988) (see also Sohn 1987 and Snider 1989), is that the underlyingly specified features from both vowels are preserved in the output. Preservation of all specified features of both vowels under coalescence would presumably be impossible in cases where the two vowels have opposite values of some feature, since this would lead to a surface vowel simultaneously specified as both [+F] and [−F] for some feature [F]. Following previous work in radical underspecification theory, de Haas assumes that only one value of each feature is underlyingly specified. Consider in this regard the symmetric coalescence of /o/ and /a/ to [ɔ] in Attic Greek, as in the relevant forms in (4) above, repeated here as (29).

(29)  /tiːma-omen/  →  [tiː.mɔː.men]  (/ao/ > [ɔː])  ‘honor (1pl pres ind)’
      /ajdo-a/      →  [aj.dɔː]       (/oa/ > [ɔː])  ‘shame (acc sg)’

In de Haas’s underspecification analysis, /a/ is specified only as [+low] and [+back] at the point where coalescence applies, while /o/ is specified only as [+round]. Combining all three feature values yields a [+low], [+back], [+round] vowel, which in de Haas’s analysis is equivalent to [ɔ].

Many autosegmental treatments of hiatus resolution processes were also concerned with the relationship between hiatus resolution and syllable structure and attempted to establish a formal connection between the two. In the model of de Haas (1988), for example, (symmetric) coalescence is contingent on prior resyllabification of two adjacent vowels into a single syllable. Other autosegmental analyses that attempted to connect hiatus resolution to syllabification include Katamba (1985), Pulleyblank (1986), Walli-Sagey (1986), Schane (1987), and Sohn (1987).
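The logic of de Haas’s feature-merger account can be made concrete: if coalescence is the union of the two vowels’ specified features, its symmetry follows from the commutativity of set union. The Python sketch below (mine, encoding only the specifications de Haas assumes for this Attic case) illustrates this:

    # A minimal sketch: coalescence as union of specified features; radical
    # underspecification guarantees no [+F]/[-F] clash can arise.
    SPEC = {
        "a": frozenset({"+low", "+back"}),
        "o": frozenset({"+round"}),
    }
    REALIZE = {frozenset({"+low", "+back", "+round"}): "ɔ"}

    def coalesce(v1, v2):
        return REALIZE[SPEC[v1] | SPEC[v2]]

    assert coalesce("a", "o") == coalesce("o", "a") == "ɔ"  # symmetric, as in (29)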

3.3 Optimality Theory

Analyses of hiatus resolution patterns within Optimality Theory (OT) date from the early years of the paradigm and include, among other studies, Rosenthall (1994, 1997), Casali (1995, 1997, 1998), Orie and Pulleyblank (1998), Senturia (1998), Causley (1999a, 1999b), and Baković (2003, 2007). Though they differ somewhat in detail, most such analyses share the following general components:

(ii)

Some constraint (which must be highly ranked) that militates against heterosyllabic adjacent vowel sequences. There is some controversy (discussed below) over the exact identity of this constraint. For now, we will simply label it “NoHiatus.” Constraints that are violated by various hiatus resolution possibilities. Generally, vowel elision is assumed to violate a constraint Max, which requires underlying segments to be represented in surface forms. Epenthesis is assumed to violate a constraint Dep against insertion of material (as well as relevant markedness constraints against the features of the inserted consonant – see below). Diphthong formation violates a constraint NoDiph against diphthongs. Glide formation violates, minimally, a markedness constraint, here labeled *CG, against consonant + glide sequences.13 Coalescence violates a constraint Uniformity, which prohibits merger of two underlyingly distinct segments into a single segment in the output.

Given these assumptions, hiatus resolution is forced whenever NoHiatus is ranked sufficiently high. At a rough first approximation, the particular form of hiatus resolution that occurs is determined by the constraint that is ranked lowest. For example, epenthesis is predicted to occur if the constraint Dep is outranked by the remaining constraints, as illustrated in (30), using a hypothetical input /ku abo/.

(30)   /ku abo/         NoHiatus   Max   NoDiph   *CG   Uniformity   Dep
       a. .ku.a.bo.        *!
       b. .kua.bo.                          *!
       c. .ka.bo.                   *!
       d. .kwa.bo.                                  *!
       e. .ko.bo.                                           *!
    ☞  f. .ku.ʔa.bo.                                                   *
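The evaluation logic behind tableau (30) can also be sketched computationally: each candidate is scored on the ranked constraints, and the winner is the candidate with the lexicographically smallest violation profile. In the Python sketch below (an illustration of mine; violation counts are assigned by hand rather than computed from representations), re-ranking the constraints changes which repair wins, as the theory predicts:

    # A minimal sketch of OT evaluation for tableau (30).
    RANKING = ["NoHiatus", "Max", "NoDiph", "*CG", "Uniformity", "Dep"]

    CANDIDATES = {                     # candidate -> violated constraint(s)
        ".ku.a.bo.":  {"NoHiatus": 1},
        ".kua.bo.":   {"NoDiph": 1},
        ".ka.bo.":    {"Max": 1},
        ".kwa.bo.":   {"*CG": 1},
        ".ko.bo.":    {"Uniformity": 1},
        ".ku.ʔa.bo.": {"Dep": 1},
    }

    def evaluate(candidates, ranking):
        def profile(cand):
            return tuple(candidates[cand].get(c, 0) for c in ranking)
        return min(candidates, key=profile)

    assert evaluate(CANDIDATES, RANKING) == ".ku.ʔa.bo."
    # Demoting Max below Dep makes elision the optimal repair instead:
    assert evaluate(CANDIDATES,
                    ["NoHiatus", "NoDiph", "*CG", "Uniformity", "Dep", "Max"]) \
        == ".ka.bo."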

The simplified analysis sketched above would need to be significantly elaborated to account for the intricate patterns and interactions found in many languages.14 It does, however, illustrate one important general feature of OT analyses, which is that all phonological processes occur in response to some markedness constraint(s). In this case, the primary markedness constraint is the constraint labeled ‘NoHiatus’ in (30). One of the issues that has been debated is the exact nature of this constraint. In what follows, we will look briefly at this question and several other important issues that arise within OT approaches to hiatus resolution. 13

13 Under some analyses (e.g. Baković 2007), gliding of [−high] vowels will also incur violations of a constraint Ident[high], which prohibits changes to the feature [high], since the resulting semivowel [w]/[j] is assumed to be [+high].
14 For some proposed constraints relevant to compensatory lengthening which are not considered in this simplified analysis, see Rosenthall (1997).

3.3.1 What drives hiatus resolution?

Many descriptions and analyses of vowel hiatus resolution processes (e.g. Brown 1970; Mtenje 1980; Shaw 1980; Katamba 1985; Pulleyblank 1986; Walli-Sagey 1986; Sohn 1987; de Haas 1988; Wiltshire 1992; Balogné Bérces 2006) have suggested that such processes are motivated by factors related to canonical syllable structure, and in particular the need to avoid onsetless syllables (see chapter 33: syllable-internal structure and chapter 55: onsets). In OT, this notion has often been formalized by high ranking of a constraint Onset that requires syllables to have onsets, thus disallowing heterosyllabic V.V sequences which would arise in contexts where hiatus is maintained.

An alternative view is that hiatus resolution derives from an avoidance of vowel sequences, and not a requirement that all syllables have onsets. Such a view is made plausible by the observation that vowel hiatus seemingly involves unique phonetic difficulties not found with word-initial onsetless syllables. At least two kinds of difficulty have been cited. First, mutual co-articulatory interaction in a sequence of adjacent vowels tends to perturb the quality of each vowel, potentially making accurate identification of vowel qualities more difficult (Borroff 2003). A different explanation is proposed by de Haas (1988), who sees the problem as a kind of “sonority clash” or “bad syllable contact” (see chapter 49: sonority and chapter 21: vowel height). The adjacent heterosyllabic vowels have (roughly) equal sonority, whereas the preferred transition between syllables should involve a sonority trough. Under the widespread assumption that constraints exist in response to particular phonetic challenges, these considerations lend support to the view that there should be some phonological constraint that specifically excludes hiatus.

Several studies have raised novel arguments that hiatus resolution cannot always be attributed to Onset. Orie and Pulleyblank (1998) argue that attributing hiatus resolution to Onset in Yoruba misses important generalizations about the conditions that govern the distribution of different hiatus resolution strategies across different contexts. They adopt instead a constraint NoHiatus, which is violated by vowels in hiatus but not by onsetless syllables in general. Borroff (2003, 2007) presents data from a number of languages in which the same hiatus resolution patterns found with clear /VV/ sequences apply to /VʔV/ sequences as well. In Chickasaw (Borroff 2007: 57, citing Ulrich 1993), for example, /VV/ hiatus is resolved by glide epenthesis, as shown in (31a). Interestingly, the same process applies to /VʔV/, as in (31b).

(31)  a. /tof-to-a/  →  [toftowa]  ‘to spit more than once’
      b. /boʔ-a/     →  [boʔw-a]   ‘to be beaten’

On the assumption that an intervocalic [ʔ] should suffice to satisfy Onset, the fact that the same glide epenthesis process applies even when an intervocalic [ʔ] is present argues that something other than Onset is responsible for hiatus resolution in this case. Borroff (2003) argues for a constraint Vcv-Coord, motivated with reference to phonetic facts involving the sequencing of vowel gestures, which in essence requires that a consonantal target appear between two different vowels.15

15 More precisely, the label Vcv-Coord is a shorthand for a conjoined alignment constraint (see chapter 62: constraint conjunction) Align(V1, release, C1, target) & Align(C1, release, V2, target), which is described in prose as a requirement to “align the release of the first vowel in a sequence of vowels with the achievement of the target of a consonant, and align the release of that same consonant with the achievement of the target of the second vowel of a sequence” (Borroff 2003: 11).

Though the constraint is equivalent for most purposes to NoHiatus, it is (in contrast to Onset) crucially not satisfied by an intervocalic glottal stop, which lacks an (oral) gestural target. An alternative analysis (Borroff 2007) is to assume that a prevocalic glottal stop does not in fact satisfy Onset. In either case, patterns such as these raise interesting challenges for familiar assumptions about hiatus resolution and its motivations.

3.3.2 Directionality in vowel elision Any analysis of vowel elision in hiatus contexts must account for the choice of vowel, V1 or V2 , that is elided. In rule-based models, the deleted vowel is typically specified directly in the form of the elision rule. For example, both the linear deletion rule (20c) and the autosegmental deletion rule (22b) given above stipulate deletion of the first of two adjacent vowels. In contrast, an account within Optimality Theory must assume that elision of V1 or V2 in a given context will violate different constraints, whose relative ranking determines which outcome occurs. The problem then becomes to identify the relevant constraints. The possible rankings of these constraints should also suffice to generate the V1 or V2 elision cases that are attested cross-linguistically, without predicting patterns that are unattested. Arguably, the relevant generalizations to be accounted for are at least approximately as summarized in (12) above. A possible account of these generalizations is outlined in Casali (1997). The explanation assumes that at a prefix–root juncture or a boundary between two content words, V2 is protected by a constraint MaxMI or MaxWI, demanding, respectively, preservation of morpheme- and word-initial vowels. In addition, the analysis continues to assume a generic Max constraint that is violated by deletion of a segment in any context. The analysis also assumes a constraint MaxLex requiring preservation of segments in roots and in content words. Crucially, there are no analogous Max constraints that specifically target word- or morpheme-final position, or affixes or function words. A consequence of these assumptions is that in some contexts the constraint violations incurred by elision of V1 will be a subset of those incurred by V2 elision. At a prefix–root boundary, for example, elision of V2 violates MaxMI (since V2 is the root-initial segment), MaxLex, and (ordinary) Max, while elision of V1 violates only the latter (assuming we are dealing with a minimally CV prefix, so that V1 is not morpheme-initial).16 Since the constraint violations incurred by V1 elision in this context are a subset of those arising with V2 elision, eliding V2 in this context should, all else being equal, be more costly than eliding V1. Thus, only V1 elision is ordinarily expected in this context. This is illustrated below, using a hypothetical CV prefix and VCV root. Note that there is no ranking of the constraints under which the second candidate, with V2 elision, is optimal. (32)

/CV1-V2CV/    | MaxMI | MaxLex | Max
☞ a. CV2CV    |       |        | *
   b. CV1CV   | *!    | *      | *

16 The full analysis in Casali (1997) actually predicts that V2 elision should be possible at prefix–root boundaries in the special case of a V prefix, since V1 is protected by MaxWI and an additional constraint MaxMS requiring preservation of monosegmental morphemes. We will ignore these complications here.
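The harmonic-bounding logic behind (32) can be made concrete in a few lines of code. The following is a minimal sketch in Python, not part of Casali's (1997) own formalization; the dictionary-based candidate representation and the helper violations() are invented for illustration.

    # Each deleted segment is annotated with the two properties the
    # positional Max constraints care about: morpheme-initiality and
    # membership in a root/content word.
    def violations(deleted):
        """Violation counts for MaxMI, MaxLex, and generic Max."""
        return {
            "MaxMI": sum(1 for seg in deleted if seg["morpheme_initial"]),
            "MaxLex": sum(1 for seg in deleted if seg["in_root"]),
            "Max": len(deleted),          # any deletion violates generic Max
        }

    # /CV1-V2CV/: V1 is a prefix vowel, V2 is the root-initial vowel.
    v1 = {"morpheme_initial": False, "in_root": False}
    v2 = {"morpheme_initial": True, "in_root": True}

    elide_v1 = violations([v1])   # {'MaxMI': 0, 'MaxLex': 0, 'Max': 1}
    elide_v2 = violations([v2])   # {'MaxMI': 1, 'MaxLex': 1, 'Max': 1}

    # V2 elision incurs a superset of V1 elision's violations on every
    # constraint, so no reranking can ever make the V2-elision candidate win.
    assert all(elide_v1[c] <= elide_v2[c] for c in elide_v1)

Because the comparison holds constraint by constraint, the result is independent of ranking, which is exactly the sense in which the V2-elision candidate in (32) is harmonically bounded.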


Similarly, only V1 elision is predicted when underlying vowels abut at the boundary between two content words. In this case, both V1 elision and V2 elision violate MaxLex and general Max; the two possibilities thus tie on these constraints. However, since V2 elision violates MaxWI while V1 elision does not, the former outcome is less optimal. This is illustrated below for a sequence of two hypothetical VCV content words. (33)

/VCV1 V2CV/    | MaxWI | MaxLex | Max
☞ a. VC V2CV   |       | *      | *
   b. VCV1 CV  | *!    | *      | *

In other contexts, elision of either vowel is predicted to be possible. For example, at a root–suffix boundary V2 elision violates MaxMI but not MaxLex. Thus, V2 elision is possible if MaxLex outranks MaxMI, as shown below using a hypothetical VCV root and VC suffix: (34)

/VCV1-V2C/    | MaxLex | MaxMI | Max
   a. VCV2C   | *!     |       | *
☞ b. VCV1C    |        | *     | *

V1 elision, which violates MaxLex but not MaxMI, is predicted under the opposite ranking: (35)

/VCV1-V2C/    | MaxMI | MaxLex | Max
☞ a. VCV2C    |       | *      | *
   b. VCV1C   | *!    |        | *
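Unlike (32), the root–suffix tableaux in (34) and (35) differ only in ranking, and the reversal can be verified mechanically. The sketch below, again with invented names and Python data structures, evaluates the two candidates under both rankings using the standard lexicographic comparison of violation vectors.

    # Violation profiles for the two candidates at a VCV-root + VC-suffix
    # boundary, read off from tableaux (34) and (35).
    CANDIDATES = {
        "VCV1C (V2 elided)": {"MaxLex": 0, "MaxMI": 1, "Max": 1},
        "VCV2C (V1 elided)": {"MaxLex": 1, "MaxMI": 0, "Max": 1},
    }

    def winner(ranking):
        """The optimal candidate: lexicographically fewest violations when
        the violation vector is ordered by the ranking."""
        return min(CANDIDATES, key=lambda c: [CANDIDATES[c][k] for k in ranking])

    print(winner(["MaxLex", "MaxMI", "Max"]))  # V2 elides, as in (34)
    print(winner(["MaxMI", "MaxLex", "Max"]))  # V1 elides, as in (35)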

For roughly analogous reasons, both V1 elision and V2 elision are predicted possibilities at the boundary between a content word and a following function word; in this context V1 elision violates MaxLex but not MaxWI, while V2 elision violates only the latter. Note that this model encodes no general context-independent preference for elision of V1; the overall statistical predominance of V1 elision noted above arises indirectly from the fact that V1 elision is predicted in a wider range of contexts.

An alternative interpretation of the observed typology might suppose that there is a general context-independent preference for preservation of V2, expressible as some constraint(s), and that this can be overridden in cases where V1 occurs in a prominent position (and hence falls under the protection of some positional faithfulness constraint). The view that hiatus patterns reveal a general context-independent preference for preservation of V2 is expressed by Lamontagne and Rosenthall (1996) (see also Alderete 2003), who refer to this effect as the persistence of V2.

Finding evidence to distinguish the two accounts is not easy. There is one context, however, in which the two views potentially make different predictions: where underlying vowels come together morpheme-internally, for example due to the optional deletion of an intervening consonant. While the Casali (1997) model offers no clear predictions in such cases, a model assuming general persistence of V2 should predict, all else being equal, that V1 must elide. A number of languages


do show vowel elision in such cases, and in at least some of them, this prediction is not borne out. Yoruba (Orie and Pulleyblank 1998; Pulleyblank 1998) and Igbo (Emenanjo 1972) both elide V2 , not V1 , in such cases. Though it might be premature to rule out the possible influence of other factors in these cases, these patterns at least appear to challenge the Persistence of V2 view, especially since both languages normally show V1 elision in other contexts (which could be attributed to constraints favoring preservation of initial segments).

3.3.3 Epenthetic consonants and markedness

As noted in §2.1 above, only certain consonants are widely observed to function epenthetically as hiatus interrupters. An adequate phonological theory should explain why this is so. Within OT, the problem of explaining the range of possible epenthetic consonants is closely tied to the question of markedness (see chapter 4: markedness; chapter 12: coronals; chapter 22: consonantal place of articulation). Since epenthetic consonants, by definition, are not present underlyingly, their featural content is not affected by faithfulness constraints requiring preservation of phonological material. Consequently, the epenthetic consonant used to resolve hiatus in a given language should be the consonant that is optimal with respect to relevant markedness constraints alone, as these are ranked in the language. The predicted typological range of possible epenthetic consonants should thus follow from the set of universal markedness constraints posited, together with any restrictions (assumed in some models) on their possible rankings. We can illustrate the basic principles at issue with reference to markedness constraints on place of articulation (POA), which have received much attention in the recent literature. OT models have generally assumed markedness constraints targeting each major POA feature, e.g. the constraints *Lab, *Cor, *Dors, and *Glot, which ban, respectively, labial, coronal, dorsal (e.g. velar), and glottal consonants. All else being equal, the particular epenthetic consonant employed in a language is predicted to have the POA of whichever POA constraint is ranked lowest, e.g. a glottal consonant is expected if *Glot is lowest-ranked. In a theory in which the possible ranking of these POA constraints varies freely across languages, we should expect that any POA could function epenthetically in some language. However, some phonologists have assumed that certain places of articulation are universally more marked than others. For example, de Lacy (2006) assumes the fixed scale in (36), where “>” means “is more marked than.” (36)

dorsal > labial > coronal > glottal

It would be straightforward enough to translate this scale into a universally fixed ranking (i.e. one which is stipulated to hold in all languages as part of Universal Grammar) of POA constraints, as in (37).

(37) *Dors >> *Lab >> *Cor >> *Glot

Fixed rankings of this sort, with some disagreement over details, have played a role in a number of OT analyses (see for example Lombardi 2002). In place of such a fixed hierarchy, however, de Lacy (2006: 2) adopts a different technical implementation of the same general idea, specifically the set of freely rankable POA markedness constraints in (38):

(38)
a. *{Dors}: Assign a violation for each [dorsal] feature.
b. *{Dors,Lab}: Assign a violation for each [dorsal] and each [labial] feature.
c. *{Dors,Lab,Cor}: Assign a violation for each [dorsal], each [labial], and each [coronal] feature.
d. *{Dors,Lab,Cor,Glot}: Assign a violation for each [dorsal], each [labial], each [coronal], and each [glottal] feature.

In this system, a consonant at a POA further to the left on the scale in (36) will always incur worse violations of these POA constraints than one further to the right. This is because the violations incurred by a POA further to the left are necessarily a superset of those incurred by a POA further to the right, regardless of how these constraints are ranked, as shown below (de Lacy 2006: 50):

(39)
    | *{Dors} | *{Dors,Lab} | *{Dors,Lab,Cor} | *{Dors,Lab,Cor,Glot}
  k |    *    |      *      |        *        |          *
  p |         |      *      |        *        |          *
  t |         |             |        *        |          *
  ʔ |         |             |                 |          *
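The superset property in (39) is easy to verify computationally. The following Python sketch (an illustration only, not de Lacy's own formalization) builds the four stringency constraints from the scale in (36) and confirms that each place's marks properly contain those of every less marked place.

    # Each constraint penalizes every place at or above a cutoff on the scale
    # dorsal > labial > coronal > glottal, as in (38).
    SCALE = ["dorsal", "labial", "coronal", "glottal"]
    CONSTRAINTS = {f"*{{{','.join(SCALE[:i + 1])}}}": set(SCALE[:i + 1])
                   for i in range(len(SCALE))}

    def marks(place):
        """The set of constraints violated by a consonant at this place."""
        return {name for name, places in CONSTRAINTS.items() if place in places}

    # Reproduces table (39): k violates all four, p three, t two, ʔ one, and
    # each row's marks properly contain those of every row below it,
    # whatever the ranking.
    for higher, lower in zip(SCALE, SCALE[1:]):
        assert marks(lower) < marks(higher)     # proper subset
    for place in SCALE:
        print(place, sorted(marks(place)))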

(Crucially, there are no further POA constraints, e.g. *{Cor} or *{Cor,Glot}, targeting other individual places or place combinations.) If a fixed place markedness hierarchy of this sort were the whole story, we would predict that epenthetic consonants would always be glottals, since an epenthetic glottal consonant is always least costly according to this constraint system. This prediction is too restrictive, as it does not account for various other possibilities (e.g. coronals or a homorganic semivowel) that are reported to exist (see §2.1 above). De Lacy’s solution assumes that there are additional markedness scales that refer to dimensions other than place, and that these interact with the place markedness hierarchy to produce the observed range of typological possibilities. For example, the possibility of epenthesizing a coronal stop [t], as in Axininca Campa, follows from the assumption of an additional set of markedness constraints (this time related not to place but to manner of articulation) against high-sonority consonants in onsets, along with the further (and controversial – see Lombardi 2002; Uffmann 2007) assumption that glottal consonants [ʔ] and [h] are higher in sonority than all non-glottal consonants (see chapter 49: sonority). These assumptions motivate a constraint *Margin/Glot prohibiting glottals in syllable margins (onsets or codas). In languages in which *Margin/Glot is ranked above the relevant POA markedness constraints, glottals will be excluded as epenthetic hiatus interrupters, despite their (universal) optimality with respect to POA alone. With glottals ruled out, the predicted outcome (all else being equal) should be the POA that fares second best according to the constraint system (see (39)). This is coronal.17 The predicted outcome is illustrated in (40), using a hypothetical input /ai/.

17 See Lombardi (2002) for a similar proposal.

(40)
/ai/       | *Margin/Glot | *{Dors} | *{Dors,Lab} | *{Dors,Lab,Cor} | *{Dors,Lab,Cor,Glot}
   a. aki  |              | *!      | *           | *               | *
   b. api  |              |         | *!          | *               | *
☞ c. ati   |              |         |             | *               | *
   d. aʔi  | *!           |         |             |                 | *
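Given (39), selecting the epenthetic consonant in (40) reduces to minimizing a violation vector. The sketch below (illustrative code, with the anti-glottal margin constraint represented simply as a first vector slot) reproduces the outcome: with the glottal candidate knocked out, the coronal's stringency marks are a subset of the labial's and dorsal's, so [ati] wins however the place constraints are ranked among themselves.

    SCALE = ["dorsal", "labial", "coronal", "glottal"]

    def profile(place):
        """[*Margin/Glot] followed by the four stringency constraints of
        (38), in scale order; their mutual order does not affect the winner."""
        margin_glot = 1 if place == "glottal" else 0
        stringency = [1 if place in SCALE[:i + 1] else 0
                      for i in range(len(SCALE))]
        return [margin_glot] + stringency

    candidates = {"aki": "dorsal", "api": "labial",
                  "ati": "coronal", "aʔi": "glottal"}
    print(min(candidates, key=lambda c: profile(candidates[c])))   # -> ati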

Epenthetic homorganic semivowels (e.g. [w] following /u/, [j] following /i/) are predicted in de Lacy’s theory in cases where further markedness constraints requiring consonants (including epenthetic ones) to agree in their place and manner features with adjacent vowels are highly ranked. Additional markedness constraints generate a few further predicted possibilities in languages in which they are highly ranked. In all, the model predicts the following restricted range of epenthetic consonants in hiatus contexts: [ʔ t h ɹ r w j]. De Lacy claims that this set corresponds to the attested range of possibilities.

While de Lacy’s theory provides a detailed, plausible, and comprehensive OT account of consonant epenthesis, it is unlikely to be the last word on the subject. The topic of markedness (both with respect to epenthesis and other areas) has been an extremely complex and controversial one. Among other things, some phonologists (see Hume 2003; Rice 2007; chapter 12: coronals; chapter 22: consonantal place of articulation) have questioned the claim that glottal (or any other) place of articulation is universally unmarked, arguing that either dorsal or labial (as well as coronal) can also function as the unmarked place in some languages. If this is correct, it would suggest the possibility of epenthetic consonants such as [p] or [k] as well. It remains, perhaps, to be seen whether such cases exist. De Lacy discusses several reported cases, but argues that they are better analyzed in other terms (for example because putative epenthetic consonants in some such cases can be treated as present underlyingly).

At present, a clear understanding of the typology of consonant epenthesis is arguably somewhat clouded by lack of clear consensus on relevant empirical generalizations. Considerable disagreement exists over the interpretation of patterns in some individual languages, a famous example being the question of whether the “intrusive r” phenomenon found in some English dialects (e.g. the pronunciation of saw it as [sɔɹɪt] in some Eastern Massachusetts dialects, including my own) constitutes epenthesis (see de Lacy 2006, Lombardi 2002, and Uffmann 2007 for discussion of this and other cases). Certain cross-linguistic generalizations have also been disputed. For example, while glottal stop is widely regarded as a frequent choice of hiatus interrupter, Uffmann (2007) proposes that glottal stops are not typically inserted primarily to avoid hiatus, but are generally used (German is cited as one example) to provide an onset in prosodically strong positions, e.g. word-initially or before a stressed vowel, where they function to create a maximized sonority contrast with the following vowel. (This account crucially assumes that glottal stops are the lowest-sonority consonants, which is exactly the opposite of what de Lacy assumes.) Undoubtedly, there will be further debate over some of the relevant empirical generalizations, as well as their appropriate theoretical treatment.


3.4 The problem of gradience

An important distinction in most phonological theories is the distinction between categorical and gradient processes (see chapter 89: gradience and categoricality in phonological theory). A categorical change involves a clear “either-or” shift in the presence of one or more segments or their features, as in a case where an underlying segment is removed completely (elision) or undergoes changes in the binary values of one or more features. Frequently, however, languages manifest gradient processes that involve changes in the degree of some feature, e.g. a phonemically oral vowel is slightly nasalized next to a nasal consonant but remains less nasal than phonemic nasal vowels in the same language. In hiatus contexts, a possible gradient change might involve the “near elision” of one of the adjacent vowels, e.g. a case where an underlying /V1 V2/ sequence is realized phonetically as V2 (perhaps with lengthening) preceded by a short and variable remnant of V1.

Hiatus resolution processes have most often been described and analyzed in terms that suggest categorical changes. However, two recent instrumental studies, Baltazani (2006) and Zsiga (1993, 1997), have shown that hiatus resolution patterns (glide formation and/or vowel elision) that had previously been described as categorical in two languages, Modern Greek and Igbo, respectively, actually involve gradient and highly variable timing adjustments. For reasons of space, we will consider only the Igbo case here. Sequences of adjacent vowels arise very commonly in Igbo in cases where a word ending in a vowel precedes a word beginning in a vowel, as in the phrases shown below (from Zsiga 1997; the diacritics mark [−ATR] vowels). (41)

/ịsịtọ ịtọ/   ‘three sevens’
/ọtụ ọzọ/    ‘another grub’
/ezi ịtọ/    ‘three loans’
/ede ịtọ/    ‘three coco-yams’

Three Igbo subjects in Zsiga’s (1993) study each produced six repetitions of each of these and various similar phrases in which one of the eight Igbo vowels occurs word-finally before one of the words [ịtọ] ‘three’ or [ọzọ] ‘another’. (In all, each of the eight vowels was used in two utterances.) Vowel formant measurements of the digitized recordings showed extreme variation, even for the same utterance produced by the same speaker, in the realization of the underlying vowel sequences. These ranged from tokens showing essentially no deletion or assimilation (i.e. in which both vowels clearly surface) to those showing complete loss of V1 (i.e. with the output consisting entirely of a lengthened version of V2). If all the observed outcomes were of one of these two types, this might suggest a categorical but optional rule eliding V1 with compensatory lengthening of V2 (or a rule of total assimilation of V1 to V2). Importantly, however, the results show a range of intermediate realizations as well, in which formant values near the beginning of the vocalic span show a quality intermediate between V1 and V2. This intermediate quality varies across repetitions of the same utterance from one that is more similar to V1 to one that is more similar to V2. Zsiga argues that such findings are not easily reconcilable


with an analysis that treats hiatus resolution as optional but categorical, and that the process is better understood as an adjustment in the relative timing of V1 and V2. More specifically, achievement of the target articulatory gestures for V2 varies from relatively late (allowing for a more or less normal manifestation of a preceding V1) to relatively early (resulting in partially assimilated tokens) to virtually at the release of the preceding consonant (in which case V1 is essentially gone). Seen from this perspective, superficial instances of categorical deletion in some of the tokens are better regarded as simply the extreme endpoint of a process that applies along a continuum. Though specific proposals vary, it has been widely assumed that the familiar kinds of phonological rules and/or constraints standardly used in the analysis of categorical processes are not appropriate to the treatment of gradient sound changes. Zsiga analyzes gradient hiatus resolution in Igbo using the framework of articulatory phonology (Browman and Goldstein 1990), a model that is well suited to handling variable adjustments in the relative timing of gestures. In addition to highlighting the importance of (and need for additional) explicit theoretical treatments of gradient changes in hiatus contexts, these studies raise an important empirical issue as well. Hiatus resolution in both Igbo and Modern Greek had been described in some previous studies as categorical. This raises the possibility (see Zsiga 1997: 265) that other hiatus resolution processes that have been described as categorical in the literature might turn out to be gradient upon closer examination. Studies such as Baltazani’s and Zsiga’s underscore the need for careful attention to the possibility of gradience in the context of descriptive phonological fieldwork.
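Zsiga's timing-based interpretation can be caricatured numerically. In the toy model below (an illustration only: the formant targets are invented, and linear interpolation is far cruder than a gestural model), sliding the achievement of V2's target earlier produces a continuum whose endpoints mimic a faithful V1 V2 sequence and apparent categorical deletion of V1.

    V1 = (700.0, 1300.0)   # hypothetical F1, F2 targets for V1 (Hz)
    V2 = (300.0, 2300.0)   # hypothetical F1, F2 targets for V2 (Hz)

    def onset_quality(t):
        """Formants at the start of the vocalic span when V2's target is
        reached at relative time t (1.0 = late, 0.0 = at the C release)."""
        return tuple(a * t + b * (1.0 - t) for a, b in zip(V1, V2))

    for t in (1.0, 0.66, 0.33, 0.0):
        f1, f2 = onset_quality(t)
        print(f"t={t}: onset F1={f1:.0f} Hz, F2={f2:.0f} Hz")
    # t=1.0 looks like an intact V1; intermediate values give the partially
    # assimilated qualities Zsiga observed; t=0.0 looks like outright elision.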

4 Summary

Hiatus resolution patterns are extremely varied. This chapter has provided a brief and necessarily selective look at some of the variation that occurs in the behavior of particular hiatus resolution processes and in their co-occurrence and interaction. The range of explanatory models that have arisen in connection with hiatus resolution phenomena is also very broad. We have looked at a sample of theoretical proposals from several time periods, including early generative treatments, autosegmental analyses, and several OT models. The central research questions have varied somewhat from model to model. Whereas rule formalism and related issues were a central concern in early generative analyses, autosegmental analyses used more elaborated phonological representations to suggest new solutions to problems such as compensatory lengthening, the featural output of coalescence, and the role of syllable structure in hiatus resolution. Issues that have arisen within OT include the primary markedness constraint that triggers hiatus resolution, the constraint rankings that determine which of two adjacent vowels elides, and the problem of accounting for the range of consonants that can function epenthetically as hiatus interrupters. Finally, we have looked briefly at an issue, gradient hiatus-related processes, which poses potentially important theoretical and empirical challenges for any approach.


ACKNOWLEDGMENTS

I am grateful to Beth Hume, Marc van Oostendorp, Keith Snider, and two reviewers for their valuable comments and suggestions on an earlier version of this paper, and to Sudharsan Seshadri Nagarajan for assistance with relevant library research.

REFERENCES

Adive, John Raji. 1989. The verbal piece in Ebira. Arlington: Summer Institute of Linguistics & University of Texas at Arlington.
Adjekum, Grace, Mary E. Holman & Thomas W. Holman. 1993. Phonological processes in Anufɔ. Legon: Institute of African Studies, University of Ghana.
Ahmad, Zaharani. 2001. Onset satisfaction and violation in Malay: An optimality account. In Graham W. Thurgood (ed.) Papers from the 9th Annual Meeting of the Southeast Asian Linguistics Society, 1999, 135–159. Tempe: Arizona State University Program for Southeast Asian Studies.
Alderete, John. 2003. Structural disparities in Navajo word domains: A case for LexCat-Faithfulness. The Linguistic Review 20. 111–157.
Aoki, Paul K. 1974. An observation of vowel contraction in Xhosa. Studies in African Linguistics 5. 223–241.
Awobuluyi, Oladele. 1972. The morphophonemics of Owon Afa. Research Notes, Department of Linguistics and Nigerian Languages, University of Ibadan 5. 25–44.
Baković, Eric. 2003. Conspiracies and morphophonological (a)symmetry. Paper presented at the 1st Old World Conference in Phonology, Leiden. http://works.bepress.com/cgi/viewcontent.cgi?article=1041&context=ebakovic (April 2010).
Baković, Eric. 2007. Hiatus resolution and incomplete identity. In Fernando Martínez-Gil & Sonia Colina (eds.) Optimality-theoretic studies in Spanish phonology, 62–73. Amsterdam & Philadelphia: John Benjamins.
Balogné Bérces, Katalin. 2006. What’s wrong with vowel-initial syllables? SOAS Working Papers in Linguistics 14. 15–21.
Baltazani, Mary. 2006. Focusing, prosodic phrasing, and hiatus resolution in Greek. In Louis M. Goldstein, Douglas Whalen & Catherine T. Best (eds.) Laboratory phonology 8, 473–494. Berlin & New York: Mouton de Gruyter.
Bergman, Richard. 1968. Vowel sandhi in Igede and other African languages. M.A. thesis, Hartford Seminary Foundation.
Bliese, Loren F. 1981. A generative grammar of Afar. Arlington: Summer Institute of Linguistics.
Borroff, Marianne L. 2003. Against an Onset approach to hiatus resolution. Paper presented at the 77th Annual Meeting of the Linguistic Society of America, Atlanta (ROA-586).
Borroff, Marianne L. 2007. A landmark underspecification account of the patterning of glottal stop. Ph.D. dissertation, Stony Brook University.
Browman, Catherine P. & Louis Goldstein. 1990. Tiers in articulatory phonology, with some implications for casual speech. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 341–376. Cambridge: Cambridge University Press.
Brown, Gillian. 1970. Syllables and redundancy rules in generative phonology. Journal of Linguistics 6. 1–17.
Casali, Roderic F. 1995. Patterns of glide formation in Niger-Congo: An optimality account. Paper presented at the 69th Annual Meeting of the Linguistic Society of America, New Orleans.


Casali, Roderic F. 1996. A typology of vowel coalescence. UC Irvine Working Papers in Linguistics 2. 29–42.
Casali, Roderic F. 1997. Vowel elision in hiatus contexts: Which vowel goes? Language 73. 493–533.
Casali, Roderic F. 1998. Resolving hiatus. New York & London: Garland.
Casali, Roderic F. 2003. [ATR] value asymmetries and underlying vowel inventory structure in Niger-Congo and Nilo-Saharan. Linguistic Typology 7. 307–382.
Causley, Trisha. 1999a. Complexity and markedness in Optimality Theory. Ph.D. dissertation, University of Toronto.
Causley, Trisha. 1999b. Faithfulness and contrast: The problem of coalescence. Proceedings of the West Coast Conference on Formal Linguistics 17. 117–131.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clements, G. N. 1986. Compensatory lengthening and consonant gemination in LuGanda. In Wetzels & Sezer (1986), 37–77.
Craene, Robert de. 1986. Le verbe conjugué en Tem. Studies in African Linguistics 17. 1–37.
de Lacy, Paul. 2006. Markedness: Reduction and preservation in phonology. Cambridge: Cambridge University Press.
Elimelech, Baruch. 1976. A tonal grammar of Etsako. UCLA Working Papers in Phonetics 35. Available at http://escholarship.org/uc/item/7qd5v492.
Emenanjo, E. N. 1972. Vowel assimilation in Igbo. Research Notes, Department of Linguistics and Nigerian Languages, University of Ibadan 5. 7–18.
Faraclas, Nicholas. 1982. Elision and other morpheme boundary phenomena in the western dialects of Obolo. Journal of West African Languages 12. 69–82.
Haas, Wim G. de. 1988. A formal theory of vowel coalescence: A case study of ancient Greek. Ph.D. dissertation, University of Nijmegen.
Halle, Morris. 1978. Further thoughts on Kasem nominals. Linguistic Analysis 4. 167–185.
Harms, Robert T. 1973. How abstract is Nupe? Language 49. 439–446.
Hedinger, Robert & Sylvia Hedinger. 1977. Phonology of Akɔɔse (Bakossi). Yaoundé: Summer Institute of Linguistics.
Howard, Irwin. 1973. A directional theory of rule application in phonology. Bloomington: Indiana University Linguistics Club.
Hulstaert, Gustaaf. 1970. Esquisse du parler des Nkengo. Tervuren: Musée Royal de l’Afrique Centrale.
Hume, Elizabeth. 2003. Language specific markedness: The case of place of articulation. Studies in Phonetics, Phonology and Morphology 9. 295–310.
Hyman, Larry M. 1973. Nupe three years later. Language 49. 447–452.
Hyman, Larry M. (ed.) 1979. Aghem grammatical structure. Southern California Occasional Papers in Linguistics 7.
Kaisse, Ellen M. 1977. Hiatus in Modern Greek. Ph.D. dissertation, Harvard University.
Katamba, Francis. 1985. A non-linear account of the syllable in Luganda. In Didier L. Goyvaerts (ed.) African linguistics: Essays in memory of M. W. K. Semikenke, 267–283. Amsterdam: John Benjamins.
Kimenyi, Alexandre. 1979. Studies in Kinyarwanda and Bantu phonology. Carbondale, IL: Linguistic Research Inc.
Kutsch Lojenga, Constance. 1994. Ngiti: A Central-Sudanic language of Zaire. Cologne: Rüdiger Köppe Verlag.
Lamontagne, Greg & Sam Rosenthall. 1996. Contiguity constraints and persistent vowel parsing. Unpublished ms., University of British Columbia & Ohio State University.
Lombardi, Linda. 2002. Coronal epenthesis and markedness. Phonology 19. 219–251.
McLaren, James. 1955. A Xhosa grammar. 2nd edn. Cape Town: Longmans, Green & Co.
Midtlyng, Patrick J. 2005. Washo morphophonology: Hiatus resolution at the edges or let them be vowels. Santa Barbara Papers in Linguistics 16. 50–63.


Mtenje, Al. 1980. Aspects of Chichewa derivational phonology and syllable structure. M.A. thesis, Southern Illinois University, Carbondale.
Nurse, Derek & Gérard Philippson. 1977. Tones in Old Moshi (Chaga). Studies in African Linguistics 8. 49–80.
Orie, Olanike Ola & Douglas Pulleyblank. 1998. Vowel elision is not always onset-driven. Unpublished ms., Tulane University & University of British Columbia.
Parker, Kirk. 1985. Baka phonology. Occasional Papers in the Study of Sudanese Languages 4. 63–85.
Parker, Steve. 1989. The sonority grid in Chamicuro phonology. Linguistic Analysis 19. 3–58.
Parkinson, Frederick B. 1996. The representation of vowel height in phonology. Ph.D. dissertation, Ohio State University.
Payne, David L. 1981. The phonology and morphology of Axininca Campa. Austin, TX: Summer Institute of Linguistics.
Phelps, Elaine. 1975. Simplicity criteria in generative phonology: Kasem nominals. Linguistic Analysis 4. 297–332.
Phelps, Elaine. 1979. Abstractness and rule ordering in Kasem: A refutation of Halle’s maximizing principle. Linguistic Analysis 5. 29–69.
Picard, Marc. 2003. On the emergence and resolution of hiatus. Folia Linguistica Historica 24. 44–57.
Plunkett, Gray C. 1991. The tone system of Foodo nouns. M.A. thesis, University of North Dakota.
Pulleyblank, Douglas. 1986. Underspecification and low vowel harmony in Okpe. Studies in African Linguistics 17. 119–153.
Pulleyblank, Douglas. 1988. Vowel deletion in Yoruba. Journal of African Languages and Linguistics 10. 117–136.
Pulleyblank, Douglas. 1998. Yoruba vowel patterns: Deriving asymmetries by the tension between opposing constraints. Unpublished ms., University of British Columbia.
Rice, Keren. 1995. On vowel place features. Toronto Working Papers in Linguistics 14. 73–116.
Rice, Keren. 2007. Markedness in phonology. In Paul de Lacy (ed.) The Cambridge handbook of phonology, 79–97. Cambridge: Cambridge University Press.
Rosenthall, Sam. 1994. Vowel/glide alternation in a theory of constraint interaction. Ph.D. dissertation, University of Massachusetts, Amherst.
Rosenthall, Sam. 1997. The distribution of prevocalic vowels. Natural Language and Linguistic Theory 15. 139–180.
Rubach, Jerzy. 2000. Glide and glottal stop insertion in Slavic languages: A DOT analysis. Linguistic Inquiry 31. 271–317.
Saloné, Sukari. 1980. Vowel coalescence and tonal merger in Chagga (Old Moshi): A natural generative approach. Studies in African Linguistics 11. 75–100.
Schane, Sanford A. 1987. The resolution of hiatus. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23(2). 279–290.
Senturia, Martha B. 1998. A prosodic theory of hiatus resolution. Ph.D. dissertation, University of California, San Diego.
Shaw, Patricia A. 1980. Theoretical issues in Dakota phonology and morphology. New York: Garland.
Snider, Keith L. 1985. Vowel coalescence across word boundaries in Chumburung. Journal of West African Languages 15. 3–13.
Snider, Keith L. 1989. Vowel coalescence in Chumburung: An autosegmental analysis. Lingua 78. 217–232.
Sohn, Hyang-Sook. 1987. On the representation of vowels and diphthongs and their merger in Korean. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23. 307–323.
Tchagbale, Zakari. 1976. Phonologie et tonologie du tem. Ph.D. dissertation, Université de la Sorbonne Nouvelle, Paris.


Tranel, Bernard. 1992–1994. Tone sandhi and vowel deletion in Margi. Studies in African Linguistics 23. 111–183.
Tucker, Archibald N. 1962. The syllable in Luganda: A prosodic approach. Journal of African Languages 1. 122–166.
Uffmann, Christian. 2007. Intrusive [r] and optimal epenthetic consonants. Language Sciences 29. 451–476.
Ulrich, Charles H. 1993. The glottal stop in Western Muskogean. International Journal of American Linguistics 59. 430–441.
Walli-Sagey, Elisabeth. 1986. On the representation of complex segments and their formation in Kinyarwanda. In Wetzels & Sezer (1986), 251–295.
Westermann, Diedrich. 1930. A study of the Ewe language. London: Oxford University Press.
Wetzels, W. Leo & Engin Sezer (eds.) 1986. Studies in compensatory lengthening. Dordrecht: Foris.
Wiltshire, Caroline. 1992. Syllabification and rule application in harmonic phonology. Ph.D. dissertation, University of Chicago.
Zsiga, Elizabeth C. 1993. Features, gestures, and the temporal aspects of phonological organization. Ph.D. dissertation, Yale University.
Zsiga, Elizabeth C. 1997. Features, gestures, and Igbo vowels: An approach to the phonology–phonetics interface. Language 73. 227–274.

62 Constraint Conjunction

Megan J. Crowhurst

. . . conjunctive interaction is from a formal point of view entirely natural in OT; indeed, in an important sense, its absence would be unnatural. By this I mean simply that without conjunction, basic OT typologies are not strongly harmonically complete, but with conjunction, they are. (Smolensky 2006: 139)

Interactions among constraints in early Optimality Theory (Prince and Smolensky 1993) were limited to a mode of strict domination, represented by the connective “>>.” However, researchers soon noted that a theory of constraint interaction relying on strict domination did not adequately account for certain well-known sound behaviors, prompting arguments in favor of additional modes of interaction. Three main proposals for combining simple constraints into more complex ones using connectives other than “>>” were advanced: local conjunction, an analog of Boolean conjunction, and material implication.

1 Classical Optimality Theory

Optimality Theory (OT) made its debut in linguistics through the work of Prince and Smolensky (1993), who adapted a constraint-based model with a long history in other fields (including evolutionary biology, the information sciences, and economics) for the formal analysis of sound phenomena in language. In Prince and Smolensky’s model, input states are mapped to output states through a procedure in which a set of potential analyses of the input (output candidates) is passed through a filter consisting of an evaluator, Eval, and a hierarchy of constraints restricting properties of the output. The constraints are members of Universal Grammar, and the priorities assigned to them vary by language. OT constraints are soft constraints – any constraint can be violated, but only under pressure from a constraint with higher priority. The predicted output is the candidate that survives the filter. This candidate is optimal in that it represents the input–output mapping that best satisfies the hierarchy by minimizing violations of higher-ranking constraints at the expense of lower-ranking ones. According to the doctrine of strict domination, if two constraints A and B interact, then constraint A outranks B (or vice versa) in the hierarchy, notated as


A >> B (or B >> A). An interaction A >> B is exposed when the requirements of A and B are incompatible, and there is an input whose properties are such that no viable candidate satisfies both constraints. In such cases, the higher-ranking constraint A is easy to identify because its effect is evident, while B’s effect is obscured. (Constraint B may, however, take effect when A is not at stake.) The following tableau illustrates the evaluation of a mini-set of two candidates, each violating either A or B and not the other. Being higher-ranked, A acts on the candidate set first: the candidate which best satisfies A is kept, and the less successful candidate (in regard to constraint A) is rejected. In this simple example, the candidate that fares best on A is the optimal candidate. Were the rankings of A and B reversed, candidate 2 would be the winner instead. (1)

Constraint A >> Constraint B

Input                     | Constraint A | Constraint B
☞ a. output candidate 1   |              | *
   b. output candidate 2  | *!           |
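The evaluation just illustrated is, in effect, lexicographic comparison of violation vectors, and it can be rendered as a short program. The following minimal sketch (the constraint functions and candidate names are placeholders, not any published implementation) reproduces tableau (1).

    def evaluate(candidates, ranking):
        """Return the optimal candidate: the one whose violation counts,
        read off in ranking order, are lexicographically smallest."""
        return min(candidates, key=lambda cand: [c(cand) for c in ranking])

    # Placeholder constraints: candidate 1 violates only B, candidate 2
    # violates only A, exactly as in tableau (1).
    constraint_a = lambda cand: 1 if cand == "candidate 2" else 0
    constraint_b = lambda cand: 1 if cand == "candidate 1" else 0

    print(evaluate(["candidate 1", "candidate 2"],
                   [constraint_a, constraint_b]))   # -> candidate 1
    # Reversing the ranking selects candidate 2 instead:
    print(evaluate(["candidate 1", "candidate 2"],
                   [constraint_b, constraint_a]))   # -> candidate 2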

Given that OT is a theory of constraint interaction, an important issue naturally concerns the ways in which constraints might interact. As noted, for Prince and Smolensky and much subsequent work, only strict domination was sanctioned as a mode of constraint interaction.1 Over time, various researchers have challenged this doctrine in considering what other relationships might hold among constraints. Some researchers have proposed a relation of non-dominance, in which neither constraint A nor B dominates the other (Crowhurst 2001). Under non-dominance, violations of A and B together are evaluated cumulatively. Others have argued that constraints can exist in a free ranking relationship to account for optional sound phenomena (e.g. Reynolds 1994; Prince 2001; Itô and Mester 2003a; Jacobs 2004). This chapter reviews proposals that have addressed the question “Can simple constraints combine with one another, and if so, how?” Or, “What connectives other than ‘>>,’ if any, define relationships that can hold among constraints?”

2 Local conjunction: Rejecting the “worst of the worst”

The first proposal for combining constraints beyond the standard mode of strict domination, introduced in Smolensky (1993, 1995, 1997) and subsequently worked out in a series of presentations culminating in Smolensky (2006), was that elemental OT constraints can be locally conjoined to form a more complex constraint that is violated only if both of its members are violated in a specified domain. Other influential discussions of the details and applications of local conjunction appeared in a series of papers by Itô and Mester (e.g. 1996, 1998, 2003a), culminating in Itô and Mester (2003b). Smolensky’s definition of local conjunction appears in (2) (Smolensky 2006: 43).

1 Discussion here is limited to proposals advanced in the core OT phonological literature. Combinatorial devices have been used uncontroversially in probabilistic models of OT.

(2) Local conjunction within a domain D
*A &D *B is violated if and only if a violation of *A and a (distinct) violation of *B both occur within a single domain of type D.
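Definition (2) has a direct computational rendering. In the sketch below (an illustration with invented helper functions, anticipating the coda-condition example of §2.1), a constraint returns the set of domains, here segment indices, at which it is violated, and the local conjunction is violated exactly at the domains shared by its conjuncts.

    VOICED_OBSTRUENTS = set("bdgvz")

    def voi_obst_marks(segs):
        """*VoiObst-style marks: indices of voiced obstruents."""
        return {i for i, s in enumerate(segs) if s in VOICED_OBSTRUENTS}

    def coda_marks(segs):
        """NoCoda-style marks, with crude syllabification: only a word-final
        consonant counts as a coda in this toy."""
        return {len(segs) - 1} if segs and segs[-1] not in "aeiouə" else set()

    def locally_conjoin(a, b):
        """*A &D *B: violated iff both conjuncts are violated within the
        same domain (the individual segment, in this example)."""
        return lambda segs: a(segs) & b(segs)

    coda_cond = locally_conjoin(coda_marks, voi_obst_marks)
    print(coda_cond("lib"))    # {2}: a voiced obstruent coda violates it
    print(coda_cond("lip"))    # set(): voiceless coda, no shared domain
    print(coda_cond("libə"))   # set(): the b is an onset, no coda mark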

In evaluating local conjunctions, Eval returns a mark “*” only when both conjuncts are violated, (3a). Some authors have observed (e.g. Hewitt and Crowhurst 1996) that local conjunction is in fact analogous to logical disjunction: “*” is equivalent to False (F), and the absence of a mark is equivalent to True (T). This can be seen by comparing (3a) with the truth table for logical disjunction. (3)

a. Evaluation of a local conjunction

        | C1 | C1&DC2 | C2
Cand1   |    |        |
Cand2   | *  |        |
Cand3   |    |        | *
Cand4   | *  | *!     | *

b. Logical disjunction

        | C1 | C1 ∨ C2 | C2
Cand1   | T  | T       | T
Cand2   | F  | T       | T
Cand3   | T  | T       | F
Cand4   | F  | F       | F

Itô and Mester (2003b: 24) and Smolensky (2006) assume that “&” is a combinatorial operation made available by UG to individual languages, which may activate “&” to derive complex constraints on a language-specific basis. On this view, all of the constraints specified in ConUG, the universal set, plus any language-specific local conjunctions, are mapped onto a larger, language-specific constraint set, ConG. So, “>>” determines strictly hierarchical rankings, while “&” combines constraints into “superconstraints,” which can then be inserted into hierarchies defined by “>>.”2 The role of the “&” operator in grammars is formally defined by Itô and Mester (2003b: 25) as in (4). (4)

Role of local conjunction in grammars
A grammar G can expand the basic constraint set Con inherited from Universal Grammar to a superset ConG = Con ∪ {C1&C2}, for C1, C2 ∈ Con. Expansion is potentially recursive, so that ConG can in turn be extended to a superset ConG′ by adding C3&C4 to ConG, for C3, C4 ∈ ConG, and so on.

2 But see Itô and Mester (1998) and Baković (2000) for a slightly different view.


The main goal of local conjunction is to derive empirical generalizations about markedness from irreducible principles. Constraint conjunction and other proposals for combining constraints have been criticized on the grounds that they greatly increase the expressive power of the formal architecture of OT by exponentially expanding the constraint set. However, the insights, the improvements in precision, and to some extent the economies achieved with local conjunction have often been impressive.

2.1 Coda conditions

The earliest works to employ local conjunction (e.g. Itô and Mester 1996; Spaelti 1997; Smolensky 2006) noted that the mechanism could be used to advantage in analyzing coda conditions, constraints which impose strict conditions on syllable codas rather than penalizing them outright (chapter 33: syllable-internal structure; chapter 53: syllable contact). Coda conditions have often been captured by the schema CodaCond[a], where a denotes restricted features. A common requirement is for obstruents in coda positions to be voiceless, other factors being equal (e.g. German, Turkish, Russian; see chapter 69: final devoicing and final laryngeal neutralization). The coda condition for voicing appears in (5). (5)

CodaCond[voi]
*C]σ, where C is [−sonorant, +voice]
(One * for any voiced obstruent syllabified exclusively as a syllable coda.)

Early work promoting local conjunction noted that coda conditions can be treated as local conjunctions of two well-established constraints, NoCoda in (6a) on the one hand, and a markedness constraint such as (6b) on the other, to yield (6c), with the segment as the local domain of evaluation (Smolensky 1993, 2006; Itô and Mester 2003b; see also Morris 2002). (6)

a. NoCoda: Syllables do not have codas. (One * per syllable with a coda.)
b. *VoiObst: *[−sonorant, +voice]. (One * for any voiced obstruent.)
c. NoCoda &SEG *VoiObst: One * for any segment which is in a syllable coda and which is a voiced obstruent.

Local conjunctions such as (6c) capture the intuition that constraints expressing coda conditions are really more restrictive versions of NoCoda. A standard assumption has been that a local conjunction is ranked above its conjuncts, and this is consistent with the understanding that the effects of special constraints are visible only when some constraint intervenes between the special constraint and a related, less highly ranked general constraint. In the case under discussion,


restrictions on codas are visible in a grammar only when the following are true: (i) MaxIO(Seg) outranks both instantiations of NoCoda – the unconjoined version and the local conjunction in (6c) – so that the effects of the conjunction are not obscured by deletion; and (ii) IdentIO[voi] ranks above the unconjoined constraint *VoiObst, allowing a surface obstruent voicing contrast, but below (6c), so that only voiceless obstruents will occur in coda position, unless higher-ranking constraints demand otherwise (as in cases of regressive voice assimilation). A tableau making this point is given in (7) (adapted from Itô and Mester 2003b: 27–28). (7)

/liːb/        | MaxIO(Seg) | NoCoda &SEG *VoiObst | IdentIO[voi] | NoCoda | *VoiObst
☞ a. liːp     |            |                      | *            | *      |
   b. liːb    |            | *!                   |              | *      | *
   c. liː     | *!         |                      |              |        |

/liːbə/       | MaxIO(Seg) | NoCoda &SEG *VoiObst | IdentIO[voi] | NoCoda | *VoiObst
☞ a. liː.bə   |            |                      |              |        | *
   b. liː.pə  |            |                      | *!           |        |
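The two tableaux in (7) can be reproduced with a toy evaluator. The sketch below hand-codes the ranking MaxIO(Seg) >> NoCoda &SEG *VoiObst >> IdentIO[voi] >> NoCoda, *VoiObst and simplifies syllabification to "a word-final consonant is a coda"; everything here is illustrative rather than Itô and Mester's own implementation.

    VOICED, VOICELESS = set("bdgvz"), set("ptk")
    VOWELS = "aeiouə"

    def counts(inp, out):
        coda_final = bool(out) and out[-1] not in VOWELS
        conj = 1 if coda_final and out[-1] in VOICED else 0  # NoCoda & *VoiObst
        deleted = max(len(inp) - len(out), 0)                # MaxIO(Seg), crudely
        ident = sum(1 for a, b in zip(inp, out)
                    if (a in VOICED and b in VOICELESS)
                    or (a in VOICELESS and b in VOICED))     # IdentIO[voi]
        voi_obst = sum(1 for s in out if s in VOICED)
        return [deleted, conj, ident, int(coda_final), voi_obst]

    def optimum(inp, cands):
        return min(cands, key=lambda c: counts(inp, c))

    print(optimum("lib", ["lip", "lib", "li"]))   # -> lip: coda devoicing
    print(optimum("libə", ["libə", "lipə"]))      # -> libə: onset b survives

The same machinery also shows why the conjunction in (8) is inert: a violation of Onset means there is no onset segment for *VoiObst to mark, so the two sets of marks can never fall on the same segment.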

Three advantages to the reformulation of (5) as (6c) are immediately apparent. First, locally conjoining *VoiObst with NoCoda avoids the redundancy that occurs when *VoiObst is stated twice, once as the general feature co-occurrence constraint in (6b), and once again, embedded in the coda condition, (5). Second, restating a given CodaCond constraint as the local conjunction of NoCoda with a standard markedness constraint explains why we have restrictions on coda consonants, but no constraints that express similar restrictions on onsets (Itô and Mester 2003b: 29). In a grammar that allows conjunction, nothing prevents the local conjunction of Onset and a markedness constraint like *VoiObst, but as tableau (8) shows, a conjunction like Onset &SEG *VoiObst could never be violated: if Onset is violated, then there is no onset consonant to check against *VoiObst. Conversely, if *VoiObst is violated by a consonant in onset position, then Onset is clearly not violated. (8)

/bit/     | Onset &SEG *VoiObst | *VoiObst | Onset
   a. bit |                     | *!       |
☞ b. pit  |                     |          |
   c. it  |                     |          | *!

This latter result of local conjunction weighs strongly in its favor: given that *VoiObst encodes a standard generalization about markedness, and given that coda conditions stated in traditional terms following the model of (5) are common, it is notable that no evidence has been found for mirror image constraints imposing comparable restrictions on segments in onset position. The traditional approach


has been to assume that a formal asymmetry matches the empirical one – that is, that there are no Onset conditions, or constraints imposing CodaCond-like requirements on segments in onset position. As we have just seen, the local conjunction approach requires no formal asymmetry: local conjunctions like Onset &SEG *VoiObst might exist, but as they can never be active in selecting surface candidates, they will never be rankable, and their existence is moot.

2.2 Universal markedness hierarchies

Local conjunction has also been used successfully in explaining the fact that universal markedness hierarchies are preserved in multiple domains. Much work establishes the place harmony scale in (9a) (chapter 22: consonantal place of articulation). In OT grammars, this scale is expressed by the hierarchy in (9b). The subscript “UG” indicates that this ranking is “fixed,” or specified in universal grammar, and does not vary across languages. (9)

a. Coronal > Labial, Dorsal
b. *Lab, *Dors >>UG *Cor

The place markedness hierarchy in (9) is known to be preserved in different domains, and Smolensky (1993, 2006) advances a detailed argument as to why this might be so: given that (9b) is fixed in UG, any hierarchy in which the constraints in (9b) are locally conjoined with other constraints will be similarly fixed. To continue with the example of coda conditions, various authors, including Smolensky (1993, 2006), Zoll (1998), and Itô and Mester (2003b), have observed that locally conjoining the markedness constraints in (9b) with NoCoda produces the hierarchy in (10), which favors coronals over labials and dorsals in syllable codas. (10)

NoCoda &SEG *Lab, NoCoda &SEG *Dors >>UG NoCoda &SEG *Cor

Smolensky (2006) uses the same reasoning to explain the commonly observed proliferation of segmental contrasts among coronal consonants in many segmental inventories, relative to the labial and dorsal classes (chapter 12: coronals). As an example, consider the consonant inventory of Tohono O’odham (Uto-Aztecan) in (11). (11)

Tohono O’odham consonants

labial: p b m w
coronal – dental: t d s n
coronal – retroflex: ɖ ʂ ɭ
coronal – palato-alveolar: tʃ dʒ ɲ j
velar: k g
glottal: ʔ h

The coronal class dominates the inventory: 11 of the 19 consonants are coronals. Note also that the coronals are represented by three places and three manners of articulation (stops, fricatives, and affricates) in the obstruent class, whereas the labials and velars are represented only by stops. To abbreviate Smolensky’s point, if a single-feature markedness constraint like *[+cont] (specified for fricatives


and affricates; see chapter 28: the representation of fricatives; chapter 16: affricates) can be locally conjoined with the fixed hierarchy in (9b), yielding (12), then a grammar that interposes a constraint IdentIO[Place] above *Cor &SEG *[+cont] produces an inventory that has [+continuant] coronals, but no continuant obstruents at other places of articulation. (12)

*Lab &SEG *[+cont], *Dors &SEG *[+cont] >>UG *Cor &SEG *[+cont]

2.3 Feature harmonies

Local conjunction has also been used to account for patterns of vowel and consonant harmony by locally conjoining markedness constraints to form feature domains (see also chapter 91: vowel harmony: opaque and transparent vowels; chapter 72: consonant harmony in child language; chapter 77: longdistance assimilation of consonants; chapter 118: turkish vowel harmony; chapter 123: hungarian vowel harmony). In Smolensky (2006), any contiguous sequence of segments which share one or more phonological features forms a feature domain, and any feature domain is a possible instantiation of the domain of a conjunction. An abbreviated version of Smolensky’s (2006: 64) definition of a feature domain is given in (13). (13)

Definition of feature domain (“z” stands for any feature)
A maximal contiguous span of z-bearers with a common value [±z] is a [±z] feature domain D[±z]. (Thus, by definition, contiguous domains of the same z value are impossible.)
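Definition (13) is essentially a run-length computation, as the following sketch shows (the [ATR] specifications are supplied by hand, and the vowels-only representation is a simplification).

    from itertools import groupby

    # Invented [ATR] values, matching the Lango vowel sets discussed below.
    ATR = {"i": "+", "e": "+", "ə": "+", "o": "+", "u": "+",
           "ɪ": "-", "ɛ": "-", "a": "-", "ɔ": "-", "ʊ": "-"}

    def atr_domains(vowels):
        """Maximal contiguous [±ATR] spans over a vowel sequence; adjacent
        domains necessarily differ in value, as (13) requires."""
        return [(value, "".join(run))
                for value, run in groupby(vowels, key=lambda v: ATR[v])]

    print(atr_domains("ei"))   # [('+', 'ei')]: one domain, cf. (15a) below
    print(atr_domains("ɛu"))   # [('-', 'ɛ'), ('+', 'u')]: two domains, cf. (15b)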

Smolensky’s use of the feature domain in OT builds on earlier work (e.g. Kirchner 1993; Smolensky 1993; Cole and Kisseberth 1994; Cassimjee and Kisseberth 1998), but the insight that feature domains can be analyzed in terms of local conjunctions of constraints is due to Smolensky. Examples of Smolensky’s (2006) use of feature domains as restrictors on local conjunction are seen in his treatments of vowel harmony and restrictions on consonant clusters. Smolensky (2006) provides an extended discussion of vowel harmony, with special attention given to a pattern of source-conditioned [ATR] harmony found in Lango (Nilotic: Okello 1975; Bavin Woock and Noonan 1979; Noonan 1992; Archangeli and Pulleyblank 1994). The Lango facts are complex; for our purposes, a fragment of Smolensky’s account will serve to make the point. Lango has the [+ATR] vowels [i e ə o u] and the [−ATR] vowels [ɪ ɛ a ɔ ʊ] (Noonan 1992). The examples in (14) show a root with a [−ATR] mid vowel /ɛ/ combining with a suffix containing a [+ATR] high vowel, /u/ or /i/. (Tone is omitted in these examples.) In (14a), we see that /ɛ/ assimilates [+ATR] when the suffix vowel is /i/, but not when it is /u/, as in (14b). (14)

Lango [ATR] harmony
a. Regressive [+ATR] harmony: /dɛk + Ci/ → dek.ki ‘your (sg) stew’
b. No harmony: /dɛk + wu/ → dɛk.wu ‘your (pl) stew’


In autosegmental terms, [+ATR] spreads regressively from /i/ to /ɛ/ in [dek.ki], forming a [+ATR] feature domain whose head is the source vowel [i], as shown in (15a). (Smolensky marks heads of feature domains with a superscript “o”; we will adopt the convention of underlining heads of feature domains.) The form [dɛk.wu], where no assimilation has applied, has both a [−ATR] and a [+ATR] feature domain. (15)

a. [dek ki]: a single [+ATR] domain spanning both syllables, headed by the source vowel [i]
b. [dɛk] [wu]: a [−ATR] domain ([dɛk], headed by [ɛ]) followed by a [+ATR] domain ([wu], headed by [u])

According to Archangeli and Pulleyblank’s (1994) autosegmental analysis of Lango, [+ATR] spreads regressively from a [+high] source vowel to a target vowel if one of the following is true: (i) the source vowel is [+front] and the target is any vowel in either an open or closed syllable; (ii) the source vowel is not [+front], source and target vowels are both [+high], and the target is in either an open or closed syllable; or (iii) the source vowel is not [+front], and the target is not [+high], and the target is in an open syllable. [+ATR] spread in (14a) meets condition (i), but (14b) meets none of the conditions for harmony. Following Archangeli and Pulleyblank (1994), Smolensky’s analysis (2006) of Lango draws on the insight that ATR harmony is conditioned by the markedness of segments that combine [−ATR] and [+ATR] with other features. Well-established markedness constraints are shown in (16).3 (16)

a. [ATR] and backness
   *[+ATR, −front] (Avoid [+ATR] back vowels.)
   *[−ATR, +front] (Avoid [−ATR] front vowels.)
b. [ATR] and height
   *[+ATR, −high] (Avoid [+ATR] mid and low vowels.)
   *[−ATR, +high] (Avoid [−ATR] high vowels.)
c. *V[+ATR]C]σ (No [+ATR] vowels in closed syllables.)

Returning to the examples in (14), note that in [dek.ki], in which underlying /ɛ/ harmonizes with /i/, the outcome optimizes the constraint *[−ATR, +front], which prefers [e], but at the cost of violating *[+ATR, −high], which prefers [ɛ]. In [dɛk.wu], however, we see the opposite pattern of constraint satisfaction: [ɛ] in the output optimizes *[+ATR, −high] but violates *[−ATR, +front]. The critical difference is in the source vowel. Archangeli and Pulleyblank’s intuition, which Smolensky seeks to capture, is that less marked segments make better domain heads, and that when possible, better domain heads propagate their features through harmony. To the constraints in (16), then, Smolensky’s account adds *Hd[+ATR] and *Hd[−ATR], which penalize a segment for being the head of a [+ATR] or [−ATR] domain, respectively.

3 The constraints in (16), and similar constraints, were proposed as elements of the “grounded phonology” framework developed in Archangeli and Pulleyblank (1994).

(17)
a. Hd-L[−ATR]: A [−ATR] domain must be left-headed. (No regressive [−ATR] spread.)
b. (*[−ATR, +front] & *Hd[ATR]) &D[ATR] F[ATR]: No [+front] head of an unfaithful [−ATR] domain. (No [−ATR] spread from a [+front] vowel.)
c. *[−ATR, +high]: Vowels do not combine the features [−ATR] and [+high]. (One * for either of [ɪ ʊ].)
d. (*[+ATR, −front] & Hd-L[ATR]) &D[ATR] (*[+ATR, −high] & *V[+ATR]C]σ & F[ATR]): A [+ATR] domain with a [−front] head that is not leftmost must be faithful at a [−high] vowel in a closed syllable. (No regressive [+ATR] spread from a [−front] source onto a [−high] vowel in a closed syllable.)

The constraints in (17b) and (17d) are complex and require further explanation. The embedded local conjunction (*[−ATR, +front] & *Hd[ATR]) penalizes a vowel which is both the head of an ATR domain and has the features [−ATR, +front]. According to this restriction, the lax vowels [ɪ ɛ] cannot head an ATR domain. The syllable [dɛk] in [dɛk.wu] violates this requirement. Smolensky uses the expression F[ATR] in the “macro” local conjunction more or less as the more standard Ident[ATR] would be used, to require the segments in an ATR domain to be faithful to the value for [ATR] they came with. Conjoining (*[−ATR, +front] & *Hd[ATR]) and F[ATR], taking an ATR domain as the locus of violation, has the effect of minimizing ATR domains headed by [−ATR, +front] vowels. That is, the macro-conjunction prevents [−ATR] spread from a front vowel, and this is why /dɛk + wu/ surfaces as [dɛk.wu] and not [dɛk.wʊ]. By itself, the first embedded local conjunction in (17d), (*[+ATR, −front] & Hd-L[ATR]), would penalize a [+ATR] domain whose head, a [−front] vowel [u], [ə], or [o], is not leftmost within the domain.4 The second embedded local conjunction, (*[+ATR, −high] & *V[+ATR]C]σ & F[ATR]), penalizes an unfaithful [+ATR] domain which contains a closed syllable whose vowel is [−high]. The macro-conjunction formed by locally conjoining the two smaller local conjunctions with an ATR domain as the locus of violation means just this: for a [+ATR] domain whose head is on the right and is one of the vowels [u ə o] to pass the macro-conjunction, there cannot be further to the left a closed syllable with a [−high] vowel which is unfaithful through assimilation to the head. This is why regressive [+ATR] spread does not apply in /dɛk + wu/, so that on the surface we find [dɛk.wu] and not [dek.wu]. Tableau (18) shows why /dɛk + wu/ surfaces as [dɛk.wu] and not [dɛk.wʊ] or [dek.wu] (i.e. why we have neither regressive [+ATR] harmony nor progressive [−ATR] harmony in this case, under Smolensky’s analysis).

4 For example, a [+ATR] domain such as e . . . u would be penalized. Three types of ATR domain are permitted by the local conjunction (*[+ATR, −front] & Hd-L[ATR]). If the head is one of the set [i e ɪ ɛ a ɔ ʊ] (i.e. anything other than [u ə o]), then the head is rightmost or leftmost in the ATR domain. The head can be one of the set [u ə o] if it is domain-initial.

(18)

Lango: No regressive [+ATR] harmony

/dɛk+wu/       | (17d) | Hd-L[−ATR] | (17b) | *[−ATR, +hi] | Agree[ATR]
   a. dek.wu   | *!    |            |       |              |
☞ b. dɛk.wu    |       |            |       |              | *
   c. dɛk.wʊ   |       |            | *!    | *            |

Tableau (19) shows how the analysis works for [dek.ki] (from /dɛk + Ci/). In this case, progressive [−ATR] harmony is blocked just as for [dɛk.wu], and for the same reason. However, in [dek.ki], regressive [+ATR] harmony is not blocked, because in this case, neither the head (a front vowel this time) nor the target of assimilation (the vowel /ɛ/, in an open, not closed syllable) violates the macro-conjunction in (17d). (19)

Lango: Regressive [+ATR] harmony

/dɛk+Ci/       | (17d) | Hd-L[−ATR] | (17b) | *[−ATR, +hi] | Agree[ATR]
☞ a. dek.ki    |       |            |       |              |
   b. dɛk.ki   |       |            |       |              | *!
   c. dɛk.kɪ   |       |            | *!    | *            |
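The discriminating work done by (17b) and (17d) in tableaux (18) and (19) can be compressed into a small checker. The sketch below hard-codes the relevant feature classes and the two-vowel candidate space, and omits Hd-L[−ATR] (which assigns no decisive marks in these tableaux); it is a drastic simplification of Smolensky's system, intended only to show that the constraint definitions, as stated, pick the right winners.

    FRONT = set("ieɪɛ"); HIGH = set("iuɪʊ"); ATR = set("ieəou")  # [+ATR] vowels

    def marks(u1, u2, v1, v2, closed1=True):
        """Marks for a candidate with root vowel v1 and suffix vowel v2 from
        underlying u1, u2; closed1 says whether v1's syllable is closed."""
        return [
            int(v1 != u1 and v1 in ATR            # (17d): regressive [+ATR]
                and v2 not in FRONT               # spread from a [-front] head
                and v1 not in HIGH and closed1),  # onto [-high] V, closed syll.
            int(v2 != u2 and v2 not in ATR        # (17b): [-ATR] spread
                and v1 in FRONT),                 # from a [+front] head
            sum(1 for v in (v1, v2)
                if v not in ATR and v in HIGH),   # *[-ATR, +high]
            int((v1 in ATR) != (v2 in ATR)),      # Agree[ATR]
        ]

    def optimum(u1, u2, cands):
        return min(cands, key=lambda c: marks(u1, u2, *c))

    print(optimum("ɛ", "u", [("e", "u"), ("ɛ", "u"), ("ɛ", "ʊ")]))  # ('ɛ','u'): (18)
    print(optimum("ɛ", "i", [("e", "i"), ("ɛ", "i"), ("ɛ", "ɪ")]))  # ('e','i'): (19)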

The brief discussion offered here falls short of representing the very complicated pattern of [ATR] harmony found in Lango, or of Smolensky’s (2006) minutely detailed account of the same. The goal here has been to illustrate a proposal for combining constraints that are already complex into a macro-constraint. More detailed discussions of the Lango pattern can be found in Noonan (1992), Archangeli and Pulleyblank (1994), and Smolensky (2006). Another very common type of restriction can be seen in conditions on consonant clusters: in languages that permit consonant clusters at all, coronals tend to combine with other consonants much more permissively than do labials and dorsals (chapter 46: positional effects in consonant clusters). For example, English allows coronals to cluster with other coronals, with labials, and with dorsal consonants, as shown in (20a). However, with the exception of a few loans (e.g. Akbar, Afghan) and clusters formed across compound boundaries (e.g. [[cup][cake]], [[black][bird]]), labials and dorsals do not cluster with either labials or dorsals, so that (with the regular exception of homorganic nasal + obstruent sequences), clusters such as those in (20b) are generally disallowed. (20)

a. Cor with Cor: state, holder, chortle, adze, parlour, Atlantic, bolster, trap
   Cor with Lab: apt, Abner, abduct, help, spry, almond, arm, atmosphere
   Cor with Dors: acne, task, silk, argue, disclose, alcohol, agree, Atkins

b. Lab with Lab: *pb, *bp; *pm, *bm; *pf, *bv, *fp, etc.
   Dors with Dors: *kg, *gk
   Dors with Lab: *kp, *pk, *gb, *bg; *km, *gm; *kf, *fk, *gv, *vg, etc.

The asymmetric distribution of place in consonant clusters is another effect of the place hierarchy in (9). Smolensky (2006) shows that these effects can be accounted for by locally conjoining the constraints in (9b) with one another to produce a cluster place markedness hierarchy like that in (21) ((21) expands Smolensky’s hierarchy to include dorsals). (21)

Place markedness hierarchy for two-consonant clusters
{*[Lab]&CL*[Lab], *[Dors]&CL*[Lab], *[Dors]&CL*[Dors]} >>UG {*[Lab]&CL*[Cor], *[Dors]&CL*[Cor]} >>UG *[Cor]&CL*[Cor]

In his analysis of phonotactic conditions on consonant sequences, Smolensky (2006) identifies the domain of local conjunction as a consonant cluster, CL. However, a consonant cluster per se is not a unit of phonological structure. Although Smolensky doesn’t discuss consonant clusters in this light, any sequence of segments whose members share features can be thought of as instantiating feature domains. An obstruent cluster, for example, would instantiate the D[±z] formed by the feature [−sonorant], and a nasal–stop cluster would correspond to the feature domain [−continuant].

2.4 Counterfeeding effects

One of the more striking advantages claimed for local conjunction is its usefulness in accounting for counterfeeding effects. Moreton and Smolensky (2002) show, refining a proposal in Kirchner (1996), that local conjunction can account for synchronic chain shifts of the type A → B, B → C.5 In Western Basque, for example, when two identical vowels are juxtaposed, the low vowel /a/ raises to [e] before [a], and /e/ raises to [i] before [e], as shown in (22) (Kirchner 1996; Moreton and Smolensky 2002; Kawahara 2003).

(22) a. /a/ → [e] / __ V (violates Ident[low])
        alaba bat  ‘daughter (indef)’   alabea  ‘daughter (def)’
        neska bat  ‘girl (indef)’       neskea  ‘girl (def)’
     b. /e/ → [i] / __ V (violates Ident[high])
        seme bat   ‘son (indef)’        semie   ‘son (def)’
        ate bat    ‘door (indef)’       atie    ‘door (def)’

The Kirchner and the Moreton and Smolensky account goes as follows: if the unmarked vowel is [i], then we’d expect both /a/ and /e/ to raise to [i] in the shifting environment, other factors being equal. However, each vowel moves only one step up, so that /a/ changes its value for the feature [low], and /e/ changes its value for [high]. Neither adjustment results in a change of both features. This effect can be accounted for if a conjunction of the constraints Ident[low] and Ident[high] is ranked above a set of markedness constraints (presumably including at least one OCP constraint) that promote raising in a hiatus; call this set HR. Thus, either of the Ident constraints can be violated on its own, when they are ranked below HR, but no shift violating both constraints within a segment (the relevant domain) can occur. The tableaux in (23), adapted from Moreton and Smolensky (2002), illustrate the analysis for two identical sequences of vowels, /aa/ in /alaba-a/ and /ee/ in /seme-e/.

5 See also Beckman (2003) and Smolensky (2006: §5).

(23)
/alaba-a/      Ident[low] &SEG Ident[high]   HR   Ident[low]   Ident[high]
☞ a. alabea                                           *
   b. alabaa                                  *!
   c. alabia              *!                          *             *

/seme-e/       Ident[low] &SEG Ident[high]   HR   Ident[low]   Ident[high]
☞ a. semie                                                          *
   b. semee                                   *!

In the tableaux for [alabea] and [semie], we see that the sequences /aa/ and /ee/ are ruled out by the markedness constraints (the HR set). For /alabaa/, an output [alabia], with raising to [i], is rejected by the local conjunction – the output is the candidate that violates only one of the Ident constraints (in this case Ident[low]). In the case of /semee/, [semie] with a high vowel is possible because raising in this case violates only Ident[high] and not Ident[low]. Violating only one of the Ident constraints, [semie] does not violate the local conjunction.
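The one-step logic of the shift can be verified mechanically. The following sketch is a simplified Python rendering, not Moreton and Smolensky’s own formalization: the HR set is collapsed into a single toy constraint penalizing a faithful vowel in the shifting environment, and violation profiles are compared lexicographically under the ranking Ident[low] &SEG Ident[high] >> HR >> Ident[low] >> Ident[high].

```python
# Simplified sketch of the chain-shift ranking; vowel classes are a toy
# stand-in for the feature specifications of /a e i/.

LOW, HIGH = {'a'}, {'i'}

def ident(inp, out, vclass):
    # one violation if membership in the vowel class changed
    return int((inp in vclass) != (out in vclass))

def profile(inp, out):
    id_low = ident(inp, out, LOW)
    id_high = ident(inp, out, HIGH)
    conj = int(id_low and id_high)   # local conjunction, domain = segment
    hr = int(out == inp)             # toy HR: faithful hiatus is penalized
    return (conj, hr, id_low, id_high)   # tuple order = ranking

for inp in ('a', 'e'):
    winner = min(('a', 'e', 'i'), key=lambda out: profile(inp, out))
    print(f'/{inp}/ -> [{winner}]')      # /a/ -> [e], /e/ -> [i]
```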

2.5 The conjunction of markedness and faithfulness constraints

So far, we have discussed the local conjunction of markedness constraints, and the conjunction of faithfulness constraints (chapter 63: markedness and faithfulness constraints). Baković (2000) and Łubowicz (2002, 2005) argue for a special role for the conjunction of markedness with faithfulness constraints (M&F conjunction). Baković (2000) shows that M&F conjunction can be used to resolve what he calls the “majority rules” problem in the analysis of voicing assimilation in obstruent clusters. In cross-syllabic obstruent clusters, voicing assimilation is generally controlled by the obstruent in onset position. In Dutch, for example, the medial clusters /kd/ in zakdoek ‘handkerchief’ and /dk/ in bloedkoraal ‘red coral’ surface as [gd] and [tk], respectively (Kager 1999).6 However, patterns of assimilation in word-final obstruent clusters are not influenced by requirements on syllable onsets. Consider a language in which an obstruent voicing contrast is preserved in word-final position, which has word-final clusters of two obstruents such as /bt#/ and /pd#/, and in which the markedness constraint Agree[voice] (obstruent clusters must agree in voicing) dominates the faithfulness constraint Ident[voice]. In such a case, where all candidates passing Agree[voice] fail the faithfulness constraint, the candidate containing the voiceless cluster is selected by *VoiObst. Where the input provides a two-consonant cluster which already satisfies the agreement constraint, for example /pt/ or /bd/, input voicing is generally preserved, showing that Ident[voice] dominates *VoiObst (as long as no “spoiler” constraint intervenes).

6 The pattern reported here for Dutch is very common. A seemingly odd characteristic of the dialect Kager describes is that voiced fricatives in onset position are always devoiced (e.g. /zv/ in kaasvorm ‘cheese mould’ surfaces as [sf]), and this requirement compels the surface voicelessness of any preceding obstruent.

(24)

/pd/      Agree[voice]   Ident[voice]   *VoiObst
   a. pd       *!                           *
   b. bd                       *            *!*
☞ c. pt                        *

/bt/      Agree[voice]   Ident[voice]   *VoiObst
   a. bt       *!                           *
   b. bd                       *            *!*
☞ c. pt                        *

This analysis predicts a different outcome for clusters of more than two obstruents. Tableau (25) shows that in such cases the ranking Ident[voice] >> *VoiObst predicts that cluster voicing will be determined by the majority. This yields the typologically supported outcome for clusters like /skd/, in which two of three obstruents are voiceless at input (e.g. English basked). Unfortunately, it also predicts rare (if at all attested) outcomes like /zgt/ → [zgd] when two of three obstruents are voiced at input.

(25)
/zgt/      Agree[voice]   Ident[voice]   *VoiObst
   a. zgt       *!                           **
☞ b. zgd                       *             ***
   c. skt                      **!

/skd/      Agree[voice]   Ident[voice]   *VoiObst
   a. skd       *!                           *
   b. zgd                      **!           ***
☞ c. skt                       *


Baković shows that the “majority rules” problem is avoided if the constraints Ident[voice] and *VoiObst can be locally conjoined, assuming the segment as the domain of the conjunction. In (26) we see that on this analysis, of the two clusters that survive Agree[voice], [zgd] and [skt], the fully voiced cluster [zgd] is rejected even though it is more faithful, because it violates the conjunction Ident[voice] &SEG *VoiObst (which is ranked above both of its conjuncts).

(26)
/zgt/      Agree[voice]   Ident[voice] &SEG *VoiObst   Ident[voice]   *VoiObst
   a. zgd                             *!                     *           ***
   b. zgt       *!                                                       **
   c. zkt       *!                                           *           *
☞ d. skt                                                     **

Baković’s larger point is that the presence of marked segments (in this case voiced obstruents) on the surface is one thing, but the presence of unfaithful marked segments is another. In cases of assimilation in mixed clusters, the presence of unfaithful marked segments can be compelled by high-ranking faithfulness constraints (e.g. assimilation to a voiced onset in cross-syllabic clusters). However, when such pressures are absent, assimilation to the unmarked is the pattern attested across languages. In Baković’s example, an analysis that admits the local conjunction Ident[voice] &SEG *VoiObst guarantees the typologically supported outcome, whereas an analysis that relies exclusively on the unconjoined constraints admits the (unsupported) assimilation to the marked alternative. Łubowicz (2002, 2005) argues that M&F conjunctions are necessary to account for derived environment effects (chapter 88: derived environment effects), using palatalization in Polish (chapter 121: slavic palatalization) as an example. Other works using local conjunction to account for derived environment effects include Downing (2001) and Itô and Mester (2003a).
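The effect of the M&F conjunction can be checked mechanically as well. The sketch below is a rough Python rendering of the ranking in (26); the inventory, the candidate set, and the helper names are assumptions made for the illustration, not Baković’s implementation.

```python
# Sketch of Agree[voice] >> Ident[voice]&SEG*VoiObst >> Ident[voice] >> *VoiObst.

VOICED = set('bdgz')
PAIR = {'p': 'b', 'b': 'p', 't': 'd', 'd': 't',
        'k': 'g', 'g': 'k', 's': 'z', 'z': 's'}

def profile(inp, out):
    agree = int(len({seg in VOICED for seg in out}) > 1)   # mixed voicing
    ident = sum(i != o for i, o in zip(inp, out))
    voiobst = sum(seg in VOICED for seg in out)
    # conjunction, domain = segment: a segment that is both unfaithful
    # and a voiced obstruent is an 'unfaithfully marked' segment
    conj = sum(i != o and o in VOICED for i, o in zip(inp, out))
    return (agree, conj, ident, voiobst)

def winner(inp):
    cands = [inp,
             ''.join(PAIR[s] if s in VOICED else s for s in inp),   # devoiced
             ''.join(s if s in VOICED else PAIR[s] for s in inp)]   # voiced
    return min(cands, key=lambda out: profile(inp, out))

print(winner('zgt'))   # 'skt': assimilation to the unmarked, not *[zgd]
print(winner('skd'))   # 'skt'
```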

2.6 Local self-conjunction

As a special case of local conjunction, some researchers have argued that markedness constraints can be “self-conjoined,” and that self-conjunction offers new insights into cases of dissimilation that have often been attributed to the OCP (e.g. Fukazawa 1999, 2001; Itô and Mester 1998, 2003b; Alderete 2004; Smolensky 2006). As Itô and Mester (2003b: 29) put it: the “Obligatory Contour Principle” is by itself neither a constraint nor a formal universal in phonological theory. The culprit in OCP-type dissimilations is not the adjacency of identical feature specifications on a tier, but the multiple presence of a marked type of structure within some domain.


Self-conjunction is intended to capture the intuition that in dissimilation, one violation of a markedness constraint is tolerated, but a double violation crosses what Itô and Mester aptly call a markedness threshold, and is categorically rejected. As an example, let us consider the role of obstruent voicing in Japanese. In the lexical classes comprised by native Japanese and Sino-Japanese forms, voiced obstruents cannot co-occur within a morpheme (chapter 86: morpheme structure constraints), but neither voiceless obstruents nor voiced sonorants are subject to voicing restrictions (Itô and Mester 1998, 2003b). Thus, morphemes such as those in (27a) and (27b) are typical of Japanese (Itô and Mester 2003b: 34–35). However, native Japanese and Sino-Japanese morphemes containing more than one voiced obstruent (e.g. *[kabazi]) are not.

(27) a. kusa   ‘grass’      kataki   ‘enemy’
        tako   ‘octopus’    hotoke   ‘Buddha’
        sato   ‘village’    tatami   ‘straw mat’
     b. kaze   ‘wind’       kasegi   ‘earning’
        geta   ‘clogs’      kagami   ‘mirror’
        hada   ‘skin’       -bakari  ‘only’

The obstruent voicing contrast in Japanese requires the constraint ranking IdentIO >> *VoiObst. The fact that multiple occurrences of voiced obstruents within a morpheme are prohibited is accounted for by self-conjoining the constraint *VoiObst, with the morpheme as the domain. The conjunction *VoiObst2MORPH (=*VoiObst &MORPH *VoiObst) is ranked above IdentIO. Tableau (28) shows how a set of candidates based on the form kaze and the hypothetical input /gaze/ are evaluated under these rankings.

(28)
/kaze/      *VoiObst2MORPH   IdentIO(Seg)   *VoiObst
☞ a. kaze                                      *(z)
   b. kase                        *!
   c. gase                        *!*          *(g)
   d. gaze         *!             *            **(g, z)

/gaze/      *VoiObst2MORPH   IdentIO(Seg)   *VoiObst
☞ a. kaze                         *            *(z)
   b. kase                        **!
☞ c. gase                         *            *(g)
   d. gaze         *!                          **(g, z)
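The markedness-threshold effect is easy to compute directly. The following sketch (toy Python; the candidate set and helper names are invented for the illustration) evaluates the rankings in (28), including the two-winner outcome for the hypothetical input /gaze/.

```python
# Sketch of *VoiObst2_MORPH >> IdentIO(Seg) >> *VoiObst: the self-conjunction
# fires once a morpheme contains two or more voiced obstruents.

VOICED_OBS = set('bdgz')

def profile(inp, out):
    n_voi = sum(s in VOICED_OBS for s in out)
    self_conj = int(n_voi >= 2)          # the markedness threshold
    ident = sum(i != o for i, o in zip(inp, out))
    return (self_conj, ident, n_voi)

def winners(inp, cands):
    best = min(profile(inp, c) for c in cands)
    return [c for c in cands if profile(inp, c) == best]

CANDS = ['kaze', 'kase', 'gase', 'gaze']
print(winners('kaze', CANDS))   # ['kaze']
print(winners('gaze', CANDS))   # ['kaze', 'gase']: either obstruent devoices
```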

Self-conjunction can account not only for passive restrictions on the occurrence of similar structures within a domain, but also for cases in which the existence of alternations makes the dissimilation more obvious. A well-known example is Lyman’s Law, which blocks the effect of Rendaku, a phenomenon seen in Japanese lexical compounds. Rendaku assigns the feature [+voice] to a voiceless obstruent appearing as the initial consonant of the second member of a compound. The feature [+voice] is treated as a special morpheme in this case, and as such, its presence in the output is required by the constraint RealizeMorph, defined in (29) (Walker 2000; chapter 103: phonological sensitivity to morphological structure).

(29) RealizeMorph
     Every morpheme has a phonological exponent in the output. (One * for any null morpheme.)

The Rendaku effect requires RealizeMorph to be ranked more highly than IdentIO(Seg). Tableau (30) is adapted from Itô and Mester (2003b: 37) (constraint names have been changed to match the names used here). The examples used in tableaux (30) and (31) are /ori + kami/ → [origami] ‘paper folding’ and /kami + kaze/ → [kamikaze] ‘divine wind’.

(30)
/ori+R+kami/    RealizeMorph   IdentIO(Seg)   *VoiObst
☞ a. origami                        *           *(g)
   b. orikami        *!

The Lyman’s Law effect occurs because the self-conjunction *VoiObst2MORPH dominates RealizeMorph, neutralizing its effect.

(31)
/kami+R+kaze/    *VoiObst2MORPH   RealizeMorph   IdentIO(Seg)   *VoiObst
☞ a. kamikaze                          *                          *(z)
   b. kamigaze        *(gaze)!                        *           **(g, z)

Beyond the examples discussed in §2, local conjunction and its special variant, self-conjunction, have been used in the analysis of a diverse set of phenomena. Local conjunction has been used in analyses of sonority distance restrictions on syllable structures (Baertsch 1998; Baertsch and Davis 2003; Smolensky 2006), the sonority hierarchy (Smolensky 2006), tone sandhi in Mandarin Chinese (Lin 2000), glide formation in German (Hall 2004, 2007), and accentual phenomena (Alderete 1999). Finally, Levelt et al. (1999) and Levelt and van de Vijver (1998) propose that local conjunction plays a role in children’s acquisition of the constraint hierarchy of the language they are learning.

3 Other modes of constraint combination

Other proposals for deriving complex constraints from simpler ones using connectives other than “&” have included constraint analogs of Boolean conjunction and two versions of material implication.

3.1 The “best of the best”

Hewitt and Crowhurst (1996) and Crowhurst and Hewitt (1997) argue that in addition to local conjunction and self-conjunction, a proper account of some phonological patterns is best achieved using complex constraints derived using a connective, “∧,” with the semantics of Boolean conjunction. Crowhurst and Hewitt call this mode of interaction simply “constraint conjunction.” Here, we will call it b-conjunction, to distinguish it from local conjunction. In contrast to local conjunction, which bans the “worst of the worst,” b-conjunction insists on the “best of the best.” See the definitions and tables in (32) and (33).

(32) a. Boolean conjunction
        The conjunction A∧B is true iff proposition A is true and proposition B is true.
     b. B-conjunction
        A candidate Cand passes a b-conjunction A∧B iff Cand passes constraint A and Cand passes constraint B.

(33) a. Boolean conjunction

        A   B   A∧B
        T   T    T
        T   F    F
        F   T    F
        F   F    F

     b. Evaluation of a B-conjunction

                  C1 ∧ C2   C1   C2
        Cand1
        Cand2       *!       *
        Cand3       *!            *
        Cand4       *!       *    *
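The difference between the two connectives reduces to a difference in truth tables. The schematic sketch below (hypothetical Python, written only to restate (32) and (33)) makes the contrast explicit: a local conjunction is violated only when both conjuncts are violated, whereas a b-conjunction is violated whenever either conjunct is.

```python
# '&' (local conjunction) vs. '∧' (b-conjunction), for one shared domain/focus.

def local_conjunction_violated(a_violated, b_violated):
    # worst of the worst: both conjuncts must fail
    return a_violated and b_violated

def b_conjunction_violated(a_violated, b_violated):
    # best of the best: the candidate must pass both conjuncts
    return a_violated or b_violated

for a in (False, True):
    for b in (False, True):
        print(a, b, ' & ->', local_conjunction_violated(a, b),
              ' \u2227 ->', b_conjunction_violated(a, b))
```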

Crowhurst and Hewitt restrict b-conjunction to constraints that share what they call a focus (or fulcrum in Hewitt and Crowhurst 1996), which identifies the locus of violation for each of the conjuncts.

(34) Focus (of a constraint)def
     a. Every constraint has a unique focus.
     b. A constraint’s focus is identified by the universally quantified argument.

Crowhurst and Hewitt’s use of b-conjunction is exemplified by their analysis of alignment effects in the stress system of Diyari (Pama-Nyungan; Austin 1981). In Diyari, any morpheme of two or more syllables has initial stress, and secondary stress falls on the penult in morphemes longer than three syllables, as shown in (35a). The cases of special interest are polymorphemic words containing monosyllabic suffixes, exemplified in (35b). Monosyllabic suffixes are never stressed. Moreover, no syllable preceding any morpheme is ever stressed (cf. [’puljudu-Ki‘maÍa], [’maÕa-Îa-Ki]). Thus, although they have the same number of syllables, the forms [’maÕa-Îa-Ki] and [’kaJa-‘waÈa] are assigned stress quite differently: the bisyllabic suffix [-’waÈa-] has stress, but the sequence of suffixes [-Îa-Ki-] does not.7

(35) a. ’kaJa              (’ka.Ja)                  ‘man’
        ’pinadu            (’pi.na)du                ‘old man’
        ’Ianda‘walka       (’Ian.da)(‘wal.ka)        ‘to close’
        ’wintara‘naja      (’winta)ra(‘naja)         ‘how long’
     b. ’kaJa-‘waÈa        (’ka.Ja)-(‘wa.Èa)         ‘man-pl’
        ’taji-‘jati‘maji   (’ta.ji)-(‘ja.ti)(‘ma.ji) ‘to eat-opt’
        ’maÕa-Îa-Ki        (’ma.Õa)-Îa-Ki            ‘hill-char-loc’
        ’puljudu-Ki-‘maÍa  (’pu.lju)du-Ki-(‘ma.Ía)   ‘mud-loc-ident’
        ’pinadu-‘waÈa      (’pi.na)du-(‘wa.Èa)       ‘old man-pl’
        ’Janda-na-‘maÍa    (’Jan.da)-na-(‘ma.Ía)     ‘hit-part-ident’

The lynchpin of Crowhurst and Hewitt’s account of Diyari is the conjunction in (36c) of the two alignment constraints in (36a) and (36b). The individual constraints require an edge (left or right) of a morpheme to be aligned with the same edge of a foot. In conjunction, they require each edge of a morpheme to be aligned with the same edge of a foot.

(36) a. MorphemeFt-Left (MFL)
        Align-L(Morpheme, Ft)
        (One * for any morpheme whose left edge does not coincide with the left edge of some foot.)
     b. MorphemeFt-Right (MFR)
        Align-R(Morpheme, Ft)
        (One * for any morpheme whose right edge does not coincide with the right edge of some foot.)
     c. MFL ∧MORPH MFR
        (One * per morpheme which fails MFL, MFR, or both.)
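The violation counts in (36) can also be stated procedurally. In the sketch below (a toy Python encoding in which morphemes and feet are represented as syllable-index spans; the encoding is an assumption made for the illustration), the b-conjunction assigns one violation per morpheme that fails either alignment requirement.

```python
# Morphemes and feet as (first_syllable, last_syllable) spans within a word.

def mfl_and_mfr(morphemes, feet):
    # b-conjunction with the morpheme as focus: one * per morpheme that
    # fails MFL, MFR, or both
    lefts = {f[0] for f in feet}
    rights = {f[1] for f in feet}
    return sum(m[0] not in lefts or m[1] not in rights for m in morphemes)

# ('ta.ji)-('ja.ti)('ma.ji): morphemes over syllables 0-1 and 2-5,
# feet over 0-1, 2-3, 4-5: both morphemes aligned at both edges
print(mfl_and_mfr([(0, 1), (2, 5)], [(0, 1), (2, 3), (4, 5)]))   # 0
# ('ta.ji)-ja.ti('ma.ji): jatimaji is right-aligned only
print(mfl_and_mfr([(0, 1), (2, 5)], [(0, 1), (4, 5)]))           # 1
```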

Crowhurst and Hewitt treat Diyari as a language that avoids foot structure when possible. They rank the standard constraint Parse-σ below *Struc(Ft), which assigns a penalty for each foot in the output. The conjunction in (36c) outranks *Struc(Ft), resulting in a foot at each edge of morphemes whose syllable count makes this possible. FtMin(σ) prevents the assignment of monosyllabic feet. Tableau (37) shows how the correct outputs (’winta)ra(‘naja) and (’ta.ji)-(‘ja.ti)(‘ma.ji) are selected under the rankings FtMin >> MFL ∧MORPH MFR >> *Struc(Ft). (In the following tableaux, violations of the individual conjuncts that are promoted to violations of the conjunction are shown in parentheses. Violations of the alignment constraints are shown with a superscript corresponding to the initial phoneme of the morpheme that incurred the penalty.)

7 Note that the underived pentasyllabic form [’wintara‘naja] has initial and penultimate (not antepenultimate) stress. Hewitt and Crowhurst (1996) use this example (from Austin 1981) to show that stress in Diyari is not assigned from left to right, as many theoretical accounts of Diyari have assumed (e.g. Poser 1989; Crowhurst 1994; Kager 1994). Rather, Diyari assigns a trochaic foot at both edges of a morpheme when it can.

(37)
/taji-jatimaji/                    FtMin   MFL ∧MORPH MFR   *Struc(Ft)
☞ a. (’ta.ji)-(‘ja.ti)(‘ma.ji)                                  ***
   b. (’ta.ji)-ja.ti(‘ma.ji)               (*j) *j!             **
   c. (’ta.ji)-(‘ja.ti).ma.ji              (*j) *j!             **
   d. (’ta.ji)-ja.ti.ma.ji                 (*j)(*j) *j!         *

/wintaranaja/                      FtMin   MFL ∧MORPH MFR   *Struc(Ft)
☞ a. (’winta)ra(‘naja)                                          **
   b. (’win.ta)ra.na.ja                    (*w) *w!             *
   c. win.ta.ra(‘na.ja)                    (*w) *w!             *

The crucial cases are those that contain monosyllabic affixes. In [’Jan.da-na-‘ma.Ía], the monosyllabic suffix [-na-] is not footed, but the two bisyllabic morphemes are aligned at each edge with a foot. The especially interesting case is [’ma.Õa-Îa-Ki], which has adjacent monosyllabic suffixes. The candidate (’ma.Õa)-(‘Îa)-(‘Ki), which satisfies the alignment constraints, is ruled out by FtMin. If the constraints MFL and MFR were not conjoined but were evaluated independently, then of the remaining two, the optimal candidate should be (’ma.Õa)-(‘Îa-Ki), which minimizes violations of MFL and MFR. However, the optimal candidate is in fact (’ma.Õa)-Îa-Ki, in which neither of the suffixes is aligned with foot structure. When MFL and MFR are b-conjoined, however, a morpheme aligned at only one edge is no better than a morpheme aligned at both edges, so that (’ma.Õa)-(‘Îa-Ki) and (’ma.Õa)-Îa-Ki both fail the conjunction MFL ∧MORPH MFR. In this case, *Struc(Ft) decides for the candidate (’ma.Õa)-Îa-Ki, in which the monosyllabic suffixes are unfooted. The analysis of [’ma.Õa-Îa-Ki] under b-conjunction is shown in (38).

(38)
/maÕa-Îa-Ki/                  FtMin   MFL ∧MORPH MFR        *Struc(Ft)
☞ a. (’ma.Õa)-Îa-Ki                   (*Î *K)(*Î *K) *Î *K       *
   b. (’ma.Õa)-(‘Îa-Ki)               (*Î)(*K) *Î *K             **!
   c. (’ma.Õa)-(‘Îa)-(‘Ki)     *!*                               ***

The analysis just presented for [’ma.Õa-Îa-Ki] predicts that trisyllabic roots should not be aligned with foot structure, yet, like all other root morphemes, they have initial stress. Crowhurst and Hewitt’s analysis attributes the presence of initial stress to the constraint MainStress-L in (39), which insists that the head of the prosodic word stand exactly at the left edge of the prosodic word. Ranking MainStress-L above the conjunction MFL ∧MORPH MFR compels the presence of an initial foot, even though that foot is poorly aligned with the root morpheme in the output.

(39) MainStress-L (MSL)
     Align(PrWd, L; Head(PrWd), L)
     (Every PrWd is left aligned with the stressed syllable in its head foot; one * per misaligned PrWd.)

B-conjunction is also useful in stating templatic constraints on the size of reduplicants (chapter 100: reduplication). A constraint like Red=Ft, for example, can be stated as in (40a). However, note that this statement informally assumes the interpretation provided by b-conjunction: any reduplicant must be both left aligned and right aligned with the same foot. Under b-conjunction, the templatic effect is achieved by b-conjoining the constraints in (40b) and (40c), with the foot as the focus of the conjunction. Under this analysis, any foot will be evaluated against the conjunction. Any foot not co-extensive with Red will fail the conjunction, although it might be required by higher-ranking constraints on metrical structure. However, the constraint will be satisfied by reduplicated forms in which Red and a foot are co-extensive.

(40) a. Red=Ft
        The reduplicant string is co-extensive with a foot (e.g. Downing 2000: 3).
     b. FootRed-L
        Align-L(Ft, AffixRED)
        (One * for any reduplicant that is not left aligned with a foot.)
     c. FootRed-R
        Align-R(Ft, AffixRED)
        (One * for any reduplicant that is not right aligned with a foot.)
     d. FootRed-L ∧FOOT FootRed-R

The connective “∧” is also used by Downing (1998, 2000) to account for exceptional alignment behavior found with onsetless syllables (e.g. as stress and tone bearers) and prosodic restrictions on reduplication in Kinande.

3.2 Implication

A different form of constraint combination, most closely related to material implication in logic, was proposed by Crowhurst and Hewitt (1997) as a way of analyzing effects of conflicting directionality (Kiparsky 1973; Zoll 1997) in the assignment of stress. As an example, Dongolese Nubian (Armbruster 1960) assigns a single stress, which surfaces on the rightmost heavy syllable, if there are any (where only vowel length is relevant for syllable weight). Examples are shown in (41a). However, when only light syllables are present, as in (41b), stress falls on the initial syllable.

(41) a. Heavy syllables
        ’bee.kat.t>                   ‘to be killed’
        do.’goo.g>r                   ‘raise it’
        te.le.’graaf.k>               ‘a telegram’
        t>n.t>.’neeI.ke.g>d           ‘their maternal aunt’
        maa.’suu.ra                   ‘tube, pipe’
        maa.’leeœ                     ‘it doesn’t matter’
        se.ree.g>r.œug.lee.re.’daag   ‘be in the situation of having worked well’
     b. Light syllables
        ’bu.run                       ‘it is a girl’
        ’ta.ra.ga                     ‘page, leaf’
        ’mu.go.san                    ‘tell to leave’
        ’–>.J>.ran                    ‘tell him (her) to go and wait’

Crowhurst and Hewitt’s analysis of Dongolese uses the constraints in (42). The pattern found in words containing heavy syllables seems to require that the constraint HeavyHead in (42a) be ranked above Heads-R in (42b), which must in turn be ranked above Heads-L, (42c).

(42) a. HeavyHead
        The head syllable of a foot is bimoraic. (One * for any stressed light syllable.)
     b. Heads-R
        Align-R(Head(Ft), PrWd)
        (One * per syllable coming between any stressed syllable and the right edge of the dominating PrWd.)
     c. Heads-L
        Align-L(Head(Ft), PrWd)
        (One * per syllable coming between any stressed syllable and the left edge of the dominating PrWd.)

The problem is that the ranking required for the heavy syllable cases doesn’t account for stress in forms containing only light syllables: these seem to require that the alignment constraints be ranked in the reverse order, Heads-L >> Heads-R. The insight Crowhurst and Hewitt (1997) seek to capture is that in a language like Dongolese, heavy syllable stress and any conditions imposed on heavy syllables under stress (in this case, Heads-R) take priority. Heads-L becomes relevant only when no heavy syllables are present. Crowhurst and Hewitt propose that HeavyHead and Heads-R combine in a complex constraint that takes the form of an implication, in which the satisfaction of one requirement is unilaterally dependent on the satisfaction of another. Under their interpretation of material implication, whether a candidate passes A > B depends primarily on the candidate’s success on constraint A, and secondarily on its success on constraint B. Crowhurst and Hewitt’s definition of implication appears in (43).8

(43) Implication (Crowhurst and Hewitt 1997)
     If Cand passes A, then Cand is evaluated with respect to B:
        If Cand passes B, then Cand passes A > B;
        If Cand fails B, then Cand fails A > B.
     If Cand fails A, then Cand fails A > B, and Cand’s success on B is irrelevant.
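The contrast between this interpretation and classical material implication can be stated compactly. The sketch below is hypothetical Python; the gradient counting it assumes (one violation when A fails, B’s violations promoted when A passes) follows the prose surrounding tableaux (44) and (45) and is an interpretive assumption, not Crowhurst and Hewitt’s formal statement.

```python
def ch_implication(a_viols, b_viols):
    # Crowhurst & Hewitt's 'A > B': failing A means one violation of the
    # implication, with B moot; passing A promotes B's violations
    return 1 if a_viols > 0 else b_viols

def material_implication(a_viols, b_viols):
    # classical semantics (cf. Balari et al.'s '->'): violated only where
    # A is satisfied and B is not
    return b_viols if a_viols == 0 else 0

# (44) [mu.go.san]: every candidate fails HeavyHead, so each incurs exactly
# one violation of HeavyHead > Heads-R, whatever its Heads-R score:
print([ch_implication(1, b) for b in (2, 1, 0)])   # [1, 1, 1]: Heads-L decides
```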

As in the case of conjunction, Crowhurst and Hewitt propose that only constraints which share an argument may combine to form implications. Crowhurst and Hewitt’s account of Dongolese ranks the implication HeavyHead >HEAD(FT) Heads-R above Heads-L. Tableau (44) shows how their analysis works in forms like [’mu.go.san], which contain only light syllables. All viable candidates violate HeavyHead (constraint A), and this translates to one violation each against the implication HeavyHead >HEAD(FT) Heads-R. (Heads-R violations are shown in parentheses to indicate that, per Crowhurst and Hewitt’s definition, this constraint has no effect on the outcome.) So it is that in forms containing only light syllables, Heads-L selects the initially stressed candidate.

8 Note that Crowhurst and Hewitt’s use of implication does not have the semantics of classical material implication in Boolean logic.

(44)
/mugosan/            HeavyHead >HEAD(FT) Heads-R   Heads-L
☞ a. (’mu.go)san          * (**)
   b. mu(’go.san)         * (*)                      *!
   c. mu.go(’san)         *                          *!*

Tableau (45) shows how the analysis selects the correct result in forms containing long vowels, such as [maa.’suu.ra].

(45)
/maasuura/           HeavyHead >HEAD(FT) Heads-R   Heads-L
☞ a. maa(’suu)ra          *                          *
   b. (’maa)suu.ra        **!
   c. (‘maa)(’suu)ra      **!*                       *
   d. maa.suu(’ra)        *                          **!

In response to Crowhurst and Hewitt, Balari et al. (2000) argue essentially that Crowhurst and Hewitt’s interpretation of implication should not be allowed. These authors maintain that if complex constraints are to be derived using logical connectives, they should be connectives with classical Boolean semantics. Like Crowhurst and Hewitt’s, Balari et al.’s analysis of Dongolese uses Heads-L and Heads-R, but they replace HeavyHead with a constraint LightHead in (46a), which requires stress to fall on a monomoraic syllable, and they propose the (classically evaluated) implication in (46b).9 The tableaux for [’mu.go.san] and [maa.’suu.ra] under Balari et al.’s analysis appear in (47).10

(46) a. LightHead
        The head σ of a foot is monomoraic.
     b. LightHead → Heads-L
        The implication is violated only if LightHead is satisfied and Heads-L is not.

9 Balari et al. use the symbol “→” in the same way as “⊃” in Boolean logic. Balari et al.’s take on conflicting directionality is very close to that of Zoll (1997). Zoll proposes the constraint below to account for stress in Selkup. Note that Zoll’s constraint is an unacknowledged complex constraint, which, when unpacked, has the semantics of Balari et al.’s implicational constraint.
   (i) Align-L(’σμ, PWd): A light stressed syllable should be word-initial.
10 Balari et al. differ from Crowhurst and Hewitt in assuming that Heads-L and Heads-R restrict the foot, not just the stressed syllable. (47) is taken from their work and therefore reflects this difference.

(47)
/mugosan/            LightHead → Heads-L   Heads-R
☞ a. (’mu.go)san                              *
   b. mu(’go.san)         *!
   c. mu.go(’san)         *!*

/maasuura/           LightHead → Heads-L   Heads-R
☞ a. maa(’suu)ra                              *
   b. (’maa)suu.ra                            **!
   c. (‘maa)(’suu)ra                          **!*
   d. maa.suu(’ra)        *!*

In the end, it turns out that neither version of implication is strictly necessary to account for standard effects of conflicting directionality in stress. Conflicting directionality in stress, as in Dongolese, can be analyzed using the local conjunction of the constraints *PkFT/σμ and InitialStress in (48a) and (48b) (taking the domain of the conjunction to be the prosodic word).

(48) a. *PkFT/σμ
        Avoid stress on monomoraic syllables.
     b. InitialStress
        Align-L(Head(PrWd), PrWd)
        (The head of the PrWd, the syllable with main stress, occurs initially in its prosodic word.)
     c. *PkFT/σμ &PRWD InitialStress

Tableau (49) shows that the hierarchy *PkFT/σμ &PRWD InitialStress >> *PkFT/σμ >> Heads-R correctly accounts for Dongolese.

(49)
/mugosan/            *PkFT/σμ &PRWD InitialStress   *PkFT/σμ   Heads-R
☞ a. (’mu.go)san                                        *          *
   b. mu(’go.san)               *!                      *
   c. mu.go(’san)               *!                      *

/maasuura/           *PkFT/σμ &PRWD InitialStress   *PkFT/σμ   Heads-R
☞ a. maa(’suu)ra                                                   *
   b. (’maa)suu.ra                                                 **!
   c. maa.suu(’ra)              *!                      *

In addition to Balari et al., other arguments in favor of constraints derived using a connective with the semantics of Boolean material implication are presented in Archangeli et al. (1998) and Łubowicz (2005).

4 Different perspectives on the domain of conjunction

One distinguishing feature of work using various forms of conjunction has been researchers’ different assumptions concerning what can constitute D, the domain of a conjunction. As the definition in (3) makes clear, Itô and Mester’s (2003b) view is that D must be instantiated by a member of the set of phonological or morphological constituents. Smolensky’s use of D is similar, but perhaps more inclusive in his use of the feature domain – a “maximal contiguous span of φ-bearers with a common value [±φ]” (see again (13)). We can say that for both Itô and Mester as well as Smolensky, the domain of a local conjunction is identified with a designated node in the phonological or morphological structure, and all material associated with this node (the association possibly being mediated by other intervening nodes). One difference between these authors seems to be that for Smolensky, but perhaps not Itô and Mester (2003b), any phonological feature can be the node that determines a domain.

A different perspective on D is found in Hewitt and Crowhurst (1996), Crowhurst and Hewitt (1997), Downing (1998, 2000), Baković (2000), and Łubowicz (2002, 2005), all of whom argue in one way or another that constraints can be conjoined only when they have the same locus of violation. As noted earlier, Hewitt and Crowhurst (1996), Crowhurst and Hewitt (1997), and Downing (1998, 2000) argue that b-conjunction should be limited to constraints that share an argument, which serves as the locus of violation of the conjoined constraint. In Crowhurst and Hewitt’s analysis of Diyari reviewed earlier, for example, the locus of violation of the constraints MorphemeFt-L (MFL) and MorphemeFt-R (MFR) is the morpheme, and the conjunction of these constraints produces a complex constraint, MFL ∧MORPH MFR, which has the same locus of violation. An extended formal discussion of the locus of violation as a principled restriction on the domain of local conjunctions of markedness and faithfulness constraints is developed in Łubowicz (2005). Her definitions of restricted local conjunction and of the locus for local conjunction are shown in (50).

(50) a. Restricted local conjunction
        C = C1 & C2 is violated iff LOC(C1) ∩ LOC(C2) ≠ Ø.
     b. Locus for local conjunction
        The locus for local conjunction is the intersection of the sets LOC(C1) and LOC(C2).

Formally, (50a) states that the loci of violation for the conjuncts of a local conjunction must intersect and that this intersection may not be null; (50b) states that the locus of violation for the local conjunction is the intersection of sets defined in (50a). In a similar vein, Baković (2000) proposes that two constraints can be conjoined only if they are co-relevant, meaning that the definition of each conjunct specifies a particular feature also mentioned by the other conjunct. The co-relevance restriction plays an important role in Baković’s use of M&F (markedness and faithfulness) conjunctions to account for “assimilation to the unmarked” phenomena, as discussed earlier. In his words, “the net effect of a co-relevant local conjunction of markedness and faithfulness is to specifically prohibit the unfaithful introduction of a marked segment” (Baković 2000: 7). Baković notes that the co-relevance restriction (or Łubowicz’s more precisely defined restriction based on locus of violation) provides an answer to Itô and Mester (1998), who take the position that M&F conjunctions should not be allowed because they lead to undesirable results. Positioning the conjunction NoCoda & Ident[voice] above the fragment *[+voice] >> Ident[voice], for example, would falsely predict a language in which obstruents are voiced only in syllable codas. Baković notes that the conjuncts used to illustrate Itô and Mester’s point are not co-relevant, and in fact uses the example to reinforce his claim that only co-relevant constraints are conjoinable.
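Łubowicz’s locus-based restriction invites a direct procedural statement. In the schematic sketch below (hypothetical Python; representing loci as sets of segment indices is an assumption made for the illustration), a restricted local conjunction is violated only where the conjuncts’ violation loci intersect, per (50).

```python
def conjoin(loc_c1, loc_c2):
    locus = loc_c1 & loc_c2      # (50b): the conjunction's locus
    return len(locus), locus     # violated iff the intersection is non-null

# /zgt/ -> [zgd]: Ident[voice] violated at index 2 (t -> d); *VoiObst at
# indices 0, 1, 2 (z, g, d). The loci intersect at {2}: conjunction violated.
print(conjoin({2}, {0, 1, 2}))   # (1, {2})
# /zgt/ -> [skt]: Ident at {0, 1}, *VoiObst nowhere: no shared locus.
print(conjoin({0, 1}, set()))    # (0, set())
```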

5 Taming the beast: Curbing the expressive power of constraint conjunction

Proposals for combining simpler constraints into more complex ones using connectives other than “>>” have not been universally welcomed (see for example Orgun and Sprouse 1999; Parker 2001; Idsardi 2006; Iverson and Salmons 2007; Zhang 2007). The issue of restricting the power of a formal theory that permits the derivation of complex constraints from simpler ones has generated concern, and inspired various proposals aimed at restrictiveness. The position of some scholars that restrictions on conjunction should be encoded in restrictions on the domain of conjunction was discussed in the last section. Other ways in which the issue of restrictiveness has played into arguments for and against constraint conjunction are reviewed below.

5.1 Chain shifts: An argument against material implication

Wolf (2007) calls for restricting the set of connectives that can be used to combine constraints. He asserts that local conjunction and strict domination exhaust the ways in which constraints may interact. In particular, he argues that complex constraints with the semantics of material implication must not be allowed, because admitting such constraints would have the consequence of allowing OT to model synchronic circular chain shifts. A circular chain shift takes the form A → B → A (where B can be expanded to accommodate additional links in the chain), in which a phoneme /A/ shifts to sound [B], and phoneme /B/, also present in the language, shifts to [A] (see also chapter 73: chain shifts). The existence of a circular chain shift in which all links occur synchronically would present a problem for the OT doctrine of harmonic ascent (Prince and Smolensky 1993; McCarthy 2002). This is because a change A → B could only occur if B is less marked than A (and this would require a particular set of markedness constraints to outrank a particular set of faithfulness ones). But then, if B is less marked than A, there would be no reason for the change B → A (and the ranking that accounted for A → B could not account for the second change). Moreton (1999) provides a formal proof showing that an OT grammar that admits only faithfulness and markedness constraints is incapable of modeling circular chain shifts. As it turns out, there are no convincing synchronic examples of a circular chain shift conditioned by purely phonological factors, although there do appear to be examples triggered by morphological conditions (Anderson and Browne 1973). Diachronic examples of phonologically conditioned circular chain shifts are quite common (e.g. the flip-flop /ɨ/ → /i/ → /ɨ/ in Sirionó: Crowhurst 2000; the Germanic Kreislauf: Iverson and Salmons 2008), but these might be best handled by an analysis that assumes changes in constraint rankings at various stages of a language’s development, along the lines proposed in Holt (2003).

5.2 Other proposals for promoting restrictiveness

Several other proposals aimed at promoting restrictiveness should be mentioned. Some researchers have proposed restrictions (beyond the locus of violation) on the kinds of constraints that can combine. As noted earlier, Itô and Mester (1998) argued against the conjunction of markedness with faithfulness constraints. Fukazawa and Miglio (1998) proposed that conjunction should be limited to constraints within the same family, which would have much the same effect. These proposals have been countered by persuasive demonstrations of the benefits of conjoining faithfulness and markedness constraints (see §2.5).

A commonly accepted restriction has been to allow only complex constraints whose parts are independently necessary constraints. To do otherwise would permit unnatural results. Smolensky (2006) provides the example of a rule spreading [+ATR] from a [−high] vowel (e.g. [o . . . e] → [D . . . e]). This pattern is exactly what we don’t find in ATR harmony, because [−ATR] non-high vowels are marked, and only unmarked feature bearers propagate their features (Archangeli and Pulleyblank 1994). Smolensky (2006) notes that ranking the conjunction *[+ATR, +hi] & HD-L[+ATR] above F[ATR] (the constraint requiring a feature domain to be faithful) could produce this result. However, the conjunction *[+ATR, +hi] & HD-L[+ATR] could simply never exist; one of its conjuncts, *[+ATR, +hi], is not a member of Con, since from a markedness perspective, [+ATR] favors high vowels.

However pernicious, conjunctive relations of this type have in fact been proposed in the literature, sometimes to account for very real phenomena. An example would be the implication in (46b), used by Balari et al. (2000) to account for directional stress effects. Their constraint LightHead, which demands that stressed syllables be light, has no independent motivation – it is in fact an anti-harmonic constraint (since stress on light syllables is less harmonic than stress on heavy syllables). If LightHead is not a plausible candidate for membership in Con, then it would seem reasonable to conclude that Balari et al.’s conjunction LightHead → Heads-L is not an admissible complex constraint. Another example would be the constraint *Lapse in (51), employed by Elenbaas and Kager (1999: 282), who are proponents of the view that only independently motivated constraints may be combined.

(51) *Lapse
     Every weak beat must be adjacent to a strong beat or the word edge.

Elenbaas and Kager note the disjunction in the requirement imposed by (51), but they point out that one of the disjuncts has no justification as a constraint in its own right. The first requirement, “every weak beat must be adjacent to a strong beat,” is a cross-linguistically common restriction. However, Elenbaas and Kager note that there is no clear and independent motivation for the second requirement, that “every weak beat must be adjacent to the word edge.” For this reason, they conclude that *Lapse should not be treated as a complex constraint. However, whether acknowledged or not, *Lapse is a form of disjunction (local conjunction), and a proper definition of the constraint would have to take this disjunction into account (more on this below). *Lapse is more problematic than the previous example, because the need for *Lapse is well established: as we showed in the last section, the Balari et al. implication is unnecessary; alternatives are readily available.
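The disjunction buried in *Lapse becomes explicit once the constraint is stated procedurally. The following sketch (a toy Python encoding of a metrical grid as a sequence of strong and weak beats; the encoding is invented for the illustration) evaluates the two disjuncts separately, so that neither can be left implicit.

```python
# Grid encoding: 's' = strong beat, 'w' = weak beat.

def lapse_violations(grid):
    viols = 0
    for i, beat in enumerate(grid):
        if beat != 'w':
            continue
        next_to_strong = 's' in grid[max(0, i - 1):i + 2]   # first disjunct
        next_to_edge = i in (0, len(grid) - 1)              # second disjunct
        if not (next_to_strong or next_to_edge):
            viols += 1
    return viols

print(lapse_violations('swwsw'))   # 0: every weak beat is licensed
print(lapse_violations('swwws'))   # 1: the middle weak beat is stranded
```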

6 Concluding remarks

Proponents of allowing devices for combining constraints have argued that they further the goal of deriving empirical generalizations about markedness from irreducible principles (Itô and Mester 2003b; Smolensky 2006) and (a related point) that they promote greater precision and formal explicitness. Resistance to accepting complex constraints has been due largely to the observation that despite the advantages claimed, devices for combining constraints greatly increase the expressive power of the formal architecture, especially in regard to the constraint inventory Con. On the other hand, Itô and Mester (2003b: 22) note that in fact, the use of connectives keeps in check the proliferation of inexplicit constraints that duplicate aspects of others without a formal account of their components and the relations among them. (That is, since the need for the constraints in (6a) and (6b) is uncontroversial, and if we can combine them as in (4c), then we don’t need the coda condition in (5).)

In closing, I will put forward a consideration that may seem provocative, but which ought to be taken seriously. Smolensky (2006) points out that the statement normally given for a constraint such as (51) (of which there are many examples in the literature) may pass as a translation of the definition of a constraint into English (or pseudo-formal language blended with English), but it is not a proper definition of that constraint (i.e. stated in the precise language of formal logic). A precise definition of (51) in formal language must necessarily take account of the disjunction it contains; it cannot do otherwise. Whether complex constraints should be allowed is a non-issue. In fact, they have been used without controversy from the inception of OT, camouflaged as informal descriptions that have been accepted as definitions of constraints. The issue that deserves consideration is whether the position that complex constraints should be excluded can be maintained in light of its implications – most seriously, the implication that an enormous number of generally accepted constraints ought to be expelled from Con, or at least rethought.

On the issue of overgeneration, we may perhaps leave the last word, for now, to Itô and Mester:

    The broadly defined outline of local conjunction theory . . . admits a huge number of conjoined constraints, only a small subset of which will turn out to play a role in grammar, and many of which are unwanted. In our view, the task of distinguishing between “reasonable,” “plausible,” “expected” conjunctions and “unreasonable,” “implausible,” “unexpected” conjunctions cannot be relegated to the syntax of conjunction, which simply provides a system for expressing derived constraints. The distinction is an issue of phonological substance and phonetic groundedness, not one of formalization (Itô and Mester 2003b: 24).


ACKNOWLEDGMENTS

The author appreciates useful comments received from the editors, Marc van Oostendorp and Keren Rice, and from Scott Myers, Joe Salmons, and two anonymous reviewers. Their input has helped to make this a better chapter.

REFERENCES

Alderete, John. 1999. Faithfulness to prosodic heads. In Ben Hermans & Marc van Oostendorp (eds.) The derivational residue in phonology, 29–50. Amsterdam & Philadelphia: John Benjamins.
Alderete, John. 2004. Dissimilation as local conjunction. In John J. McCarthy (ed.) Optimality Theory in phonology: A reader, 394–406. Malden, MA & Oxford: Blackwell.
Anderson, Stephen R. & Wayles Browne. 1973. On keeping exchange rules in Czech. Papers in Linguistics 6. 445–482.
Archangeli, Diana & Douglas Pulleyblank. 1994. Grounded phonology. Cambridge, MA: MIT Press.
Archangeli, Diana, Laura Moll & Kazutoshi Ohno. 1998. Why not *NY? Papers from the Annual Regional Meeting, Chicago Linguistic Society 34. 1–26.
Armbruster, Charles Hubert. 1960. Dongolese Nubian: A grammar. Cambridge: Cambridge University Press.
Austin, Peter. 1981. A grammar of Diyari, South Australia. Cambridge: Cambridge University Press.
Baertsch, Karen. 1998. Onset sonority distance constraints through local conjunction. Papers from the Annual Regional Meeting, Chicago Linguistic Society 34. 1–16.
Baertsch, Karen & Stuart Davis. 2003. The split margin approach to syllable structure. ZAS Papers in Linguistics 32. 1–14.
Baković, Eric. 2000. Harmony, dominance, and control. Ph.D. dissertation, Rutgers University.
Balari, Sergio, Rafael Marín & Teresa Vallverdu. 2000. Implicational constraints, defaults, and markedness. Unpublished ms., Universitat Autònoma de Barcelona (ROA-396).
Bavin Woock, Edith & Michael Noonan. 1979. Vowel harmony in Lango. Papers from the Annual Regional Meeting, Chicago Linguistic Society 15. 20–29.
Beckman, Jill N. 2003. The case for local conjunction: Evidence from Fyem. Proceedings of the West Coast Conference on Formal Linguistics 22. 56–69.
Cassimjee, Farida & Charles W. Kisseberth. 1998. Optimal Domains Theory and Bantu tonology: A case study from Isixhosa and Shingazidja. In Larry M. Hyman & Charles W. Kisseberth (eds.) Theoretical aspects of Bantu tone, 33–132. Stanford: CSLI.
Cole, Jennifer & Charles W. Kisseberth. 1994. Nasal harmony in Optimal Domains Theory. Unpublished ms., University of Illinois.
Crowhurst, Megan J. 1994. Prosodic alignment and misalignment in Diyari, Dyirbal, and Gooniyandi: An optimizing approach. Proceedings of the West Coast Conference on Formal Linguistics 13. 16–31.
Crowhurst, Megan J. 2000. A flip-flop in Sirionó (Tupian): The mutual exchange of /i ɨ/. International Journal of American Linguistics 66. 57–75.
Crowhurst, Megan J. 2001. Coda conditions and um infixation in Toba Batak. Lingua 111. 561–590.
Crowhurst, Megan J. & Mark Hewitt. 1997. Boolean operations and constraint interactions in Optimality Theory. Unpublished ms., University of North Carolina & Brandeis University (ROA-229).
Downing, Laura J. 1998. On the prosodic misalignment of onsetless syllables. Natural Language and Linguistic Theory 16. 1–52.


Downing, Laura J. 2000. Morphological and prosodic constraints on Kinande verbal reduplication. Phonology 17. 1–38.
Downing, Laura J. 2001. Liquid spirantisation in Jita. Malilime: Malawian Journal of Linguistics 2. 1–27.
Elenbaas, Nine & René Kager. 1999. Ternary rhythm and the lapse constraint. Phonology 16. 273–329.
Fukazawa, Haruka. 1999. Theoretical implications of OCP effects on features in Optimality Theory. Ph.D. dissertation, University of Maryland at College Park (ROA-307).
Fukazawa, Haruka. 2001. Local conjunction and extending sympathy theory: OCP effects in Yucatec Maya. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 231–260. Cambridge: Cambridge University Press.
Fukazawa, Haruka & Viola Miglio. 1998. Restricting conjunction to constraint families. Proceedings of the Western Conference on Linguistics 9. 102–117.
Hall, T. A. 2004. German glide formation and constraint conjunction. Unpublished ms., University of Indiana.
Hall, T. A. 2007. German glide formation and its theoretical consequences. The Linguistic Review 24. 1–31.
Hewitt, Mark & Megan J. Crowhurst. 1996. Conjunctive constraints and templates in Optimality Theory. Papers from the Annual Meeting of the North East Linguistic Society 26. 101–116.
Holt, D. Eric (ed.) 2003. Optimality Theory and language change. Dordrecht: Kluwer.
Idsardi, William J. 2006. A simple proof that Optimality Theory is computationally intractable. Linguistic Inquiry 37. 271–275.
Itô, Junko & Armin Mester. 1996. Rendaku 1: Constraint conjunction and the OCP. Paper presented at the Kobe Phonology Forum.
Itô, Junko & Armin Mester. 1998. Markedness and word structure: OCP effects in Japanese. Unpublished ms., University of California, Santa Cruz (ROA-255).
Itô, Junko & Armin Mester. 2003a. On the sources of opacity in OT: Coda processes in German. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 271–303. Cambridge: Cambridge University Press.
Itô, Junko & Armin Mester. 2003b. Japanese morphophonemics: Markedness and word structure. Cambridge, MA: MIT Press.
Iverson, Gregory K. & Joseph C. Salmons. 2007. Domains and directionality in the evolution of German final fortition. Phonology 24. 121–145.
Iverson, Gregory K. & Joseph Salmons. 2008. Germanic aspiration: Phonetic enhancement and language contact. Sprachwissenschaft 33. 257–278.
Jacobs, Haike. 2004. Rhythmic vowel deletion in OT: Syncope in Latin. Probus 16. 63–89.
Kager, René. 1994. Generalized alignment and morphological parsing. Unpublished ms., University of Utrecht.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Kawahara, Shigeto. 2003. On a certain type of hiatus resolution in Japanese. Phonological Studies 6. 11–20.
Kiparsky, Paul. 1973. “Elsewhere” in phonology. In Stephen R. Anderson & Paul Kiparsky (eds.) A Festschrift for Morris Halle, 93–106. New York: Holt, Rinehart & Winston.
Kirchner, Robert. 1993. Turkish vowel harmony and disharmony: An optimality theoretic analysis. Unpublished ms., University of California, Los Angeles (ROA-4).
Kirchner, Robert. 1996. Synchronic chain shifts in Optimality Theory. Linguistic Inquiry 27. 341–350.
Levelt, Clara C. & Ruben van de Vijver. 1998. Syllable types in cross-linguistic and developmental grammars. Paper presented at the 3rd Biannual Utrecht Phonology Workshop (ROA-265).
Levelt, Clara C., Niels O. Schiller & Willem J. Levelt. 1999. A developmental grammar for syllable structure in the production of child language. Brain and Language 68. 291–299.


Lin, Hui-Shan. 2000. Layered OCP, unparsed preposition, and local constraint conjunction in Mandarin tone sandhi. Paper presented at the 33rd International Conference on Sino-Tibetan Languages and Linguistics, Ramkhamhaeng University, Thailand.
Łubowicz, Anna. 2002. Derived environment effects in Optimality Theory. Lingua 112. 243–280.
Łubowicz, Anna. 2005. Locality of conjunction. Proceedings of the West Coast Conference on Formal Linguistics 24. 254–262.
McCarthy, John J. 2002. A thematic guide to Optimality Theory. Cambridge: Cambridge University Press.
Moreton, Elliott. 1999. Non-computable functions in Optimality Theory. Unpublished ms., University of Massachusetts, Amherst (ROA-364).
Moreton, Elliott & Paul Smolensky. 2002. Typological consequences of local constraint conjunction. Proceedings of the West Coast Conference on Formal Linguistics 21. 306–319.
Morris, Richard. 2002. Coda obstruents and local constraint conjunction in north-central Peninsular Spanish. In Teresa Satterfield, Christina Tortora & Diana Cresti (eds.) Current issues in Romance languages: Selected papers from the 29th Linguistic Symposium on Romance Languages, 207–224. Amsterdam & Philadelphia: John Benjamins.
Noonan, Michael. 1992. A grammar of Lango. Berlin & New York: Mouton de Gruyter.
Okello, Jenny. 1975. Some phonological and morphological processes in Lango. Ph.D. dissertation, Indiana University.
Orgun, Cemil Orhan & Ronald L. Sprouse. 1999. From MParse to Control: Deriving ungrammaticality. Phonology 16. 191–224.
Parker, Steve. 2001. Non-optimal onsets in Chamicuro: An inventory maximized in coda position. Phonology 18. 361–386.
Poser, William J. 1989. The metrical foot in Diyari. Phonology 6. 117–148.
Prince, Alan. 2001. Invariance under re-ranking. Paper presented at the 20th West Coast Conference on Formal Linguistics, University of Southern California.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Reynolds, William. 1994. Variation and phonological theory. Ph.D. dissertation, University of Pennsylvania.
Smolensky, Paul. 1993. Harmony, markedness, and phonological activity. Paper presented at the Rutgers Optimality Workshop 1, Rutgers University (ROA-87).
Smolensky, Paul. 1995. On the structure of the constraint component Con of UG (ROA-86).
Smolensky, Paul. 1997. Constraint interaction in generative grammar II: Local conjunction, or random rules in universal grammar. Paper presented at the Hopkins Optimality Theory Workshop/University of Maryland Mayfest 1997.
Smolensky, Paul. 2006. Optimality in phonology II: Harmonic completeness, local constraint conjunction, and feature domain markedness. In Paul Smolensky & Géraldine Legendre (eds.) The harmonic mind: From neural computation to optimality-theoretic grammar, vol. 2: Linguistic and philosophical implications, 27–160. Cambridge, MA: MIT Press.
Spaelti, Philip. 1997. Dimensions of variation in multi-pattern reduplication. Ph.D. dissertation, University of California, Santa Cruz.
Walker, Rachel. 2000. Nasal reduplication in Mbe affixation. Phonology 17. 65–115.
Wolf, Matthew. 2007. What constraint connectives should be permitted in OT? University of Massachusetts Occasional Papers in Linguistics 36. 151–179.
Zhang, Jie. 2007. Constraint weighting and constraint domination: A formal comparison. Phonology 24. 433–459.
Zoll, Cheryl. 1997. Conflicting directionality. Phonology 14. 263–286.
Zoll, Cheryl. 1998. Positional asymmetries and licensing. Unpublished ms., MIT (ROA-282).

63 Markedness and Faithfulness Constraints

Paul de Lacy

1 Introduction

Objects and mechanisms called “constraints” have featured in many theories of the phonological and syntactic modules. However, the explicit bifurcation into “markedness and faithfulness” constraints is specifically found in Optimality Theory (OT; Prince and Smolensky 1993) and its developments (especially McCarthy and Prince 1999), as well as in theories based on OT (Stochastic OT: Boersma and Hayes 2001; Targeted Constraint Theory: Wilson 2001; OT with Candidate Chains: McCarthy 2007; Stratal OT: Bermúdez-Otero, forthcoming; Kiparsky, forthcoming). So this chapter focuses on the things called “constraints” in OT (specifically the “classical OT” of Prince and Smolensky 1993 and McCarthy and Prince 1999). In particular, it focuses on OT constraints in the phonological module; there are also OT theories of the syntactic module and OT theories of morphology – they will not be discussed here. This chapter’s aim is to examine the basic syntax and semantics of constraints. On the syntax side: What is the form of constraints? What is the “constraint construction language”? On the semantics side: How are constraints “interpreted” – i.e. how are constraints used to assess a candidate’s violation marks? This chapter focuses on the basics of constraints, so it does not aspire to identify every constraint theory or list every constraint and constraint generator (§4.2) that has been proposed; for that, see the ongoing ConCat project.1 An OT constraint is commonly treated as a function that takes a candidate and returns “violation marks” (see Prince and Smolensky 1993). Violation marks are discrete elements; they are usually written as a string of asterisks with one asterisk per unique element (but violation marks in their formal implementation are not necessarily a string). For example, a constraint *Dorsal returns one violation mark for each instance of the representational element [dorsal] in an output representation (constraint names are usually written in small capitals). So for a candidate that includes an output representation [pakak], *Dorsal returns **, because there are two [dorsal] features in the form (one for each [k]). “Candidates” are sets of 1

Available at http://concat.wiki.xs4all.nl.

The Blackwell Companion to Phonology. Edited by Marc van Oostendorp, Colin J. Ewen, Elizabeth Hume, and Keren Rice. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd. DOI: 10.1002/9781444335262.wbctp0063

2

Paul de Lacy

forms including at least an output and input representation, but probably many other related representations as well as relations between them (§3.3; Prince and Smolensky 1993). Constraints fit into the overall phonological system thus (following Prince and Smolensky 2004). An input is drawn from the lexicon; the input consists of phonological material with morphological and syntactic annotation/structure. A generation mechanism (GEN) produces many (perhaps an infinite number of) candidates. One or more candidates is selected from the array; the selection process (EVAL) involves constraints generating violation marks for candidates and an algorithm that uses the violation marks and other factors to determine the winning candidate(s). EVAL refers to “ranking” – a total order on constraints. Ranking does not influence how violation marks are calculated; however, ranking is crucial in discovering the winning candidate. The phonetic module then takes (one of) the winner(s) and realizes it (i.e. converts it into articulatory movements that produce speech sound). In short, constraints are just one part of many mechanisms that work together to determine the winning output representation. Constraints do not determine winners on their own. The term “markedness and faithfulness” as applied to constraints was coined in Prince and Smolensky (1993: §1.4). “Markedness constraints” return violation marks based solely on the form of the output representation. *Dorsal above is a markedness constraint. Unfortunately, the term “markedness” can cause confusion because it seems to imply an inherent connection to theories of markedness (see chapter 4: markedness). However, theories of markedness are expressed in OT via both markedness and faithfulness constraints (e.g. de Lacy 2006). The term “output constraint” is therefore less confusing than “markedness constraint,” and I will use it here. However, the phrase “markedness constraint” is in such widespread use that I fear “output constraint” will never catch on, in spite of my efforts in this chapter (to add to the confusion there are constraints called “output–output constraints,” which are actually faithfulness constraints; see §3.3). As originally used, “faithfulness constraints” are those that return violation marks based on comparison of the output representation with the input (Prince and Smolensky 1993: §1.2; though strictly speaking the output includes the input, as discussed in §3.1). Later work, especially McCarthy and Prince (1999), broadens the term to include any constraint that assigned violations by comparing any pair of inter- or intra-representational forms (e.g. the base of reduplication and the reduplicant (McCarthy and Prince 1999); the derivational base and the output (Benua 1997); a designated form and the output (McCarthy 1999)). The majority of work in OT now uses McCarthy and Prince’s Correspondence Theory, so in these cases it is accurate to refer to “correspondence constraints” – i.e. those that use correspondence relations in their calculation of violation marks. However, non-correspondence faithfulness constraints exist in some versions of OT (e.g. containment theories – §3.1), so “faithfulness constraints” is still a usefully broad term. This chapter focuses on a few important issues about constraints. §2 discusses constraint form in output constraints: What are constraints made of, and how do they return violation marks? §3 deals with faithfulness and correspondence constraints. 
§4 discusses the source of constraints – whether they are innate and how/whether they relate to external sources.
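To make the division of labor described above concrete (constraints only emit violation marks, while EVAL uses the ranking to compare candidates), here is a minimal sketch in Python. The candidate forms, constraint names, and violation counts are all invented for illustration, and simple lexicographic comparison of violation profiles stands in for EVAL's actual mechanisms.

    # Toy EVAL: candidates carry violation counts per constraint; comparison
    # is lexicographic by rank. All names and numbers are invented examples.

    def eval_winner(candidates, ranking):
        """candidates: {form: {constraint_name: violation_count}};
        ranking: constraint names, highest-ranked first."""
        def profile(form):
            return [candidates[form].get(c, 0) for c in ranking]
        return min(candidates, key=profile)

    # hypothetical candidates for an input /dak/
    candidates = {
        'dak': {'*Dorsal': 1},              # faithful candidate
        'dat': {'IO-Ident[Place]': 1},      # unfaithful repair
    }
    assert eval_winner(candidates, ['*Dorsal', 'IO-Ident[Place]']) == 'dat'
    assert eval_winner(candidates, ['IO-Ident[Place]', '*Dorsal']) == 'dak'

Note that reranking the same two constraints flips the winner even though the violation marks themselves never change.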

1.1 "Constraint" in other theories

The term "constraint" is used in many different ways in many different theories. In some rule-based theories there are objects called "constraints" or "filters" that – if their conditions are met – doom the derivation or block rules from applying. If an input I undergoes a series of rules to create a representation z and there is a filter *z, the derivation is doomed (i.e. input I has no corresponding output). See Chomsky and Lasnik (1977) for early examples in syntax, and Itô (1986) for conditions on syllable structure. The filter/condition concept does not have a direct analogue in OT – OT constraints assign violations; EVAL is the source of (relative) doom for candidates.

Occasionally, "constraint" is used to refer to the side-effects of conditions on representational primitives and to restrictions on the algorithm that generates output candidates (GEN). Obviously, output representations can only be constructed out of objects and relations that are available (i.e. prosodic nodes, features, planes, tiers; precedence, dominance). For example, there is no candidate in which a node can both precede and follow another node, because the phonological precedence relation is asymmetric (i.e. if a and b are on the same tier and a < b (a precedes b), then b < a cannot hold); see chapter 34: precedence relations in phonology for a discussion of precedence. One could informally call the asymmetry of phonological precedence a "constraint," but it is not an OT constraint.

2 Output constraints

A constraint takes a candidate and returns violation marks.2 For example, the constraint *Dorsal returns one violation mark for [ka], two for [kax], and so on. A constraint's violation assignment can be described in informal terms: e.g. "*Dorsal returns a violation mark for each [dorsal] segment." This informal description is useful, but far from being a formal definition.

A formal definition of a constraint must be couched in a Constraint Definition Language (CDL). A comprehensive CDL specifies representational primitives and relations, and restrictions on their combination in constraints.

The same distinction can be made for rule-based theories like Chomsky and Halle (1968; SPE). Suppose we observe a rule R that takes an input representation /ak/ and converts it to the representation [aʔ]. R could be described as "change /k/ into [ʔ] word-finally." However, R must be defined in terms of a Rule Definition Language (RDL); an RDL is the elements and relations that can be used to construct a rule, and limits on their combination. Most of Chomsky and Halle (1968) is devoted to developing such an RDL; R is defined as /k/ → [ʔ] / __ [−seg, −FB, +WB] (the rightmost cluster of features is a word boundary).
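The informal description of *Dorsal can even be rendered as a toy counting function. In the sketch below, the string encoding of candidates and the membership of the dorsal set are simplifying assumptions for illustration, not part of any proposed CDL.

    # Toy rendering of the informal description of *Dorsal; the segment
    # inventory and string encoding are assumptions made for illustration.
    DORSALS = {'k', 'g', 'x', 'ŋ'}

    def star_dorsal(output):
        """Return one violation mark per [dorsal] segment in the output."""
        return sum(1 for seg in output if seg in DORSALS)

    assert star_dorsal('ka') == 1
    assert star_dorsal('kax') == 2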

2 It is common to see comments like "constraints impose a partial order on the candidate set" (Samek-Lodovici and Prince 1999: 9), "this constraint dooms the candidate," and so on. These comments are meant as a quick way of describing the complex process of determining order among candidates; the process involves constraints, VR (§2.2), ranking, and EVAL's mechanisms. Constraints are merely one part of the process of establishing the winning candidate.

Some work in OT uses informal descriptions to talk about constraints. Formal objects (representational elements like prosodic nodes, features, etc.) are often mentioned in the informal descriptions, but the constraints are nevertheless not defined in terms of an overarching CDL. There have been attempts to develop a comprehensive CDL (Eisner 1997; Potts and Pullum 2002), but most work has either focused on particular groups of constraints, or treated constraints as "black boxes." To explain, it is possible to fruitfully investigate some (perhaps many) aspects of OT theories without knowing the precise definition of constraints, but only knowing which violation marks constraints assign in which situations. After all, the winning candidate is not directly determined by constraints, but by their violation marks. So, if the violation marks are known, then the winner can be determined; the exact means by which the violation marks came to be assigned is often not crucial. For this reason I believe it is fair to say that there has been less focus on developing a CDL in OT theories than on developing RDLs in rule-based theories like Chomsky and Halle (1968).

Even so, there have been detailed proposals of CDLs for groups of OT constraints (see §4 below), and some proposals about aspects of the general CDL. In my own (joint) work, for example, Bye and de Lacy (2000) propose restrictions on how constraints can refer to constituent edges; de Lacy (2006) proposes that constraints cannot include both prosodic nodes and segmental features in their definitions.

An explicit CDL is both useful and ultimately essential to a complete optimality theory. A CDL can tell us which constraint formulations are valid, and thus set a bound on which constraints can and cannot exist. The following subsections will discuss a CDL. There is a strong uniformity in constraint descriptions and definitions that suggests broad agreement about certain aspects of the CDL. For expository reasons, I will start with the CDL for output constraints. As a word of warning, due to space limitations I will discuss only a few CDLs, and focus on the basic components of just one. The CDL discussed below deals with a broad set of output constraints that I believe every phonologist would accept as possible constraints. I will not attempt to comprehensively discuss all extant CDLs or aspects of CDLs, but instead focus on basic CDL properties.

2.1 Output constraints: Representation

There is not a lot of explicit discussion about how constraints work in OT. It seems to me that the majority of work in OT treats constraints as functions from candidates to violation marks. Output constraints inspect the output representation in a candidate, and return a string of violation marks. So *Dorsal returns one violation for candidates with an output representation [ka], two for [kag], three for [gaxikan], and so on, leading to the description: "Assign one violation for each dorsal segment."

We are seeking a CDL in which *Dorsal can be formulated. One issue to address is the CDL's representational primitives. For example, *Dorsal might be cast in terms of a representational theory in which velar consonants are [+back, +high] (Chomsky and Halle 1968: 303), or one in which a Place node dominates a dorsal node which dominates [+back] and [+high] terminal nodes, or one in which an oral cavity node dominates a C-place node which dominates a dorsal terminal node (see Hall 2007 for an overview of feature theories). There are many extant representational theories, but for the purposes of this chapter I will adopt Clements and Hume's (1995) model (there is no widespread consensus on which representational theory is correct, though; even less now than in 1995, I suspect; see also chapter 27: the organization of features).

The CDL also must specify how representational elements can be combined in constraints. For example, a relatively lax CDL could allow several different versions of *Dorsal, as in (1). I use • for "root node" – the lowest node that dominates all segmental features. The symbol ↓ stands for the immediate dominance relation: a↓b means "a immediately dominates b" (i.e. a dominates b and there is no c such that a dominates c and c dominates b); a↓b↓c means "a immediately dominates b and b immediately dominates c." Dominance is an asymmetric, transitive relation that holds between nodes on different autosegmental tiers. The descriptions are given in (1); the violation marks the constraint assigns are shown for [k], [kː] (assuming a one-root geminate theory), and [ŋk] (assuming obligatory feature sharing for adjacent elements; Schein and Steriade 1986).

(1) *Dorsal versions

    Return a violation for . . .                                   [k]    [kː]    [ŋk]
    a.  each distinct root node • s.t. •↓CPlace↓[dorsal]            *      *      **
    b.  each distinct [dorsal] feature                              *      *      *
    c.  each distinct prosodic node π s.t. π↓•↓CPlace↓[dorsal]      *      **     **3

3 (1c) could return one violation if [ŋ] and [k] were both dominated by the same μ or σ node (e.g. as in [oiŋk]σ).

I do not know which of the constraints in (1) exist. Suppose it turns out that we need only (1a). The non-existence of (1b) and (1c) could be achieved by placing restrictions on the CDL such that all output constraints must refer to a root node in their definition and no constraint may mention both prosodic nodes and segmental features. Every extant theory of representation provides a CDL with a great deal of potentially expressive power. So, it is highly likely that any CDL theory will have to incorporate extensive limitations on permissible representations in constraints; it is probably too hopeful that all limits on constraints will be a side-effect of inherent limitations on representations (see §4).

The CDL must also specify how the representation is used to assess violation marks. For example, suppose a constraint mentions the structure [•↓CPlace↓[dorsal]]. How is this structure used to assess violation marks relative to some candidate? In the constraint description (1a), I assumed that the constraint searches the candidate's output representation and one violation mark is returned for each distinct structure that has the form [•↓CPlace↓[dorsal]]. However, could there be a constraint which returns one violation regardless of how many [•↓CPlace↓[dorsal]] structures there were? Such a constraint would return * for [ka], [kax], and [kaxga]. Let's turn to this issue now.

2.2 Function or representation?

I asserted without comment above that a constraint is a function: i.e. it takes a candidate as an input and returns violation marks. Conceiving of constraints as
independent functions opens up the possibility that different constraints could assign violation marks in very different ways. For example, the Align schema from McCarthy and Prince (1993) takes four arguments and assesses violation marks with respect to designated prosodic constituents. The violation marks from Align(Ft, R; PrWd, R) are the sum of the number of syllables between the right edge of each foot and the right edge of the PrWd. So, the constraint returns nine violations for [(σσ)(σσ)(σσ)σ] (see McCarthy and Prince 1993: 15–16 for details; to understand this constraint see especially 1993: 10 and definitions 14–16). It is clear that the way in which this Align constraint assesses violation marks is quite different from the way in which violations of *Dorsal are assessed (cf. McCarthy 2003).

If constraints are self-contained algorithms that return violation marks and the CDL is sufficiently powerful, we might see pairs of constraints that refer to the same representational structure but differ in how they calculate violation marks. For example, there could be a pair of constraints *Dorsal and *∃Dorsal, where *∃Dorsal returned only one violation mark regardless of how many [dorsal] features there are in a form, as long as there is at least one (see Wolf 2007a for relevant discussion). The two constraints refer to the same structure – [•↓CPlace↓[dorsal]] – and differ only in terms of how that structure is used to assess violation marks from a candidate. A pair of constraints like this – i.e. that refer to the same representational structure but differ only in their quantification – would be strong evidence that each constraint is an independent algorithm that assigns violations (or at least that there are several groups of constraints that differ in how they assign violations).

However, my impression is that the constraints-as-functions approach is too powerful. The output constraints that have been proposed in the phonological literature are often very similar: they essentially have the form *R, where R is a representation; one violation mark is assigned for each distinct occurrence of R in a candidate's output representation. *Dorsal is an example of such a constraint. The apparent uniformity in how constraints assess violations suggests that it is worthwhile considering an alternative theory of constraints in which constraints are not functions but solely representations. In such an approach, there would be a single algorithm, the Violation Assigner (VR). VR takes as its input an output constraint and a candidate and returns violation marks. VR works the same way for all constraints, thus imposing uniformity in how violation marks are assigned. So the constraint *Dorsal is really a representation [•↓CPlace↓[dorsal]]; *Dorsal itself does not assess violation marks.

There are many ways to formulate a VR algorithm that does the job described above. For example, one could take the set of all sub-representations of a candidate's output representation and compare each member of the set to a constraint representation; the number of violation marks returned for a particular constraint would be the number of sub-representations that were equivalent to the constraint's representation. I will instead discuss a somewhat more efficient algorithm that does a similar job.4

4 See www.pauldelacy.net/VR for software which allows the user to see the VR below in action and try out various constraints and representations.
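Returning to the Align example above, its gradient violation assessment can be sketched as follows. The list encoding of prosodic words (feet as lists of syllables, unfooted syllables as bare strings) is an assumption made purely for illustration.

    def align_ft_r_prwd_r(prwd):
        """Align(Ft, R; PrWd, R), gradiently assessed: for each foot, count
        the syllables between that foot's right edge and the PrWd's right
        edge, and sum the counts (cf. McCarthy and Prince 1993)."""
        total = sum(len(u) if isinstance(u, list) else 1 for u in prwd)
        marks, pos = 0, 0
        for u in prwd:
            pos += len(u) if isinstance(u, list) else 1
            if isinstance(u, list):        # a foot's right edge lies here
                marks += total - pos       # syllables to the PrWd's right edge
        return marks

    # [(σσ)(σσ)(σσ)σ]: the feet end 5, 3, and 1 syllables from the right edge
    assert align_ft_r_prwd_r([['σ', 'σ'], ['σ', 'σ'], ['σ', 'σ'], 'σ']) == 9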

(2) Violation Assigner (VR): Outline

    Inputs:  – a constraint C;
             – a candidate that includes an output representation R.
    Output:  – a set of violation marks (a set of unique identifiers)
               indexed to C and the candidate.

    a. Take a node c in C.
    b. For each node r in R: if r is the same type and value as c, then
       check whether r is connected to a structure equivalent to C; if it
       is, return a violation mark, and continue to the next r.
    c. Add no other violation marks to the result.

For example, take a constraint *[+voice], which consists of one [voice] node with a value "+." Each node in the candidate's output representation is checked. If a node is a [voice] node and has the value "+," a violation mark is added to the result.

The VR might seem straightforward, but it has interesting complexities, particularly in the procedure that checks whether a node is "connected to a structure equivalent to [the constraint] C." Take a more complex constraint, one that involves more than one node: e.g. *σμμ, "Don't have bimoraic syllables." *σμμ has three nodes (σ, μ1, μ2) and three relations (σ↓μ1, σ↓μ2, μ1 < μ2). A node is selected from the constraint (it doesn't matter which one) – let's say σ in this case. The output representation is searched for σ nodes. When one is found, the next step is to check whether this σ is connected to a structure equivalent to the constraint. The implementation of this checking procedure is that σ is checked to see if it is in any of the relations mentioned by the constraint: i.e. does the particular σ in the representation dominate two different μ nodes? If it does, then the μ nodes that are dominated by σ are checked to see whether their relations have equivalents in the constraint. After nodes and relations are found in the output representation that are equivalent to those in the constraint, a violation mark is returned.

The procedure that checks whether a node is connected to a structure equivalent to C means that constraints cannot be unconnected. For example, suppose that there is a constraint *μ1, μ2 which is violated if a word contains two (not necessarily adjacent) moras; these moras are unconnected in this constraint – there is no precedence relation between them, nor is there a node that dominates them both. VR can evaluate such a constraint, but the constraint will never return any violation marks. VR checks relations between nodes: i.e. VR will search for a μ1 node, and then check its relations. Since μ1 has no connection to μ2 via either precedence or dominance, VR will never find any structure in any R that is equivalent to the structure described by the constraint.

So the VR algorithm itself, through how it compares the constraint's structure to structures in the output representation, imposes a weak connectedness requirement on constraints. For a constraint C ever to return a violation mark, every node in C must be connected to every other node. Nodes x and y are "connected" here if it is possible to trace a direct route through precedence and dominance relations from x to y.
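The following minimal sketch implements the brute-force variant mentioned above (comparing the constraint's structure against sub-structures of the output representation), with the weak connectedness requirement imposed as an explicit check rather than falling out of the relation-tracing procedure. The Node/Graph encoding of autosegmental structure is invented for illustration, not a proposal about phonological representations.

    from itertools import permutations

    class Node:
        def __init__(self, kind, value=None):
            self.kind, self.value = kind, value   # e.g. kind='voice', value='+'

    class Graph:
        """Nodes plus dominance (a ↓ b) and precedence (a < b) relations."""
        def __init__(self, nodes, dom=(), prec=()):
            self.nodes, self.dom, self.prec = list(nodes), set(dom), set(prec)

    def connected(g):
        """Weak connectedness: every node is reachable from every other node
        via dominance/precedence links, ignoring direction."""
        if not g.nodes:
            return True
        edges = g.dom | g.prec
        seen, frontier = {g.nodes[0]}, [g.nodes[0]]
        while frontier:
            n = frontier.pop()
            for a, b in edges:
                for nxt in ((b,) if a is n else (a,) if b is n else ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(nxt)
        return len(seen) == len(g.nodes)

    def vr(constraint, output):
        """One violation mark per instantiation (injective pairing) of the
        constraint's structure in the output representation. Unconnected
        constraints are inert, mirroring the side-effect described above."""
        if not connected(constraint):
            return 0
        marks = 0
        for image in permutations(output.nodes, len(constraint.nodes)):
            pairing = dict(zip(constraint.nodes, image))
            if not all(c.kind == r.kind and c.value == r.value
                       for c, r in pairing.items()):
                continue
            dom_ok = all((pairing[a], pairing[b]) in output.dom
                         for a, b in constraint.dom)
            prec_ok = all((pairing[a], pairing[b]) in output.prec
                          for a, b in constraint.prec)
            if dom_ok and prec_ok:
                marks += 1
        return marks

    # *σμμ: a syllable node dominating two ordered mora nodes
    s, m1, m2 = Node('syll'), Node('mora'), Node('mora')
    star_smm = Graph([s, m1, m2], dom={(s, m1), (s, m2)}, prec={(m1, m2)})

    # output representation: one bimoraic syllable
    S, A, B = Node('syll'), Node('mora'), Node('mora')
    out = Graph([S, A, B], dom={(S, A), (S, B)}, prec={(A, B)})
    assert vr(star_smm, out) == 1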

The connectedness requirement that results from VR is weak, because a much stronger requirement could be imagined and implemented: i.e. in a constraint, every pair of nodes on the same tier must be in a precedence relation, and every pair of nodes on different tiers must be in a dominance relation. The difference between weak and strong connectedness can be seen in a less connected version of *σμμ: one where σ↓μ1 and σ↓μ2, but there is no precedence relation between μ1 and μ2. Such a constraint would not be evaluated by a VR that imposes strong connectedness, because it has two nodes on the same tier (μ1, μ2) that are not in a precedence relation. However, it is perfectly acceptable in the weak-connectedness VR described above, because every node is connected to every other node. Weak connectedness allows constraints of the form *[x . . . x]D, where there are two nodes of type x within a particular domain D (e.g. Suzuki 1998; see also the discussion of local conjunction in §4.2). The connectedness side-effect of VR is desirable – as far as I am aware, no one has proposed constraints that have completely unconnected elements.

The larger point here is that the nature of the algorithm that assigns violation marks is crucial in any theory of constraints. The algorithm not only determines how violation marks are assigned, but whether particular constraints will ever assign violation marks (i.e. it effectively puts restrictions on constraint form, just as VR means that constraints must contain connected representations).

If constraints are representations and there is a single VR, there should be broad uniformity in the way that violation marks are assigned. For example, the constraint [•↓CPlace↓[dorsal]] will assign violations for each occurrence of its representation in the candidate's output representation. In contrast, there is no way to formulate a constraint like *∃Dorsal: if the constraint consists of the representation [•↓CPlace↓[dorsal]], then the VR will return a violation for each occurrence of [•↓CPlace↓[dorsal]]; it cannot be limited to assigning one violation regardless of the number of occurrences of [•↓CPlace↓[dorsal]]. So the VR theory means that output constraints should all assign violations in fundamentally the same way, while the constraints-as-functions theory allows for significant differences. It is even possible that there is a middle ground: there could be several violation assignment algorithms, with VR being just one of them. With several VRs, we would expect to see uniformity in how violations are assigned, but only within particular groups of constraints. Which view is correct? How much uniformity in violation mark assignment is there?
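Building on the toy Node/Graph encoding in the sketch above, the hypothetical strong requirement could be stated as a well-formedness check on constraints themselves; treating a node's kind as its tier is a further simplifying assumption.

    from itertools import combinations

    def strongly_connected(g):
        """Hypothetical strong connectedness: every same-tier pair of nodes
        stands in a precedence relation, and every cross-tier pair in a
        dominance relation (in one direction or the other)."""
        for a, b in combinations(g.nodes, 2):
            rels = g.prec if a.kind == b.kind else g.dom
            if (a, b) not in rels and (b, a) not in rels:
                return False
        return True

    # *σμμ without μ1 < μ2 is weakly but not strongly connected
    s, m1, m2 = Node('syll'), Node('mora'), Node('mora')
    lax = Graph([s, m1, m2], dom={(s, m1), (s, m2)})
    assert connected(lax) and not strongly_connected(lax)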

2.3 Regularities and irregularities in violation assignment

There is a great deal of regularity in the way that violation marks are assessed in output constraints. In fact, there is so much regularity that I am sure there would be no confusion among phonologists about how a constraint like *œ works (even though I concocted it just now): it would return a violation mark for each [œ] in a candidate's output representation.

Struijke (2000) proposes faithfulness constraints that are assessed existentially: on this approach, ∃-Ident[nasal] does not return any violations, because for every input segment (i.e. /ã/) there is some pair that preserves the [nasal] value (i.e. ⟨ã, ã⟩). Alderete (2001) argues that at least some faithfulness constraints have "antifaithfulness" counterparts. For example, OO-Ident[voice] returns a violation for each corresponding segment that has different values of [voice] (for the OO- part, see §3.3). However, ¬OO-Ident[voice] returns a violation if there is no pair of correspondents that disagrees in [voice] values. The constraints differ in terms of how they assess violations rather than the representations they refer to. If Struijke's (2000) and Alderete's (2001) proposals are correct, they pose a serious challenge to an approach that seeks to find a single F-VR (faithfulness Violation Assigner) algorithm. Their proposals mean that there are sets of constraints that differ only in terms of the procedure of violation mark assignment, not in the representation and relations they refer to.
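The contrast between universally and existentially assessed Ident can be sketched as follows. The pairs-based encoding of correspondence for a single input segment is invented for illustration; this is a sketch of the general idea, not Struijke's (2000) formalization.

    def ident_nasal(pairs):
        """Standard Ident[nasal]: one mark per input/output correspondent
        pair that disagrees in nasality (booleans)."""
        return sum(1 for i, o in pairs if i != o)

    def exist_ident_nasal(pairs):
        """Existentially assessed Ident[nasal], for one input segment: no
        marks as long as some correspondent pair agrees in nasality."""
        return 0 if any(i == o for i, o in pairs) else 1

    # input /ã/ with two output correspondents, [ã] and [a] (e.g. under
    # reduplication): nasality encoded as booleans
    pairs = [(True, True), (True, False)]
    assert ident_nasal(pairs) == 1        # the <ã, a> pair is penalized
    assert exist_ident_nasal(pairs) == 0  # <ã, ã> preserves [nasal]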

3.3 Developments in correspondence constraints

The theory of correspondence constraints has been reduced, altered, and extended since McCarthy and Prince (1993). For example, Keer (1999) argues that Uniformity does not exist, with the effect that coalescence is obligatory in certain situations. Similarly, a number of authors have argued that Dep does not exist (e.g. Bernhardt and Stemberger 1998). They observe that output constraints do a similar job to IO-Dep; output constraints (usually) prefer less representational structure over more, and so does IO-Dep. However, Gouskova (2007) argues that the effects of IO-Dep can be distinguished from structural constraints. Most work has focused on extending correspondence to new nodes (§3.3.1) and dimensions (§3.3.2).

3.3.1 Loci of correspondence

McCarthy and Prince (1993: 14) proposed that correspondence holds between segments. I have adopted a particular theory of representation (autosegmental theory) in this chapter that does not provide an easy way to define "segment"; in this theory, the most natural understanding of McCarthy and Prince's proposal is to say that correspondence holds between "root nodes." McCarthy and Prince also suggest that correspondence could hold between other nodes: tonal nodes, prosodic nodes, and terminal and non-terminal feature nodes.

Myers (1997) develops this idea for tonal nodes. For example, IO-Max-T requires that every input tone node correspond to some output tone node. The most significant effect of the proposal is that tones can survive even when their segmental sponsors are deleted. See Yip (2007) for an introduction to tone constraints; also chapter 45: the representation of tone.

McCarthy (2000) argues for a variety of constraints that require (at least some) prosodic nodes to be in correspondence. Since (most) prosodic structure is apparently absent in inputs, evidence for correspondence between syllables and feet comes from identity across other forms (e.g. base–reduplicant, base–derivative; see §3.3.2).

A widely discussed extension has been feature correspondence. Lombardi (2001) argues that feature-to-feature correspondence is essential in explaining how place features and voice features differ in their behavior. However, coalescence can be achieved without feature correspondence (e.g. Pater 1999; de Lacy 2002: chapter 8), and a general concern with Max-feature approaches is the lack of observed feature autonomy. In several theories, features do not seem to have the same kind of independence as tones: while tones can survive if their sponsors are deleted, there may not be similar effects for features (featural morphemes are special cases, however; see Wolf 2007b: §2.2 and chapter 82: featural affixes for discussion).

3.3.2 Dimensions of correspondence

The discussion above has focused on correspondence between inputs and outputs. However, there have been many proposals that extend the reach of correspondence. The proposals fall into two categories: intra-representational correspondence and inter-representational correspondence.

McCarthy and Prince (1999) propose that intra-representational correspondence is found in reduplication (see also chapter 100: reduplication). A reduplicant morpheme has no input content, but its output segments can correspond to certain other output segments (the reduplicant's "base"). For example, one of the reduplicated forms of Maori [paɾau] 'baffled' is [paɾapaɾau]. The reduplicated segments correspond to other output segments thus: [p₁a₂ɾ₃a₄p₁a₂ɾ₃a₄u₅]. McCarthy and Prince (1993) argue that constraints on Base–Reduplicant (BR) correspondence have the same form as constraints on IO correspondence. BR-Max requires every base element to have some correspondent in the reduplicant
(violated once in [p₁a₂ɾ₃a₄p₁a₂ɾ₃a₄u₅]), BR-Dep requires every reduplicant segment to have a correspondent in the base, and BR-Ident[F] regulates featural identity between base and reduplicant.

What is surprising about the extension of correspondence to the Base–Reduplicant dimension is that there is essentially one formal mechanism that accounts for both the input–output relation and reduplication. Other theories of reduplication conceive of the phenomenon as involving templates or a type of long-distance assimilation, perhaps through autosegmental spreading (see e.g. McCarthy and Prince 1986). Urbanczyk (2007) gives an overview of BR correspondence and reduplication.

Other intra-representational correspondence relations have been proposed. Kitto and de Lacy (1999) argue that epenthetic segments can correspond to other output segments, resulting in "copy epenthesis": e.g. Winnebago [boːpĩnĩs] 'hit at random' (Miner 1992). The reason for proposing correspondence here is "overapplication": nasal vowels only occur after nasal consonants in Winnebago, except when epenthetic vowels copy a post-nasal vowel, as above. Such "overapplication" is expected with correspondence, since featural identity of corresponding elements can trump phonotactic restrictions; it is also found in reduplication and other types of correspondence (see Urbanczyk 2007 for discussion of under- and overapplication in reduplication, and Benua 1997 for output–output correspondence).

Hansson (2001) and Rose and Walker (2004) go further in arguing that any output segment can correspond to another output segment. The effect is seen in long-distance agreement. For example, in Chumash sibilants agree in anteriority within a word: /s-ilakʃ/ → [ʃilakʃ] 'it is soft'; cf. [s-ixut] 'it burns'.

There have been many proposals for inter-representational correspondence, too. Benua (1997) proposes that segments in the output representation can correspond to segments in the "trans-derivational base" of that output. The trans-derivational base of a word is basically the word minus its structurally outermost affix. So, the base of original, i.e. [[origin]al], is origin. Original itself is the base of originality, and origin is also the base of originate. OO-correspondence can be used to explain why some morphologically complex words do not follow expected phonological patterns, but instead remain similar to their base. For example, in my idiolect (and in many other English-based idiolects) the head foot avoids final syllables in nouns, but otherwise is drawn to the right edge of a PrWd: [ɨd(ˈmɨsɨ)bɫ̩] admissible, [ɨdmɨsɨ(ˈbɨlɨ)ɾi] admissibility ([ɨ] can be stressed in my dialect, and /l/ → [ɫ̩] outside onsets). However, with some affixes the foot does not get drawn rightward as expected: [ɨd(ˈmɨsɨ)bɫ̩nɨs] admissibleness, *[ɨdmɨ(ˈsɨbɫ̩)nɨs]. When -ness appears in a word, it subjects the candidate to an OO-faithfulness requirement that has the effect of forcing the correspondent of the base's head syllable to also be a head. So *[ɨdmɨ(ˈsɨbɫ̩)nɨs] loses to [ɨd(ˈmɨsɨ)bɫ̩nɨs], because the corresponding head syllable in the base [ɨd(ˈmɨsɨ)bɫ̩] is [mɨ], not [sɨ].

Further work on interword relationships has argued that candidates should consist of entire output paradigms of related word forms. See McCarthy (2005) for references and discussion.
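Returning to BR-Max and the Maori form discussed above, its assessment can be sketched over indexed correspondence sets; the integer-index encoding is an assumption made for illustration.

    def br_max(base_indices, red_indices):
        """BR-Max: one violation mark per base element that lacks a
        correspondent in the reduplicant."""
        return sum(1 for i in base_indices if i not in red_indices)

    # [p1a2ɾ3a4 - p1a2ɾ3a4u5]: the base's u5 has no reduplicant correspondent
    base = {1, 2, 3, 4, 5}
    reduplicant = {1, 2, 3, 4}
    assert br_max(base, reduplicant) == 1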
Yet other work has proposed correspondence relations between the output representation and another output representation that is identified by a special selection mechanism, with the aim of accounting for phonological opacity (e.g. McCarthy 1999 on "sympathy").

4 Possible and impossible constraints

Even after defining the CDL's representational elements, relations, and algorithm(s) assigning violation marks, there remains the question of which constraints actually exist. I wish I could list all the phonological constraints that exist in the human brain here. Unfortunately, there is no agreed-upon list. Many constraints have been proposed, and many algorithms too. Given the rapid changes in phonological theories and the variety of constraint proposals, it is more useful to discuss general intrinsic and extrinsic restrictions on theories of constraints.

Every CDL imposes intrinsic limits on possible constraints. The nature of the elements and relations by which constraints are defined means that some imaginable constraints could not occur. For example, suppose the CDL has no disjunction operator. A constraint that assigns a violation to a segment if it is either [+voice] or [labial] is then not possible – it is impossible to formulate using the CDL's syntax. Similarly, the VR itself may impose "restrictions" on constraints in the sense that certain constraints might be well formed in the CDL, but not assign violation marks. In the VR discussed above, constraints that had unconnected representational elements would not assign violation marks; so, while such unconnected constraints could exist, they are effectively inert and will never be observed to have an effect on selecting a winning candidate. Other cases were discussed in §1.1 and §2.2.

It is also possible (even likely) that there are extrinsic limits on constraints. An extrinsic limit is a restriction on particular types of constraint even though the constraint would have a well-formed syntactic structure in the CDL. For example, suppose there was a CDL that made it possible to define a constraint that banned syllable onsets, yet such a constraint did not exist: an extrinsic limit would have to be responsible. The alternative is to suppose that there are almost no significant extrinsic limits: the set of constraints includes every constraint definable using the CDL (up to a certain level of complexity).

The issue of extrinsic limits is a very difficult one. The first issue addressed below is methodological: how can we tell whether there are extrinsic limits on constraints (§4.1)? §4.2 discusses where those extrinsic limits come from.

4.1 Evidence for extrinsic restrictions

The majority of OT theories and subtheories do propose many extrinsic restrictions on constraints, specified by the CDL (cf. discussion in Blevins 2004). The evidence comes from restrictions and requirements that cannot be attributed to non-cognitive mechanisms.

To explain, suppose we never observe a particular phonological property in any human language, like an epenthetic [k] (e.g. /iti/ never surfaces as [kiti] in any grammar). It is possible that the lack of [k]-epenthesis is due to constraints. For [k] to be epenthetic, there has to be a (set of) output constraint(s) that returns violation marks for every segment except [k]. Without such a constraint, epenthetic [k] won't occur (see e.g. de Lacy 2006: ch. 3). However, there are potentially non-CDL reasons why epenthetic [k] is never observed. Some other part of the phonological component could be responsible
(see §4.2). There is also luck – e.g. war, pestilence, or plague – which may have accidentally wiped out all speakers of languages with epenthetic [k]. After all, every theory of phonology predicts many tens of thousands of distinct phonological systems, and only a few thousand have existed and will ever exist.

For epenthetic [k] to be observed, it must also be learnable. Actuation of a phonological change comes about through learner misperception or misarticulation. So if epenthetic [k] cannot come about through such a situation, it won't be observed. Even if a sound change can be actuated easily, it will quickly disappear if it cannot be transmitted effectively. In this particular case, though, there is evidence that learners could misperceive vowel hiatus as involving a [k] (Kingston and de Lacy 2006).

So, suppose a phonological situation P never occurs. Suppose further that the lack of P cannot be ascribed to non-constraint grammatical processes or extraphonological mechanisms. In such a case, phonological extrinsic restrictions on constraints are responsible for the lack of P. For epenthetic [k], it is easy to come up with a set of constraints that penalize everything except [k] (e.g. *Labial, *Coronal, *Glottal), so the CDL must not permit this set of constraints (or at least, this set of constraints with free ranking). There are several other methods of determining that a particular phonological situation is due to constraints. See Kingston and de Lacy (2006: §3.3) and references cited there for discussion.

4.2 Origins and universality of constraints

If there are restrictions on possible constraints, where do those restrictions come from? There are fundamentally two different proposals: (a) innateness and (b) constraint-construction mechanisms that refer to phonology-external structures.

The innateness view is that constraints are hard-wired into the brain (i.e. part of our genetic make-up). The "hard-wired" view comes in two versions. One is that each constraint is specified independently. In this version, only those constraints that are hard-wired into the brain exist, so extrinsic limits on constraints boil down to genetics. The other version is that there are hard-wired algorithms that automatically generate constraints – "constraint generators" (sometimes called "schemas"). For example, there would be an "Ident[F]" constraint generator that produces constraints with the form D-Ident[F], where D is a pair of dimensions (input–output, base–reduplicant, etc.) and F is a subsegmental node. The generator is "complete" in that it would generate constraints for every D and every F (Green 1993). The constraint-construction algorithms determine which constraints exist.

The alternative is to propose mechanisms that are derived from phonology-external mechanisms, or at least can take phonology-external factors into account. A growing body of work argues that there are many algorithms that take phonetic factors like ease of articulation and perceptual distinctiveness into account in evaluating which phonological constraints to generate. In this view, limits on constraints are a combination of the inherent limits of the constraint-construction algorithm and the restrictions imposed by the phonology-external factors that those algorithms refer to.

For example, Hayes (1995) discusses phonological constraints on voiced stops. Phonetic voicing in stops is hard to maintain; the further back the stop is, the more
difficult it is to maintain voicing during the closure phase: it is harder to maintain voicing during the closure phase of [g] than for [d], and it is harder for [d] than for [b]. Suppose there is a constraint generator that produces constraints on voicing in stops. It could imaginably generate many constraints, e.g. *g, *d, *b, *g/d, *g/b, *d/b, *g/d/b (where *x/y means "Return a violation for any segment that is x or y"). However, if the mechanism referred to articulation in a way that reflected voicing difficulty, the constraints would be winnowed down to *g, *g/d, *g/d/b. Hayes (1995) further observes that the CDL's intrinsic representational restrictions could impose further limits on the possible constraints: *g/d, for example, is not definable in some feature theories, as there is no feature that [g] and [d] share to the exclusion of [b]; with such representational theories, the only constraints generated by the mechanism would be *g and *g/d/b (i.e. *[+voice, −continuant, −nasal]).

To summarize, the majority of work in OT adopts the idea that there are constraint generators. However, there is ongoing disagreement over whether constraint generators can refer to phonology-external factors like ease of articulation and perceptual difficulty. Gordon (2007) provides discussion and references; see also chapter 98: speech perception and phonology.

A related issue is constraint universality. A constraint is "universal" if it exists in every grammar. A constraint can exist in every grammar because it is hard-wired into CON (the set of constraints), or because it is produced by a constraint generator (see above) that produces the same constraints in the same way for every grammar. A "language-specific" constraint is one that exists in only some languages; it must be learned. For specific discussions of constraint universality, see Green (1993), Prince and Smolensky (1993), and McCarthy (2002: §1.2.1, §3.1.5.2).

There is an important nuance to constraint universality/language specificity. It is possible that constraints are not universal, but rather constraint generators are. For example, Align is a constraint generator that exists in every grammar. However, if Align is allowed to take individual morphemes (or morphs) as arguments, it could produce language-specific constraints like Align([um]Af, L; Stem, L), "The affix um occurs stem-initially (it is a prefix)," for Tagalog, and Align([ka]Af, L; Ft′, R), "The affix ka follows (is a suffix to) the head foot," for Ulwa (McCarthy and Prince 1993). So, while Align([um]Af, L; Stem, L) does not exist in every language, the constraint generator that created it does. The point could be extended to other constraint generators, and even to those that refer to phonology-external factors. If a constraint generator refers to an articulatory or acoustic factor that varies among speakers, it could be that the same constraint generator will produce speaker-specific constraints.
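The winnowing idea just described can be sketched as follows; the difficulty scale and the encoding of constraints as banned-segment sets are invented for illustration, so this is a caricature of the reasoning rather than Hayes's (1995) proposal.

    from itertools import combinations

    SCALE = ['g', 'd', 'b']   # descending difficulty of maintaining voicing

    def all_candidate_constraints():
        """Every imaginable *x/y/... constraint over the voiced stops."""
        return [set(c) for n in range(1, 4) for c in combinations(SCALE, n)]

    def phonetically_filtered():
        """Keep a constraint only if it bans a contiguous top portion of the
        difficulty scale: harder segments always come with easier ones."""
        keep = []
        for con in all_candidate_constraints():
            worst = max(SCALE.index(seg) for seg in con)
            if con == set(SCALE[:worst + 1]):
                keep.append(con)
        return keep

    # -> [{'g'}, {'g', 'd'}, {'g', 'd', 'b'}], i.e. *g, *g/d, *g/d/b
    print(phonetically_filtered())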

5 Summary

This chapter has left a vast number of issues about constraints untouched and only barely skimmed over a few others. However, a few points about constraints emerge. A formal theory of constraint form – a "Constraint Definition Language" – provides valuable insight into which constraints can and cannot exist. There is fairly widespread (if tacit) agreement on many aspects of such a CDL, but also many disagreements about both fundamental issues and the details. Constraints are only one part of a complex system that determines phonological winners;
constraints themselves do not determine winners, and no constraint or set of constraints has any predictive power on its own. Only when the entire collection of constraints, GEN, EVAL, and the phonetic module interface are examined together can anything be asserted about the predictive power or restrictive nature of the theory.

ACKNOWLEDGMENT

My thanks to Marc van Oostendorp, Keren Rice, Matt Wolf, and an anonymous reviewer for their extensive comments.

REFERENCES

Alderete, John. 2001. Dominance effects as transderivational anti-faithfulness. Phonology 18. 201–253.
Bach, Emmon. 1968. Two proposals concerning the simplicity metric in phonology. Glossa 2. 128–149.
Baković, Eric. 2007. Local assimilation and constraint interaction. In de Lacy (2007), 335–351.
Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst.
Bermúdez-Otero, Ricardo. Forthcoming. Stratal Optimality Theory. Oxford: Oxford University Press.
Bernhardt, Barbara H. & Joseph P. Stemberger. 1998. Handbook of phonological development from the perspective of constraint-based nonlinear phonology. San Diego: Academic Press.
Blevins, Juliette. 2004. Evolutionary Phonology: The emergence of sound patterns. Cambridge: Cambridge University Press.
Boersma, Paul & Bruce Hayes. 2001. Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry 32. 45–86.
Bye, Patrik & Paul de Lacy. 2000. Edge asymmetries in phonology and morphology. Papers from the Annual Meeting of the North East Linguistic Society 30. 121–135.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Chomsky, Noam & Howard Lasnik. 1977. Filters and control. Linguistic Inquiry 8. 425–504.
Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In John A. Goldsmith (ed.) The handbook of phonological theory, 245–306. Cambridge, MA & Oxford: Blackwell.
de Lacy, Paul. 2002. The formal expression of markedness. Ph.D. dissertation, University of Massachusetts, Amherst.
de Lacy, Paul. 2006. Markedness: Reduction and preservation in phonology. Cambridge: Cambridge University Press.
de Lacy, Paul (ed.) 2007. The Cambridge handbook of phonology. Cambridge: Cambridge University Press.
Eisner, Jason. 1997. What constraints should OT allow? Handout from paper presented at the 71st Annual Meeting of the Linguistic Society of America, Chicago (ROA-204).
Elías-Ulloa, José. 2006. Theoretical aspects of Panoan metrical phonology: Disyllabic footing and contextual syllable weight. Ph.D. dissertation, Rutgers University (ROA-804).
Gordon, Matthew. 2007. Functionalism in phonology. In de Lacy (2007), 61–78.
Gouskova, Maria. 2007. Dep: Beyond epenthesis. Linguistic Inquiry 38. 759–770.
Green, Thomas. 1993. The conspiracy of completeness. Unpublished ms., MIT (ROA-8).
Hall, T. A. 2007. Segmental features. In de Lacy (2007), 311–333.
Hansson, Gunnar Ólafur. 2001. Theoretical and typological issues in consonant harmony. Ph.D. dissertation, University of California, Berkeley.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts, Amherst.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Kager, René, Harry van der Hulst & Wim Zonneveld (eds.) 1999. The prosody–morphology interface. Cambridge: Cambridge University Press.
Keer, Edward. 1999. Geminates, the OCP and the nature of Con. Ph.D. dissertation, Rutgers University.
Kingston, John & Paul de Lacy. 2006. Synchronic explanation. Unpublished ms., University of Massachusetts, Amherst & Rutgers University. Available (July 2010) at www.pauldelacy.net/webpage/publications.html.
Kiparsky, Paul. Forthcoming. Paradigms and opacity. Stanford: CSLI.
Kitto, Catherine & Paul de Lacy. 1999. Correspondence and epenthetic quality. In Catherine Kitto & Carolyn Smallwood (eds.) Proceedings of AFLA VI: The Sixth Meeting of the Austronesian Formal Linguistics Association, 181–200. Toronto: Department of Linguistics, University of Toronto.
Lombardi, Linda. 2001. Why Place and Voice are different: Constraint-specific alternations in Optimality Theory. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 13–45. Cambridge: Cambridge University Press.
Łubowicz, Anna. 2005. Locality of conjunction. Proceedings of the West Coast Conference on Formal Linguistics 24. 254–262.
McCarthy, John J. 1999. Sympathy and phonological opacity. Phonology 16. 331–399.
McCarthy, John J. 2000. The prosody of phase in Rotuman. Natural Language and Linguistic Theory 18. 147–197.
McCarthy, John J. 2002. A thematic guide to Optimality Theory. Cambridge: Cambridge University Press.
McCarthy, John J. 2003. OT constraints are categorical. Phonology 20. 75–138.
McCarthy, John J. 2005. Optimal paradigms. In Laura J. Downing, T. A. Hall & Renate Raffelsiefen (eds.) Paradigms in phonological theory, 170–210. Oxford: Oxford University Press.
McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox.
McCarthy, John J. & Alan Prince. 1986. Prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Brandeis University.
McCarthy, John J. & Alan Prince. 1993. Generalized alignment. Yearbook of Morphology 1993. 79–153.
McCarthy, John J. & Alan Prince. 1999. Faithfulness and identity in prosodic morphology. In Kager et al. (1999), 218–309.
McCarthy, John J. & Matthew Wolf. 2005. Less than zero: Correspondence and the null output. Unpublished ms., University of Massachusetts, Amherst (ROA-722).
Miner, Kenneth L. 1992. Winnebago accent: The rest of the data. Indiana University Linguistics Club Twenty-Fifth Anniversary Volume, 28–53. Bloomington: Indiana University Linguistics Club.
Myers, Scott. 1997. OCP effects in Optimality Theory. Natural Language and Linguistic Theory 15. 847–892.
Pater, Joe. 1999. Austronesian nasal substitution and other NC̥ effects. In Kager et al. (1999), 310–343.
Potts, Christopher & Geoffrey K. Pullum. 2002. Model theory and the content of OT constraints. Phonology 19. 361–393.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Rose, Sharon & Rachel Walker. 2004. A typology of consonant agreement as correspondence. Language 80. 475–531.
Samek-Lodovici, Vieri & Alan Prince. 1999. Optima. Unpublished ms., University College London & Rutgers University (ROA-363).
Schein, Barry & Donca Steriade. 1986. On geminates. Linguistic Inquiry 17. 691–744.
Struijke, Caro. 2000. Existential faithfulness: A study of reduplicative TETU, feature movement and dissimilation. Revision of Ph.D. dissertation, University of Maryland, College Park.
Suzuki, Keiichiro. 1998. A typological investigation of dissimilation. Ph.D. dissertation, University of Arizona.
Urbanczyk, Suzanne. 2007. Reduplication. In de Lacy (2007), 473–493.
Wilson, Colin. 2001. Consonant cluster neutralisation and targeted constraints. Phonology 18. 147–197.
Wolf, Matthew. 2007a. What constraint connectives should be permitted in OT? University of Massachusetts Occasional Papers in Linguistics 36. 151–179.
Wolf, Matthew. 2007b. For an autosegmental theory of mutation. In Leah Bateman, Michael O'Keefe, Ehren Reilly & Adam Werle (eds.) Papers in Optimality Theory III, 315–404. Amherst: GLSA.
Yip, Moira. 2007. Tone. In de Lacy (2007), 195–227.

64 Compensatory Lengthening

Randall Gess

1 Types of compensatory lengthening

Compensatory lengthening is the lengthening of one segment, referred to here as the "target," in compensation for the loss or reduction of another, referred to here as the "trigger" (see also chapter 37: geminates and chapter 20: the representation of vowel length). The segments are usually in close proximity to one another – either adjacent or in adjacent syllables. Theoretically, a consonant can be lengthened in compensation for the loss or reduction of another consonant or vowel, and a vowel can be lengthened in compensation for the loss or reduction of another vowel or consonant. In fact, an argument can be made that all types of compensatory lengthening exist, although as Table 64.1 indicates, some types are far more common than others. There is also a problem in the classification of some types as compensatory lengthening proper rather than as instantiations of other processes, such as total assimilation (2A, B) or rhythmic lengthening (1D).

In Table 64.1, Row 1 lists cases in which the target for lengthening is a vowel, and Row 2 lists cases in which the target for lengthening is a consonant. Column A lists cases in which the trigger for lengthening (the reduced or deleted segment) is a reduced consonant following the target, Column B those in which the trigger is a reduced consonant preceding the target, Column C those in which the trigger is a reduced vowel following the target, and Column D those in which the trigger is a reduced vowel preceding the target.

Table 64.1 Types of compensatory lengthening

                    Trigger
                    A / __ C    B / C __    C / __ (X)V    D / V(X) __
    Target
    1: V            numerous    limited     numerous       limited
    2: C            limited     numerous    isolated       limited
Examples representative of each cell in Table 64.1 are provided in the following brief sections.

1.1 Type 1A (Target V; Trigger / __ C)

(1) Old French (Gess 1998, 1999)

    a. blasmer   [blazmer] > [blaːmer]   'to blame'
    b. angle     [ãnglə]   > [ãːglə]     'angel'
    c. large     [larʤə]   > [laːʤə]     'wide'

The type of compensatory lengthening in (1) is common. Kavitskaya (2002: App. 1) lists 58 languages in which it is manifested. Other cases not mentioned by Kavitskaya can be found in Gordon (1999) and Beltzung (2008). More exotic types of compensatory lengthening triggered by a following consonant are ones in which the trigger is an intervocalic consonant, or in which the target and trigger are separated by an intervening consonant, i.e. in which the triggering consonant is the second of a sequence of two intervocalic consonants (e.g. Ancient Greek *odwos > East Ionic /oːdos/; Wetzels 1986: 310). Hayes (1989) refers to this type of compensatory lengthening as a "double flop," a term which has gained currency in the literature (see Beltzung 2008 for an extensive discussion of "exotic" types of compensatory vowel lengthening triggered by consonant loss).

1.2 Type 1B (Target V; Trigger / C __)

(2) Samothraki Greek (Katsanis 1996, as reported in Topintzi 2006; see also chapter 55: onsets)

    a. ˈrafts    > ˈaːfts    'tailor (masc)'
    b. ˈruxa     > ˈuːxa     'clothes'
    c. ˈrema     > ˈeːma     'stream'
    d. ˈprotos   > ˈpoːtus   'first'
    e. ˈvrisi    > ˈviːsʲ    'tap'
    f. meˈtrun   > miˈtuːn   'they count'
    g. ˈextra    > ˈextaː    'hostility'

The type of lengthening shown in (2) is rare and somewhat controversial (see Beltzung 2008 for an overview), as it is predicted not to occur by the framework of Moraic Phonology developed in Hayes (1989). In Samothraki Greek this process is limited to rhotics in word-initial or post-consonantal position – it does not occur when the segment is in intervocalic position or in a coda position (there is no deletion of syllable-final /r/ in Samothraki Greek). According to Beltzung (2008), the segments implicated in this type of compensatory lengthening are rhotics, pharyngeals, and laryngeals.


1.3 Type 1C (Target V; Trigger / __ CV)

(3) Hungarian (Kálmán 1972, as reported in Kavitskaya 2002)

    a. *wizi     > viːz     'water'
    b. *tyzy     > tyːz     'fire'
    c. *utu      > uːt      'road'
    d. *ludu     > luːd     'goose'
    e. *neɟi     > neːɟ     'four'
    f. *modoru   > modoːr   'bird'
    g. *teheni   > teheːn   'cow'

Cases like the one illustrated here are relatively common, but appear to be more phonologically restricted and less widespread than the type shown in §1.1. Kavitskaya (2002: App. 2) lists 21 languages in which this process occurs or has occurred (neglecting to include Yapese (Jensen 1977), where it appears to be a synchronic process), whereas she lists 58 languages (App. 1) in which the CVC type of compensatory lengthening shown in §1.1 is manifested. As indicated in Table 64.1, this type of compensatory lengthening does not always involve an intervening consonant (e.g. Old French [fiː] < [fiə]; Pope 1952: 205).

1.4 Type 1D (Target V; Trigger / VC __)

According to Hayes (1989: 284), this type of compensatory lengthening "appears not to exist." However, a process in Macuxi may prove problematic for this claim. In a section entitled "Compensatory length," Carson (1981: 50) states that "[w]hen a short vowel is suppressed, the vowel that immediately precedes a stop consonant in its vicinity is lengthened." Representative data are shown in (4).

(4) Macuxi (Carson 1981)

    kasaˈpan     → ksaaˈpan     'sand'
    kusupaˈra    → ˈksuuˈpra    'machete'
    ˈwakɨrɨˈpe   → ˈwakrɨɨˈpe   'agreeable'
    ˈmɨkɨ-ˈrɨ    → ˈmɨɨˈkrɨ     'he'

It appears in all the examples but the last one that the lengthened vowel is the one that follows the deleted one. However, the absence of otherwise expected *[ˈmɨˈkrɨɨ] in the last example may be attributed to what Pessoa (2006: 78, 2009: 117), citing Abbott (1991), refers to as the absence of a "sílaba alongada fonológica na última posição" ('phonologically lengthened syllable in final position'), since final syllables are already rhythmically lengthened.

According to Kavitskaya (2002: 149), lengthening in Macuxi (as described in Kager 1997) is not a case of compensatory lengthening, and "should be attributed to the properties of iambic systems" since it "happens regardless of syncope" (see also chapter 44: the iambic–trochaic law). She cites the following examples from Kager (1997: 466–467).

(5) More examples from Macuxi

    /piripi/      (ˈpriː).(ˈpiː)          'spindle'
    /waimujamɨ/   (ˈwai).(mjaː).(ˈmɨː)    'rats'

The first example is taken by Kager (1997) from Hawkins (1950: 89), and the second from Abbott (1991: 147). In both examples, the final syllable of the form in question is apparently lengthened, and there is no vowel syncope to point to as a trigger. However, with respect to the first form, no lengthening in any position is indicated by Hawkins (1950), who says nothing more about final vowels than that "[t]he last vowel in each stress contour in the basic form of any utterance is always retained," and he transcribes the form as 'pripí', with the acute accent marking "the end point of contours" (Hawkins 1950: 89). It may be that length is present and not noted by Hawkins, but if so, it is just as easily described as phrase-final lengthening that is independent of iambic (foot-level) lengthening.

According to Abbott (1991: 145), "[t]he final CV in a phonological phrase (i.e. a phrase bounded by a pause) is always long and stressed." Abbott describes rhythmically derived length on even-numbered V or CV syllables, counting from the left, as well as on final syllables. Again, though, Hawkins (1950) says nothing about lengthening in any context, and does not indicate it in any transcriptions. Nevertheless, Kager (1997) introduces foot-level lengthening systematically in forms taken from both authors, as well as syncope in forms where Abbott indicates none.

Hawkins (1950) does discuss "stress contours" in which the contour consists of "a stretch of speech marked by loud stress on the last vowel," and he notes that "[w]hen more than one word occurs in a stress contour, the last vowel in each non-final word in the contour is retained." However, retention of a vowel is a far cry from lengthening. Carson (1981) does not indicate final lengthening either.

Kager (1997) chooses to ignore the data from Carson (1981), cited above, noting that she posits "lexical tone rather than stress," and that her data must be "based on a different dialect than those studied by Hawkins and Abbott" (1997: 466). In fact, Carson describes lexical pitch accent (1981: 42–45), which may be "disturbed" at the phrasal level (1981: 46). If Carson is correct that the variety she documented manifests pitch accent, then the lengthening she describes cannot appropriately be attributed to iambic lengthening.

Two points arise from the preceding discussion. First, it is not clear that rhythmic lengthening, independent of vowel reduction or deletion, does occur in the Macuxi variety described by Abbott (1991) (or Hawkins 1950, if lengthening other than in final syllables even occurs in the variety he describes). Second, the lengthening described by Carson (1981) cannot be dismissed on the grounds suggested by Kavitskaya (2002).

An important question arises with respect to the first point in the preceding paragraph: if rhythmic lengthening is always tied to vowel reduction or deletion, can it properly be considered compensatory lengthening? Kavitskaya's point is that it is not – that it is better in this case to consider it a property of iambic systems, together with rhythmic vowel deletion. It is unclear, however, why a foot-based process that can be described as CVCV > CvCVː (where "v" represents a reduced vowel) should be treated any differently from one that can be described as CVCV > CVːCv, i.e. the fairly common type of compensatory lengthening described in (3) (1C in Table 64.1), which is uncontroversially labeled as such.
Two other types of compensatory lengthening can result from a triggering vowel preceding the target vowel, in these cases with no intervening consonant. The first of these is compensatory lengthening through glide formation (typically from high vowels), a relatively common synchronic process in Bantu languages (e.g. Ganda
/li+ato/ ‘boat’ → [ljaato]; Clements 1986: 47). The other type of process is also attested in Ganda, involving the deletion of non-high vowels in prevocalic position (e.g. /ka+oto/ ‘fireplace (dim)’ → [kooto]; Clements 1986: 49).

1.5 Type 2A (Target C; Trigger / __ C)

(6) Semitic (Lipiński 2001: 195)

    *uštabbit      > uššabbit      'he imprisoned'          (Assyro-Babylonian)
    *aṭtarad       > aṭṭarad       'I sent'                 (Assyro-Babylonian)
    *iṭtalaba      > iṭṭalaba      'he sought'              (Arabic)
    *ʔətzəkkar     > ʔəzzəkkar     'I remember'             (Ge'ez)
    *jilkədenhuː   > jilkədennuː   'he shall capture him'   (Hebrew)
    *gəmaːlathuː   > gəmaːlattuː   'she weaned him'         (Hebrew)
    *wesfi         > wessi         'awl'                    (Gurage)
    *niṣf          > nəṣṣ          'half'                   (Colloquial Arabic)

This type of compensatory lengthening appears to be relatively uncommon, and like the type described in the following section, it is not formally distinguishable from total assimilation.

1.6 Type 2B (Target C; Trigger / C __)

(7) Bengali (Hayes and Lahiri 1991: 81)
    bɔrʃa   ~ bɔʃʃa    ‘rainy season’
    bɔrdi   ~ bɔddi    ‘elder sister’
    bhorti  ~ bhotti   ‘full’
    kortʃʰe ~ kottʃʰe  ‘do-3pres’
    kor-lo  ~ kol-lo   ‘do-3fut’

This type of compensatory lengthening, unlike the type illustrated in (6), is quite common; it nevertheless shares with that process the lack of any formal distinctiveness from total assimilation.

1.7 Type 2C (Target C; Trigger / __ V)

(8) Bulgarian (Shishkov 2002)
    ˈbalite       > ˈbalʲːte       ‘the bales’
    erˈgenite     > erˈgenʲːte     ‘the bachelors’
    kuˈʃarite     > kuˈʃarʲːte     ‘the (sheep) pens’
    ˈbelezite     > ˈbelesːte      ‘the scars’
    ˈbabinata     > ˈbabinːta      ‘the grandmother’s (things)’
    venˈtʃiloto   > venˈtʃilːtu    ‘the wedding’
    ameriˈkancite > amerːˈkancite  ‘the Americans’
    doneˈsa       > donːˈse        ‘bring (3sg)’

Cases such as the one illustrated in (8), in which a consonant is lengthened before a following reduced vowel, appear to be isolated. It is noteworthy that the consonants involved in the Bulgarian process are of relatively high sonority – only sonorants and /z/ – although there is at least one attested case of synchronic compensatory lengthening in this category in which sonority does not appear to be relevant (compensatory lengthening resulting from glide formation, as in Ilokano /ˈluto+en/ → [lutˈtwen] ‘cook-goal focus’; Hayes 1989: 269).

1.8 Type 2D (Target C; Trigger / V __)

(9) Ganda (adapted from Clements 1986: 62–63)
    /li + kubo/  > [kkubo]
    /li + tabi/  > [ttabi]
    /li + daala/ > [ddaala]

According to Clements (1986: 6), the synchronic rule deriving geminates from a CV prefix is “a restructuring of the historical situation, in which a phonetically motivated rule is replaced by a morphologically conditioned one.” Clements assumes the geminates to have arisen historically from earlier *Vɪ sequences (where ɪ represents an upper high front vowel), “with the process giving rise to consonant gemination [being] one in which the articulation of a consonant is anticipated on a preceding postvocalic *ɪ” (1986: 65). They are now associated with a certain class of nominal prefixes.

2 Approaches to compensatory lengthening

Most documented cases of compensatory lengthening, at least those formally distinguishable from total assimilation of adjacent consonants, involve the compensatory lengthening of vowels. Furthermore, the most common types of compensatory lengthening of vowels involve those in which the trigger follows, rather than precedes, the target – i.e. the types of cases illustrated in §1.1 and §1.3. These two types of compensatory lengthening are commonly referred to as CVC and CVCV compensatory lengthening, respectively. In this section I focus on these most common types and, since synchronic cases of compensatory lengthening are derived from historical ones, on the diachronic instantiation of the processes. I first summarize three general approaches to compensatory lengthening, all of which have in common an implicit assumption that the phenomenon is speaker-controlled. A fourth subsection outlines an alternative approach put forth in Kavitskaya’s (2002) quite comprehensive treatment of compensatory lengthening, which may be considered somewhat radical in proposing a strictly listener-oriented account of the process. The relevance of the various approaches to synchronic cases of compensatory lengthening, as well as to the other types illustrated in §1, is discussed in §4.

The first three approaches to be examined in this section fall into two categories, as described by Kavitskaya (2002): one that treats compensatory lengthening as a type of conservation and one that does not. The first category is the most common, and assumes that compensatory lengthening is fundamentally teleological in that its goal is to preserve length present in the input in the output string. Being the most common category, it comprises two of the three approaches: a phonetic conservation approach and a phonological conservation approach. The third approach, in a category of its own, is the non-conservation approach, which denies the existence of any intrinsic connection between the loss of a segment and the lengthening of another.

2.1 Phonetic conservation approach

In a phonetic conservation approach, compensatory lengthening is viewed as a goal-oriented process functioning to preserve some or all of the physical duration of lost segmental material. Timberlake (1983) discusses a case in Slavic in which a number of modern dialects have long or tense vowels in syllables that preceded a weak jer in Late Common Slavic (see also chapter 122: slavic yers). This is a case of CVCV compensatory lengthening (Timberlake does not discuss CVC cases). According to Timberlake, a long reflex of a vowel in a syllable before a Late Common Slavic weak jer “is in some way a result of the phonetic weakening and the eventual phonemic loss of the following jer vowel” (1983: 293). He suggests that:

    Late Common Slavic was subject to a constraint on the preservation of word timing, such that phonetic reduction in one syllable (containing the “weak” jer) was compensated for by increased phonetic duration in the preceding, “strong” syllable.

In Timberlake’s model, compensatory lengthening takes place phonologically through re-analysis. Re-analysis depends upon both the phonemic elimination of jers and the surpassing of a “critical duration” on the part of the phonetically lengthened preceding vowel. If, when jers are “eliminated phonemically, either by identification with another vowel or by identification with null” (1983: 299), phonetically lengthened vowels are sufficiently lengthened, the latter are re-analyzed as phonemically long (or tense). Timberlake sets the critical duration for re-analysis arbitrarily at anything beyond 1.5 times “full duration (nearly or exactly 1.0 morae. [. . .] numerical values for duration [. . .] are intended to be highly approximate)” (1983: 298).

Timberlake’s model is an additive one, “in which the duration of vowels is adjusted by adding or subtracting increments of duration depending on various factors.” The various factors at play in Late Common Slavic were the consonant intervening between the jer and the lengthened preceding vowel; the position of the CVCь sequence in the word (final or internal); and the accent of the lengthened vowel. The phonetic process of compensatory lengthening is described by Timberlake (1983: 298) as in (10).

(10) Compensatory lengthening as a phonetic process
     /CVCь/ > [CV^(1.0+a) Cь^(−a)]

The formula in (10) states that for any reduction of value a in the phonetic length of a jer (written here as ь), a preceding vowel is realized at full duration (1.0) plus a.


In order to model the gradual nature of phonetic lengthening, Timberlake breaks the process down into discrete stages, arbitrarily shown in 0.2 increments, as illustrated in (11), from Timberlake (1983: 298).

(11) Jer reduction and compensatory lengthening
     a. /CVCь/ > [CV^(1.2) Cь^(−0.2)]  {a = 0.2}
     b. /CVCь/ > [CV^(1.4) Cь^(−0.4)]  {a = 0.4}
     c. /CVCь/ > [CV^(1.6) Cь^(−0.6)]  {a = 0.6}
     d. /CVCь/ > [CV^(1.8) Cь^(−0.8)]  {a = 0.8}

Finally, as indicated above, Timberlake assumes re-analysis at anything beyond 1.5 times full duration. Regarding the cut-off, Timberlake (1983: 299) explains:

    When reduced jers were eliminated phonemically, the phonetic phase of CL was necessarily interrupted, and the lengthened variant of a vowel in strong position had to be identified as phonemically long (tense) or short (lax).

This view of re-analysis is illustrated schematically in (12).

(12) Phonemic analysis
     a. [CV^(1.2) Cь^(−0.2)] ⇒ /CVC/   {a = 0.2}
     b. [CV^(1.4) Cь^(−0.4)] ⇒ /CVC/   {a = 0.4}
     c. [CV^(1.6) Cь^(−0.6)] ⇒ /CVːC/  {a = 0.6}
     d. [CV^(1.8) Cь^(−0.8)] ⇒ /CVːC/  {a = 0.8}
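Stated compactly, (10)–(12) amount to a single additive mapping plus a threshold condition on re-analysis. The following restatement is our own sketch in LaTeX notation (the variable a and the “jer” label are ours, not Timberlake’s):

    \[
      /CVC_{\text{jer}}/ \;\rightarrow\; [\,CV^{1.0+a}\,C_{\text{jer}}^{-a}\,], \qquad 0 < a < 1
    \]
    \[
      V \;\Rightarrow\;
      \begin{cases}
        /V\text{ː}/ & \text{if } 1.0 + a > 1.5\\
        /V/ & \text{otherwise}
      \end{cases}
    \]

On this formulation, the stages in (11) are simply sample values of a, and (12) applies the 1.5 threshold to each of them.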

Timberlake’s account of CVCV compensatory lengthening can be straightforwardly extended to CVC compensatory lengthening. A demonstration of this is provided in Gess (forthcoming).

Explicit criticisms of phonetic conservation approaches to compensatory lengthening are minimal. The most obvious thing to point out here is the limited relevance of this approach to synchronic cases of compensatory lengthening. That is, while the approach may be useful in showing how compensatory lengthening may arise historically, it is not well suited for modeling the processes in synchronic grammars unless the process at hand is, or at least could be, a change in progress (i.e. it is a gradient, post-lexical process).

Another problem with the phonetic conservation approach, pointed out by Gess (forthcoming), is that its extension to CVC cases of compensatory lengthening, while straightforward in a mechanical sense, seems to entail at least implicitly the problematic assumption that moras associated with consonants are equivalent in duration to those associated with vowels. This problem could be overcome by making assumptions regarding the formalisms used more explicit.

2.2 Phonological conservation approach

In a phonological conservation approach, compensatory lengthening is viewed as a goal-oriented process functioning to preserve some aspect of the phonological representation (a suprasegmental unit) associated with the loss of segmental material. In fact, compensatory lengthening phenomena crucially informed the debate as to the best way to characterize the prosodic tier (or timing tier) assumed in autosegmental phonology, i.e. in terms of C- and V-slots, X-slots, or moras (McCarthy 1979; Clements and Keyser 1983; Hyman 1984, 1985; Levin 1985; Lowenstamm and Kaye 1986; McCarthy and Prince 1986; Hayes 1989; Beltzung 2008, especially ch. 3; see also chapter 54: the skeleton).

In order to explain constraints on compensatory lengthening (e.g. that triggers in CVC cases are only coda consonants and not onset consonants [an assumption later proven problematic; Topintzi 2006; Beltzung 2008], and even more specifically that coda consonants are triggers only when they contribute to syllable weight in the language in question), Hayes (1989), in probably the most influential single article on compensatory lengthening, suggests that lengthening only occurs when deletion results in an empty prosodic position and that only a prosodic frame defined in terms of moras yields the correct typological results. (Hayes 1989: 260–261 also provides a simple and straightforward demonstration of the inability of a linear approach to account for compensatory lengthening.) Hayes (1989) accounts for CVC compensatory lengthening as illustrated in (13), with Latin [kasnus] > [kaːnus] ‘dog’.

(13) Compensatory lengthening in CVC sequences (Hayes 1989: 262)
     a. /s/-deletion (segmental tier only):
        s → Ø / __ [+son, +ant]
     b. Compensatory lengthening:
        [schema: a vowel spreads to an adjacent µ′, where µ′ is a segmentally unaffiliated mora]
     c. [diagram: /kasnus/, with syllables (σ) dominating moras (µ); /s/-deletion removes coda /s/ from the segmental tier, stranding its mora, which is then re-linked to the preceding /a/, = [kaːnus]]

A theory assuming a prosodic frame defined in terms of X-slots can account for the example above, but not for the fact that in the same language (Latin), /s/-deletion does not trigger lengthening when it is word-initial, as in snurus > nurus (the same problem holds for the type of compensatory lengthening illustrated in §1.2). In a segmental theory based on X-slots, any deleted segment should trigger lengthening, whereas in a moraic theory only those segments that are mora-bearing will do so. The moraic theory of compensatory lengthening accounts for CVCV cases as illustrated in (14) with Middle English [talə] > [taːl] ‘tale’ (see Minkova 1982 for an in-depth discussion of this case).

(14) Compensatory lengthening in CVCV sequences (Hayes 1989: 268–269)
     a. Input: /talə/, two syllables (σ), with each vowel linked to a mora (µ)
     b. Schwa drop: /ə/ is deleted on the segmental tier
     c. Parasitic delinking: the ill-formed second syllable and its structure are removed, stranding the mora
     d. Compensatory lengthening: /a/ spreads to the stranded mora
     e. Resyllabification: /l/ is resyllabified into the first syllable, = [taːl]

Parasitic delinking, illustrated in (14c), is a principle that eliminates ill-formed syllable structure, caused in this case by the loss of the vowel segment via schwa drop.

A very positive aspect of Hayes (1989) is that a wide range of cases of compensatory lengthening are discussed (although a potentially problematic empirical gap is discussed below). Besides the classic CVC and CVCV cases, Hayes (1989) treats the so-called “double flop” cases, in which the deletion of a glide triggers compensatory lengthening of a vowel in a preceding syllable, as in Ancient Greek *odwos > /oːdos/ (1989: 265–266), and compensatory lengthening from glide formation, as in the Ilokano case mentioned earlier, /ˈluto+en/ → [lutˈtwen]. He also mentions (without providing a formal treatment) “straightforward” cases like compensatory lengthening through progressive and regressive total assimilation of consonants, compensatory lengthening through prenasalization, and so-called “inverse compensatory lengthening,” which involves the lengthening of a consonant triggered by the shortening or loss of a vowel (1989: 279–281).
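The weight-sensitivity that distinguishes the moraic account from an X-slot account can be made concrete with a toy sketch. The code below is our own illustration, not Hayes’s formalism: the segment inventory and the simplified weight-by-position rule are assumptions made for expository purposes only.

    # Toy illustration of mora conservation (not Hayes's formalism).
    # Vowels bear one mora; a consonant after a vowel and before another
    # consonant (a coda) bears one mora ("weight by position"); onsets none.
    VOWELS = set("aeiou")

    def assign_moras(word):
        segs = []
        for i, s in enumerate(word):
            if s in VOWELS:
                segs.append([s, 1])
            else:
                coda = (i > 0 and word[i - 1] in VOWELS
                        and i + 1 < len(word) and word[i + 1] not in VOWELS)
                segs.append([s, 1 if coda else 0])
        return segs

    def delete_with_cl(word, pos):
        """Delete word[pos]; any stranded mora re-links to the nearest
        preceding vowel, which then surfaces as long (marked with 'ː')."""
        segs = assign_moras(word)
        _, moras = segs.pop(pos)
        if moras:  # only weight-bearing segments leave a mora to conserve
            for j in range(pos - 1, -1, -1):
                if segs[j][0] in VOWELS:
                    segs[j][1] += moras
                    break
        return "".join(s + "ː" * (m - 1) for s, m in segs)

    print(delete_with_cl("kasnus", 2))  # -> kaːnus (coda /s/ was moraic)
    print(delete_with_cl("snurus", 0))  # -> nurus  (onset /s/ was weightless)

An X-slot analogue of this procedure would leave a stranded slot in both cases and thus wrongly predict lengthening in snurus > nurus as well.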


Fox (2000) points out a number of what he sees as problems for Hayes’s (1989) approach. The first is that the principle of parasitic delinking is “a radical measure which is not required in most other processes of Compensatory Lengthening.” The second relates to the required linking of the vowel of the first syllable to the mora stranded by deletion of the second syllable. According to Fox (2000: 100–102), this is:

    unmotivated by the normal principles of the model, since, according to one view at least, the syllable would be perfectly well-formed without this linking; the final mora would be linked to the final consonant and is thus not left stranded.

Finally, Fox (2000) suggests that Hayes’s (1989) principle of mora conservation is inappropriate as a motivation for CVCV compensatory lengthening. This is because Hayes defines the mora as “the basic unit for syllable weight” (1989: 285), and syllable weight is not maintained in these cases. Rather, what is maintained is the length of the foot (2000: 101).

One might also argue that a problem for phonological conservation approaches generally is that they are ill equipped to deal with the gradual nature of diachronic compensatory lengthening. For example, the Middle English case discussed above did not take place as a discrete change in one fell swoop. According to Minkova (1982: 50):

    Before becoming identified with existing long vowels or developing into new ones, i.e. prior to the establishment of a phonological length contrast, the short vowels in the environment / __ C₁¹e# undergo phonetic lengthening. [. . .] In a situation where forms with and without the second syllabic element, the -e, are both available to the speaker, there will be a negative correlation between it and the first syllabic element. Phonetically, “the word as a whole has a certain duration that tends to remain relatively constant.” (Lehiste 1970: 40)

One way to show intermediate length in moraic phonology is to have segments share a mora. In this case, a standard interpretation of the formalism prevents this, because it would involve the crossing of association lines, as illustrated in (15).

(15) Potential inadequacy of Hayes’s mora conservation approach with respect to modeling gradual change: CVCV cases
     *[diagram: /talə/, with /a/ additionally linked to the second mora; the added association line crosses existing structure, an ill-formed configuration]

If one adopts a strict and unnuanced view of the ban on crossed association lines, then, since parasitic delinking (illustrated in (14c)) is triggered only when ill-formed syllable structure is present, the moraic account of compensatory lengthening can only succeed if the final vowel is entirely deleted and parasitic delinking applies. That is, the account cannot capture the allophonic lengthening that must be assumed to precede phonemic lengthening. However, a more relaxed view might interpret the ban on crossed association lines as applying separately to distinct C and V tiers.

Gradual change might also be seen as a problem for the mora conservation approach in diachronic cases involving CVC compensatory lengthening. Hock (1986) mentions a case reported in Brockelmann (1908) from Tunisian Arabic (with similar instances in Ge’ez and Tigrinya), in which a preconsonantal glottal stop is reduced (not deleted), with compensatory lengthening of the preceding vowel. The examples provided are shown in (16).

(16) Compensatory lengthening triggered by segmental reduction in Tunisian Arabic (Hock 1986: 444)
     ʃeffaʔni > ʃeffaːʔni
     smaʔtkum > smaːʔtkum

In autosegmental representation, the output of this process would be as illustrated in (17).

(17) Potential inadequacy of Hayes’s mora conservation approach with respect to modeling gradual change: CVC cases
     [diagram: [. . . faʔni], with the lengthened /a/ and the reduced /ʔ/ sharing the second mora of the first syllable]

While the shared mora representation in (17) is adequate for representing an intermediate stage between a fully moraic glottal segment following the short vowel and a long vowel with no following glottal, one might argue that it cannot represent any more than a single such stage, whereas more stages might well be warranted. However, this potential criticism disregards the possibility of a single phonemic representation having different phonetic interpretations at different periods (or indeed across speakers at a single period).

We must also note (as have others) one apparent empirical weakness of the phonological conservation approach as put forth in Hayes (1989). This involves cases in which compensatory lengthening is triggered by a prevocalic consonant that in normal circumstances would not be associated with a mora. Some such cases are discussed in Hock (1986), a paper cited by Hayes (1989), but without mention of these specific examples. Strangely, the cases are also problematic for Hock, although he does not treat them as such. Hock’s interest in the cases is that they involve compensatory lengthening triggered not by deletion of a segment, but by its weakening only (as in the case illustrated in (17), but in intervocalic position). The first case is from Tyrone Irish (as discussed in Stockman and Wagner 1965), where:

    vowels are dialectally distinctively lengthened before the highly reduced glottal fricative outcome of earlier voiceless fricatives (as well as before sonorant + consonant etc. and in “ordinary” CL environments, but not in open syllables). (Hock 1986: 443–444, emphasis added – RG)


The fact that Hock takes care to note that compensatory lengthening does not take place in open syllables suggests that the reduced segments in question might be ambisyllabic, but this is not made explicit anywhere, including in the original source. If it is the case that the segments in question are ambisyllabic, then an argument could be made that they are doubly linked – to a mora in the first syllable and to the onset of the second. The relevant data are shown in (18). (18)

(18) Compensatory lengthening triggered by segmental reduction in Tyrone Irish (Hock 1986: 444)
     srathar [straːhər]
     tachas  [tɔːhəs]

The second problematic case mentioned by Hock (1986) is from the Westphalian dialect of Soest, as reported in Holthausen (1886: 28–29). In this dialect, as illustrated in (19), compensatory lengthening occurs before highly reduced, voiced labial and velar fricatives, and before a deleted “secondary” (analogically reinserted) voiced alveolar stop.

(19) Compensatory lengthening triggered by segmental reduction in Westphalian (Hock 1986: 444)
     *hege  ‘hedge’ > (*)hiəgə    > hiːəɣə
     *seven ‘seven’ > (*)siəv(e)n > siːəvn
     *snede ‘slice’ > (*)sniəde   > sniːə  (with əə > ə)

Again, there is no suggestion that the consonants triggering compensatory lengthening in these cases were ambisyllabic, nor is there any reason to believe that they were. This case therefore represents an apparent problem for Hayes’s (1989) mora conservation approach, aside from the issue of the gradient nature of the triggering segmental reduction. (The case also poses a problem for Hock 1986, which represents a mora conservation approach as well, although one not couched in an autosegmental framework.) As mentioned earlier, other cases of compensatory lengthening triggered by the loss of intervocalic consonants are discussed by Beltzung (2008: especially ch. 2).

Finally, the phonological conservation approach à la Hayes (1989), particularly in the case of CVC compensatory lengthening, has proven difficult to model in Optimality Theory (OT). The basic problem is that in order for lengthening to occur, consonants must be assigned weight before deletion happens, thus suggesting a serial analysis. Getting around this problem has necessitated the abandonment of some of the basic tenets of OT. For example, one could simply assume that consonants are moraic in the input (Sprouse 1997), but this requires a sidestepping of OT’s principle of Richness of the Base, whereby output well-formedness is determined solely by constraints and their ranking, and not by restrictions on input. Other ways of handling the problem involve treatments designed to handle opacity more generally, such as stratal OT (Kiparsky 2000), which rejects strict parallelism, Turbidity Theory (Goldrick 2001), Sympathy Theory (McCarthy 2003), or OT with candidate chains (McCarthy 2005; Shaw 2009), which require reference to what amounts to one or more intermediate representations.
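The ordering problem can be shown with the Latin form from (13); the staging below is our own schematic sketch of the reasoning, not a worked analysis from any of the works cited:

    \[
      /kasnus/ \;\xrightarrow{\text{weight by position}}\; kas_{\mu}nus
      \;\xrightarrow{\text{/s/-deletion}}\; ka\,\mu'\,nus
      \;\xrightarrow{\text{re-linking}}\; [ka\text{ː}nus]
    \]

A strictly parallel input–output mapping has no access to the intermediate stage at which /s/ bears a mora, which is why the serial or opacity-handling devices just mentioned are needed.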


2.3 Non-conservation approach

In an influential article, de Chene and Anderson (1979) take a novel approach to compensatory lengthening by rejecting the notion that such a process exists as “an independent mechanism of phonetic change” (1979: 505) (they discuss only cases of CVC compensatory lengthening). For them, putative cases of compensatory lengthening can be decomposed into two independent processes: weakening of the consonant in question to a glide, and subsequent monophthongization of the resulting vowel + glide sequence. De Chene and Anderson further contend that monophthongization will result in a long vowel only if the language in question has a pre-existing vowel length distinction. The latter claim concerns a structure-preserving condition on compensatory lengthening and not the process itself, and is explored further in §3. For now, let us look in more detail at the first claim.

The proposal that cases labeled compensatory lengthening are in fact the result of two unrelated processes has generated much discussion. Gess (1998) points out that it has been challenged by numerous scholars, including Hock (1986), Poser (1986), Sezer (1986), and Gildea (1995). According to Gess (1998: 353), “[E]ach of these scholars provides a strong case against the view that compensatory lengthening is always decomposable into two distinct stages. The ensemble of their arguments renders this claim simply untenable.” Without laboring the point, then, we simply illustrate de Chene and Anderson’s hypothesis with one straightforward example. According to de Chene and Anderson (1979: 512):

    In Latin, compensatory lengthening involving loss of a dental spirant is limited in source to *Vz[C, +dent] sequences, where *z is the reconstructed allophone of *s before a voiced segment. Thus we have *ni-sd-o > nīdus ‘nest’ and *si-sd-ō > sīdō ‘I sit down’, both involving the zero grade of *sed ‘to sit’ (cf. sedeō ‘I sit’).

De Chene and Anderson continue: “Our posited intermediate development involves the loss of occlusion in (preconsonantal) *[z], leading to the voiced glottal spirant [ɦ]” (1979: 512). In this type of analysis, de Chene and Anderson were not in fact alone. Jeffers and Lehiste (1979) propose the analysis in (20) for the remarkably similar change from Proto-Indo-European (PIE) *nisdo to Sanskrit /niːḍa-/.

(20) Jeffers and Lehiste’s analysis of PIE *nisdo > Sanskrit /niːḍa-/ (as presented in Hock 1986: 435)
     nisd-
     nizd-  voicing assimilation
     niẓd-  retroflexion
     niẓḍ-  retroflex assimilation
     nijḍ-  gliding
     niːḍ-  contraction

In noting the similarity in analyses, Hock (1986: 435) points out that what distinguishes Jeffers and Lehiste’s analysis from de Chene and Anderson’s is the fact that the former is “not proposed as explanations for all cases traditionally labeled loss with compensatory lengthening, but only for a certain subset, however poorly that subset may be defined.” (Note that Kavitskaya 2002: 38 incorrectly interprets this sentence as referring to de Chene and Anderson 1979, rather than to Jeffers and Lehiste 1979.) Hock (1986: 435) continues by saying that he is:

    ready to concede that many instances of what has traditionally been called loss with compensatory lengthening may well be ambiguous, and can be analyzed either as weakening-cum-assimilation or as cases of loss-with-mora-retention. However, in light of the fact that there are [. . .] cases of loss with CL which cannot be explained in terms of weakening-cum-assimilation, any theory which recognizes only the latter process must be considered insufficient.

One of the examples provided by Hock that is not amenable to explanation under de Chene and Anderson’s non-conservation approach is from Icelandic, and is provided in (21).

(21) Compensatory lengthening in Icelandic (Hock 1986: 442)
     *liugan  ‘lie’        > ljuːga
     *keosan  ‘choose’     > kjoːsa
     (*)þriar ‘three-fem’  > þrjaːr
     (*)seːan ‘see’        > sjaː

In this case, which is similar to the Ganda case at the end of §1.4, there is weakening to a glide, but the weakening affects a preceding vowel rather than a following consonant, and no monophthongization is involved. Hock’s (1986: 435) reproach of de Chene and Anderson (1979) – that they propose an alternative explanation for all cases traditionally called compensatory lengthening while neglecting to treat all types – extends to other cases as well. For example, while de Chene and Anderson are aware of CVCV compensatory lengthening, they choose not to discuss it (1979: 506, n. 1).

2.4 Kavitskaya (2002)

Kavitskaya (2002) puts forth a model of compensatory lengthening that can be considered a radical departure from previous treatments, in that it assumes the process to be entirely listener-oriented (see chapter 98: speech perception and phonology). In this respect, her model is representative of the overall approach to phonological change espoused in Blevins (2004, 2006), Evolutionary Phonology. This is a model that rejects any explanations for historical phenomena that involve the synchronic phonologies of speakers (e.g. by assuming a role for phonological rules or markedness constraints) when there is an alternative, diachronic explanation available. This essentially removes the speaker from the story of phonological change, except as a source of variation from which potential changes may or may not take root through “innocent misperception” on the part of the listener. This variation is constrained by speaker-specific anatomical differences, and within the speech of a given speaker, due to phonetic transforms of speech dependent (at least) on: rate of speech; degree of physical effort involved; and the humanly physical impossibility of making exactly the same sound twice. (Blevins 2006: 125–126)


According to Kavitskaya’s (2002) listener-oriented account of compensatory lengthening:

    diachronic CL through consonant loss [CVC > CVː] ultimately has its origin in the phonetic lengthening of vowels in the environment of neighboring consonants; the subsequent loss of a consonant conditioning such length causes the length to be re-analyzed as phonological. (Kavitskaya 2002: 8)

Further, according to Kavitskaya, with respect to diachronic compensatory lengthening through vowel loss [CVCV > CVːC]:

    Prior to the deletion of the final vowel, the longer vowel duration characteristic of open syllables is correctly parsed by listeners as a phonetic consequence of syllable structure in the first syllable of a CVCV sequence, and is discounted [. . .] Upon deletion of the final vowel, however, the duration of the vowel in the newly-closed syllable becomes inexplicable, since it is longer than is expected in the closed syllable. (Kavitskaya 2002: 9)

If Kavitskaya’s arguments are right, then compensatory lengthening is not really compensatory in nature. For the process to be truly compensatory, it would have to rely on a role for the speaker, as is assumed at least implicitly in all other models of compensatory lengthening.

In so far as Kavitskaya (2002) is representative of the Evolutionary Phonology framework proposed in Blevins (2004, 2006), it is susceptible to the general criticisms that have been leveled against that framework. Lindblom (2006) criticizes the Evolutionary Phonology framework for its reliance on so-called “extraphonological” explanations over phonological accounts. According to Blevins (2006: 20), “principled extra-phonological explanations for sound patterns have priority over competing phonological explanations unless independent evidence demonstrates that a purely phonological account is warranted.” Lindblom takes exception to this stance on the grounds that it highlights a “phonetics/phonology split” and traps the framework in “the conceptual prison of the form/substance distinction” (2006: 242). As the title of his response to Blevins (2006) declares very loudly, Lindblom (2006) rejects the phonetics/phonology split. Lindblom admonishes us to:

    Deduce sound structure from language use. Anchor theory construction in the universal conditions under which all speech communication must take place. Start from “first principles” and not circularly from the data to be explained (cf. “markedness”). At the level of the individual user, model phonological structure, not as autonomous form, but as an emergent organization of phonetic substance acquired by each native speaker in the context of socially shared, ambient knowledge. At the population level, model this knowledge as a use- & user-dependent process that undergoes change along the historical time scale. Get rid of the distinction between “phonological” and “extraphonological.” Here is a key step: Make the “intrinsic content” an integral part of the theory from scratch. Treat “intrinsic content” as the source that helps generate discrete structure and that constrains both synchronic and diachronic phonological patterning. (2006: 243)

In §4, we will explore how a rejection of the phonetics/phonology split might be helpful in accounting for the many types of compensatory lengthening as a unified phenomenon. We conclude this section by looking at an empirical challenge to Kavitskaya’s listener-oriented approach – a synchronic case of compensatory lengthening that suggests an explanation in terms of speaker-controlled behavior.

McRobbie-Utasi (1999) provides evidence for a synchronic case of compensatory lengthening that is apparently speaker-controlled and that suggests the relevance of a principle of isochrony in a synchronic production grammar. In an acoustic analysis of quantity in Skolt Saami, McRobbie-Utasi shows a clear connection between the distribution of duration in V1, intervening C, and V2 sequences in disyllabic groups, and a phonological process realized as “an optional rule that either reduces word-final short vowels or deletes them” (1999: 111). Deletion of the final vowel is a feature of casual speech. The relevant optional rule is shown in (22).

(22) Word-final vowel deletion in Skolt Saami (McRobbie-Utasi 1999: 111)
     Vowel deletion rule: V → Ø / __ #
     A word-final vowel is optionally deleted in Type 1–5 disyllabics.

It is important to note that in the V1, intervening C, and V2 sequences, the intervening C can be long and ambisyllabic (in four of the five types mentioned), or short and affiliated as the onset of the second syllable. The principal relevant passage from McRobbie-Utasi (1999) is shown below, where the “stress-group locations” referred to are the V1 and following C in the relevant sequences, V2 constituting a third “stress-group location.” According to McRobbie-Utasi (1999: 114–115):

    From the [. . .] measurements an important tendency can be deduced: namely, that the presence or absence (or reduced duration) of the vowel in the second syllable has clear consequences for the distribution of duration in the first syllabic vowel and the consonant(s) following it. Thus, an increase in duration takes place as a result of compensatory lengthening. It will be recalled that second syllabic vowel durations were constant in all the structural types once they were realized as full vowels, with an average of 87 msec [. . .]; also, that durations signaling differences between the structural types and/or gradation types are manifested in the first syllabic vowel and the consonant(s) following it. The fact that the presence or absence of the second syllabic has a considerable effect on these durational distributions in the segments preceding has important implications. The durational changes noticeable in these two stress-group locations (i.e. first syllabic vowel and the consonant(s) following) must be recognized as exemplifying the phenomenon of compensatory lengthening. The absence or reduced status of the second syllabic vowel results in an increase of duration in both of the stress-group locations referred to above.

Compensatory lengthening in Skolt Saami, triggered by the reduction or deletion of a final vowel, affects both the preceding vowel and consonant in four of five types (those in which the consonant is long and ambisyllabic), and the lengthening that occurs does so in a way that precisely preserves the overall V/C ratio. In the remaining type, in which the consonant is short and syllabified as the onset of the second syllable, “reduced duration of the second syllabic vowel results in compensatory lengthening in the first syllabic vowel only. There is practically no durational increase in the consonant following this vowel” (McRobbie-Utasi 1999: 118).

It is difficult, although perhaps not impossible, to reconcile McRobbie-Utasi’s (1999) findings with a listener-based approach. Although McRobbie-Utasi’s study involved only two speakers of Skolt Saami, their behavior with respect to the 550 test words used (recorded three times by both speakers, for a total of 3079 usable tokens) was remarkably consistent. Nor do the types of sequences involved lend themselves readily to Kavitskaya’s line of explanation for CVCV compensatory lengthening, since they do not involve (except for Type 3) phonetically lengthened vowels in open syllables. (As expected, V1 in Type 3 sequences is longer than in other types, both when V2 is fully realized and when it is not.) Nor has any re-analysis occurred (whatever that might look like, given the sequences involved and their variety [five types]), since the trigger for compensatory lengthening is still synchronically recoverable. Rather, it appears that the speakers are guided directly or indirectly by a principle of isochrony with respect to the disyllabic group. Other empirical problems for Kavitskaya’s approach, from historical French (manifesting types 1A and 1C), are discussed in Gess (forthcoming).
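The isochrony idea at work here can be made concrete with a small sketch. The durations below are invented for illustration (only the 87 msec average for a full V2 is McRobbie-Utasi’s figure): when V2 is lost, its duration is redistributed over V1 and the following consonant in proportion to their own durations, which leaves the V1/C ratio intact, as her measurements suggest.

    # Sketch of duration redistribution under isochrony (illustrative values;
    # only the 87 ms average full-V2 duration is McRobbie-Utasi's figure).
    def redistribute(v1, c, v2_lost):
        """Spread the lost V2 duration over V1 and C, preserving v1:c."""
        total = v1 + c
        return v1 + v2_lost * v1 / total, c + v2_lost * c / total

    v1, c = 120.0, 180.0                       # hypothetical durations (ms)
    new_v1, new_c = redistribute(v1, c, 87.0)  # V2 deleted in casual speech
    print(round(new_v1), round(new_c))         # -> 155 232
    print(round(v1 / c, 3), round(new_v1 / new_c, 3))  # -> 0.667 0.667

A misperception-based account has no obvious way to derive this proportional redistribution, since nothing has been lost from the listener’s parse until the rule actually applies.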

3 A putative constraint on compensatory lengthening

This section briefly explores the second claim made by de Chene and Anderson (1979): that compensatory lengthening can only occur in a language with a pre-existing vowel length contrast – i.e. that it is strictly structure-preserving (chapter 76: structure preservation: the resilience of distinctive information). This issue is discussed in detail in Gess (1998), which treats the very data from Old French on which de Chene and Anderson base their claim, thus adding a particularly severe blow to a claim already questioned in other work (for example, in Hock 1986, Hayes 1989, Morin 1994, and Lin 1997, as well as in two further cases discussed more recently in Beltzung 2008: 20–21). According to de Chene and Anderson (1979: 517), “a necessary condition for the development of contrastively long vowels through monophthongization is the independent existence of a length contrast in the language.”

With respect to historical French, de Chene and Anderson compare two distinct processes (in the ninth and sixteenth centuries) of monophthongization of the diphthong [aw]. At the earlier stage, the resulting monophthong [o] was short. However, at the later stage, the outcome was the long vowel [oː]. (Strangely, de Chene and Anderson (1979: 519) also suggest a sixteenth-century date for the loss of preconsonantal [l] – the same century in which they contend that monophthongization of the vowel + glide sequence resulting from its loss had occurred. However, Gess (1999) provides strong evidence for a much earlier date for the loss of syllable-final [l], after the latter part of the eleventh century – and many scholars assume a much earlier date still.) The difference in outcomes in the monophthongization of derived [aw] was due, according to de Chene and Anderson, to the introduction of vowel length into the language via the loss of intervocalic consonants, in the late ninth and early tenth centuries (1979: 521). This introduction of vowel length also allowed for compensatory lengthening, according to de Chene and Anderson, following the loss (through an intermediate stage as a glide) of syllable-final [z], [s], and nasals.


Loss of the latter is incorrectly dated by de Chene and Anderson in the sixteenth century, while loss of the former, [z] and [s], is dated in the twelfth and thirteenth centuries. According to Gess (1999), loss of nasal consonants dates from the thirteenth century, and loss of [z] and [s] dates from the eleventh to the thirteenth centuries. De Chene and Anderson (1979: 522) make the following claim with respect to the establishment of long vowels in Old French:

    There is a solid body of long vowels, however, that were established by 1100 through deletion of the consonant in original VᵢCVᵢ sequences. In these cases, no leveling or assimilation being necessary, a long vowel is the automatic result of loss of the consonant.

They go on to provide a list of several words illustrating the relevant consonant loss and the resulting putative long vowels. However, Gess (1998) found each of the forms listed by de Chene and Anderson in twelfth- and thirteenth-century Old French poetry and, in each case, the forms are clearly treated as consisting of two syllables. Gess (1998: 358) “found many other examples of orthographic geminate vowels in 12th and 13th-century Old French poetry, all of which are treated as bisyllabic.” The fact that sequences of two vowels were still counted as bisyllabic in the thirteenth century, when the loss of [z] at the very least had occurred, with compensatory lengthening, shows that a pre-existing vowel length contrast in the language was not a prerequisite for compensatory lengthening to take place. Rather, Hayes’s (1989) assumption is likely the right one: that a syllable weight distinction in the language in question is necessary and, crucially, sufficient for compensatory lengthening to take place. Gess (1998: 364) points out that from an optimality-theoretic perspective this would be

    a rather unsurprising consequence of the general principle of minimal violation, in this case of faithfulness to the input. While a given constraint ranking may allow for the erosion of segmental features, it may still protect prosodic structure.

4 Assessment and recent directions

§2 outlined various approaches to compensatory lengthening: a phonetic conservation approach, a phonological conservation approach, a non-conservation approach, and a listener-based approach. We saw that the non-conservation approach, proposed only in the context of CVC compensatory lengthening, is basically untenable, both because it fails to account for any other type of compensatory lengthening and because there are instances of CVC compensatory lengthening that appear not to be decomposable into the stages suggested by de Chene and Anderson (1979). This leaves us with two conservation approaches, both suggestive of a speaker-based process, and a listener-based approach. We have noted problems with each of these approaches, which I will summarize briefly here.

We have observed that the phonetic conservation approach proposed in Timberlake (1983) is most relevant to those instances of compensatory lengthening that are gradient in nature and that may be characterized as changes in progress. This approach seems ill suited for dealing with synchronic cases of compensatory lengthening that involve complete loss of the trigger. Without some refinement, the approach also has difficulty with CVC cases of compensatory lengthening, given the assumption implicit in its formalism that moras associated with consonants are of equal duration to those associated with vowels.

With respect to the synchrony/diachrony dichotomy, the phonological conservation approach suffers the opposite problem from the phonetic conservation approach. That is, while it can account for most, if not all, cases of synchronic compensatory lengthening, it is not ideally suited to account for compensatory lengthening as a gradual diachronic process or as a process that involves a trigger that is only reduced and not entirely lost. Other criticisms of the phonological conservation approach relate to formalisms (e.g. the motivation for parasitic delinking in a rule-based approach and the mechanisms required for dealing with opacity in an OT approach) and to empirical weakness (its inability to account for compensatory lengthening involving non-moraic segments).

The listener-based approach is explicitly an historical approach. That is, it aims to account only for the diachronic development of compensatory lengthening and does not attempt to model synchronic instantiations of the process. For synchronic cases, it is compatible with (but also limited by) the formalisms required by the phonological conservation approach, so long as the process involves a trigger that is synchronically recoverable (see Kavitskaya 2002: ch. 5). The principal problem with the listener-based approach is its inability to account for cases that do not lend themselves to re-analysis via misperception. One such case is the synchronic process in Skolt Saami, which involves a prevocalic VC complex as the target, with lengthening affecting the complex as a whole, and with V/C ratios precisely maintained. Also problematic for the listener-based approach is left-to-right CVCV compensatory lengthening (1D), which is why it is so important for Kavitskaya to dismiss such cases as instances of non-compensatory, rhythmic lengthening. Further research is necessary to shed light on this particular issue. The listener-based approach will also have difficulty accounting for left-to-right compensatory lengthening processes in which both target and trigger are consonants (2A). Note that this difficulty will obtain for an Evolutionary Phonology inspired approach whether one labels the process as compensatory lengthening or as total assimilation, since the second consonant in such sequences will normally be perceptually stronger. Further challenges for the listener-based approach come from the compensatory lengthening of consonants triggered by either a following vowel (2C) or a preceding one (2D). Whether the approach can be developed sufficiently to meet these challenges will be interesting to see.

It may be worthwhile for future work to explore the compatibility of the different approaches to compensatory lengthening. Note, for example, that a phonetically based speaker-oriented analysis neither denies a role for the listener, nor necessarily discounts phonological (e.g. moraic) structure. Timberlake (1983), for example, makes clear reference to the mora as the “full duration” of a vowel.
Hock (1986: 432), who does not cite Timberlake (1983), also points out that “at least some traditional historical linguists have offered a phonetic explanation of CL in terms of the concept ‘mora’.” Note that mora-based phonological approaches do not necessarily deny a gradual, phonetic aspect either, at least those prior to autosegmentalist accounts. Minkova (1982) provides a clear phonological account of compensatory lengthening in Middle English, based on syllables and rhythmic units (metrical feet) “described [. . .] with reference to their phonological/moric composition” (1982: 48), but is careful to “complete the picture by adding some considerations of purely phonetic nature” (1982: 50). Minkova also touts as an advantage of her revised environment for Middle English Open Syllable Lengthening the fact that “it is the only way in which the shift from allophonic to phonemic length of the stressed vowel can be accounted for” (1982: 51). Hock (1986: 434) cites the “striking extent” to which historical evidence coincides with “fine-phonetic” experimental data. Indeed, Hock goes even further (1986: 445), citing the apparent fact “that CL may set in before the complete loss of a segment, simply as the result of TC [temporary compensation] for the reduction of the segment” (emphasis in the original). An important consequence of this is that:

    the situation just described requires an important modification of the notion “mora”: Rather than referring to a temporal unit measurable in terms of segment length, it must – at least for CL – be permitted to refer to time spans which are fractions of ordinary segment length. (Hock 1986: 445)
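Hock’s fractional mora can be stated compactly; the following is our own sketch (the fraction f is our notation, not Hock’s), applied to the Tunisian Arabic configuration in (16)–(17):

    \[
      d(\text{ʔ}) = f\mu, \qquad d(V) = (2 - f)\mu, \qquad 0 \le f \le 1
    \]
    \[
      d(V) + d(\text{ʔ}) = 2\mu \ \text{at every stage}
      \quad (f = 1\text{: short } V \text{ plus moraic ʔ}; \ f = 0\text{: } V\text{ː, with ʔ lost})
    \]

The bimoraic total is preserved continuously as f shrinks, which is just what a gradual, speaker-controlled implementation of mora conservation requires.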

This view of the “mora” is entirely in keeping with an analysis along the lines of the one proposed in Timberlake (1983). It is also in keeping with the spirit of Lindblom’s (2006) view of phonological structure as non-autonomous and emergent from phonetic substance, at least if we assume both that the mora is an abstract temporal unit and that reference is permitted to time spans that may be fractions thereof.

A rejection of the phonetics/phonology split with respect to the mora may be the only way to achieve real explanatory adequacy with respect to compensatory lengthening. It allows us to explain the gradual nature of compensatory lengthening – a clearly phonetic aspect of the process. On the phonological side, it also accounts for the fact that CVC compensatory lengthening tends to occur mostly in languages with moraic consonants and for the fact that the process in general functions to preserve moraic structure. In conceptual terms, it is the phonological status of the mora, as an abstract unit of weight functioning in the grammar, that provides the motivation for preserving it when an associated segment is subject to gradual reduction and eventual elimination. On the other hand, it is the physical timing associated with moraic elements that guides the actual articulatory implementation of reduction with concomitant compensatory lengthening, a process that is gradual (and variable) in nature. Since all segments have physical timing associated with them, the only (unsurprising) assumption we have to make is that preservation of timing associated with weight-bearing units is generally privileged over the preservation of timing associated with units that do not bear weight.

Recent work by Topintzi (2006) and Beltzung (2008) demonstrates the continuing relevance of compensatory lengthening for phonological theorizing. It has directly tackled the problems that compensatory lengthening poses for OT, while managing to maintain the basic tenets of the framework. Both pieces of work must be categorized as phonological conservation approaches, both seek to expand the empirical coverage of previous approaches (notably to account for non-moraic consonant triggers of compensatory vowel lengthening), and, interestingly, both demonstrate the need for formal appeal to the preservation of segment positions in addition to moras. Both appear compatible, therefore, with a rejection of the phonetics/phonology split as described above (whether or not the authors themselves agree with such a rejection). Both also appear compatible with a potential complementary, phonetically based OT approach that might focus on the functional motivation for and phonetic implementation of non-categorical reduction with concomitant lengthening, as it has for assimilation (Jun 1995) and lenition (Kirchner 1998, 2004; Gess 2003, 2004, 2009) (again, whether or not the authors themselves agree with such a move, which is not part of the OT orthodoxy). It seems safe to conclude that compensatory lengthening will continue to be a topic of some interest in the phonological community.

REFERENCES

Abbott, Miriam. 1991. Macushi. In Desmond C. Derbyshire & Geoffrey K. Pullum (eds.) Handbook of Amazonian languages, vol. 3, 23–160. Berlin & New York: Mouton de Gruyter.
Beltzung, Jean-Marc. 2008. L’allongement compensatoire dans les représentations phonologiques: Nature, contraintes et typologie. Ph.D. dissertation, Université Paris III, Sorbonne-Nouvelle.
Blevins, Juliette. 2004. Evolutionary Phonology: The emergence of sound patterns. Cambridge: Cambridge University Press.
Blevins, Juliette. 2006. A theoretical synopsis of Evolutionary Phonology. Theoretical Linguistics 32. 117–166.
Brockelmann, Carl. 1908. Grundriss der vergleichenden Grammatik der semitischen Sprachen, vol. 1. Hildesheim: Georg Olms.
Carson, Neusa M. 1981. Phonology and morphosyntax of Macuxi (Carib). Ph.D. dissertation, University of Kansas.
Chene, Brent de & Stephen R. Anderson. 1979. Compensatory lengthening. Language 55. 505–535.
Clements, G. N. 1986. Compensatory lengthening and consonant gemination in LuGanda. In Wetzels & Sezer (1986), 37–77.
Clements, G. N. & Samuel J. Keyser. 1983. CV phonology: A generative theory of the syllable. Cambridge, MA: MIT Press.
Fox, Anthony. 2000. Prosodic features and prosodic structure: The phonology of suprasegmentals. Oxford: Oxford University Press.
Gess, Randall. 1998. Compensatory lengthening and structure preservation revisited. Phonology 15. 353–366.
Gess, Randall. 1999. Rethinking the dating of Old French syllable-final consonant loss. Diachronica 16. 261–296.
Gess, Randall. 2003. On re-ranking and explanatory adequacy in a constraint-based theory of phonological change. In D. Eric Holt (ed.) Optimality Theory and language change, 67–90. Dordrecht: Kluwer.
Gess, Randall. 2004. Phonetics, phonology and phonological change in Optimality Theory: Another look at the reduction of three-consonant sequences in Late Latin. Probus 16. 21–41.
Gess, Randall. 2009. Reductive sound change and the perception/production interface. Canadian Journal of Linguistics 54. 229–253.
Gess, Randall. Forthcoming. Compensatory lengthening in Old French: The role of the speaker. In Deborah Arteaga (ed.) Old French: The state of the research. Dordrecht: Kluwer.
Gildea, Spike. 1995. A comparative description of syllable reduction in the Cariban language family. International Journal of American Linguistics 61. 62–102.
Goldrick, Matthew. 2001. Turbid output representations and the unity of opacity. Papers from the Annual Meeting of the North East Linguistic Society 30(1). 231–245.

Gordon, Matthew. 1999. Syllable weight: Phonetics, phonology, and typology. Ph.D. dissertation, University of California, Los Angeles.
Hawkins, W. Neil. 1950. Patterns of vowel loss in Macushi (Carib). International Journal of American Linguistics 16. 87–90.
Hayes, Bruce. 1989. Compensatory lengthening in moraic phonology. Linguistic Inquiry 20. 253–306.
Hayes, Bruce & Aditi Lahiri. 1991. Bengali intonational phonology. Natural Language and Linguistic Theory 9. 47–96.
Hock, Hans Henrich. 1986. Compensatory lengthening: In defense of the concept “mora.” Folia Linguistica 20. 431–460.
Holthausen, Ferdinand. 1886. Die Soester Mundart. Norden & Leipzig: Soltau.
Hyman, Larry M. 1984. On the weightlessness of syllable onsets. Proceedings of the Annual Meeting, Berkeley Linguistics Society 10. 1–14.
Hyman, Larry M. 1985. A theory of phonological weight. Dordrecht: Foris.
Jeffers, Robert J. & Ilse Lehiste. 1979. Principles and methods for historical linguistics. Cambridge, MA: MIT Press.
Jensen, John T. 1977. Yapese reference grammar. Honolulu: University Press of Hawaii.
Jun, Jongho. 1995. Perceptual and articulatory factors in place assimilation: An Optimality Theoretic approach. Ph.D. dissertation, University of California, Los Angeles.
Kager, René. 1997. Rhythmic vowel deletion in Optimality Theory. In Iggy Roca (ed.) Derivations and constraints in phonology, 463–499. Oxford: Clarendon.
Kálmán, Béla. 1972. Hungarian historical phonology. In Loránd Benkő & Samu Imre (eds.) The Hungarian language, 49–83. The Hague: Mouton.
Katsanis, Nikolaos. 1996. To glossiko idioma tis Samothrakis [The dialect of Samothraki Greek]. Dimos Samothrakis [Municipality of Samothraki].
Kavitskaya, Darya. 2002. Compensatory lengthening: Phonetics, phonology, diachrony. London & New York: Routledge.
Kiparsky, Paul. 2000. Opacity and cyclicity. The Linguistic Review 17. 351–365.
Kirchner, Robert. 1998. An effort-based approach to consonant lenition. Ph.D. dissertation, University of California, Los Angeles. Published 2001, New York & London: Routledge.
Kirchner, Robert. 2004. Consonant lenition. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 313–345. Cambridge: Cambridge University Press.
Lehiste, Ilse. 1970. Suprasegmentals. Cambridge, MA: MIT Press.
Lehiste, Ilse. 1977. Isochrony reconsidered. Journal of Phonetics 5. 253–263.
Levin, Juliette. 1985. A metrical theory of syllabicity. Ph.D. dissertation, MIT.
Lin, Yen-Hwei. 1997. Syllabic and moraic structures in Piro. Phonology 14. 403–436.
Lindblom, Björn. 2006. Rejecting the phonetics/phonology split. Theoretical Linguistics 32. 237–243.
Lipiński, Edward. 2001. Semitic languages: Outline of a comparative grammar. 2nd edn. Leuven: Peeters.
Lowenstamm, Jean & Jonathan Kaye. 1986. Compensatory lengthening in Tiberian Hebrew. In Wetzels & Sezer (1986), 97–132.
McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT.
McCarthy, John J. 2003. Sympathy, cumulativity, and the Duke-of-York gambit. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 23–76. Cambridge: Cambridge University Press.
McCarthy, John J. 2005. Candidate chains. Paper presented at the 2nd Old World Conference on Phonology, University of Tromsø.
McCarthy, John J. & Alan Prince. 1986. Prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Brandeis University.
McRobbie-Utasi, Zita. 1999. Quantity in the Skolt (Lappish) Saami language: An acoustic analysis. Bloomington: Indiana University.


Minkova, Donka. 1982. The environment for open syllable lengthening in Middle English. Folia Linguistica Historica 3. 29–58.
Morin, Yves-Charles. 1994. Phonological interpretations of historical lengthening. In Wolfgang U. Dressler, Martin Prinzhorn & John R. Rennison (eds.) Phonologica 1992, 135–155. Turin: Rosenberg & Selier.
Pessoa, Katia N. 2006. Fonologia Taurepang e comparação preliminar da fonologia de línguas do grupo Pemóng (família Caribe). M.A. thesis, Universidade Federal de Pernambuco.
Pessoa, Katia N. 2009. O acento rítmico na língua Taurepang (família Karíb). Guavira 9. 114–126.
Pope, Mildred K. 1952. From Latin to Modern French with especial consideration of Anglo-Norman: Phonology and morphology. Manchester: Manchester University Press.
Poser, William J. 1986. Japanese evidence bearing on the compensatory lengthening controversy. In Wetzels & Sezer (1986), 167–186.
Sezer, Engin. 1986. An autosegmental analysis of compensatory lengthening in Turkish. In Wetzels & Sezer (1986), 227–250.
Shaw, Jason. 2009. Compensatory lengthening via mora preservation in OT-CC: Theory and predictions. Papers from the Annual Meeting of the North East Linguistic Society 38. 323–336.
Shishkov, Peter. 2002. Elision of unstressed vowels in the Erkech dialect. Retrieved from http://escholarship.org/uc/item/7tm865v7.
Sprouse, Ronald. 1997. A case for enriched inputs. Ms. (ROA-193.)
Stockman, Gerard & Heinrich Wagner. 1965. Contributions to a study of Tyrone Irish. Lochlann 3. 43–236.
Timberlake, Alan. 1983. Compensatory lengthening in Slavic 2: Phonetic reconstruction. In M. S. Flier (ed.) American Contributions to the 9th International Congress of Slavists, vol. 1: Linguistics, 293–319.
Topintzi, Nina. 2006. A (not so) paradoxical instance of compensatory lengthening. Journal of Greek Linguistics 7. 71–119.
Wetzels, W. Leo. 1986. Phonological timing in Ancient Greek. In Wetzels & Sezer (1986), 279–344.
Wetzels, W. Leo & Engin Sezer (eds.) 1986. Studies in compensatory lengthening. Dordrecht: Foris.

65 Consonant Mutation

Janet Grijzenhout

1 Introduction

The phenomenon of “consonant mutation” occurs in a wide array of unrelated languages and comprises changes that are also known as “consonant weakening” (or lenition), “consonant strengthening” (or fortition), and “nasalization.” In this chapter, “consonant mutation” will be defined as a change in one phonetic property of a consonant that affects its degree of sonority and that does not depend on the position of the consonant within a prosodic domain (i.e. neutralization and enhancement phenomena are excluded), nor on the position immediately adjacent to a segment with which it forms a natural class (i.e. progressive and regressive voicing and place assimilations are not regarded as instances of “consonant mutations”). More specifically, the term “consonant mutation” refers to a class of processes by which a consonant turns into a segment with a different degree of voicing, continuancy, or nasality that is not due to neutralization or assimilation to a neighboring segment of the same natural class.1 Some types of consonant mutation can be described as alternations that take place in a particular phonological environment; for instance, an oral stop may turn into a fricative between a sonorant consonant and a vowel. Other types of consonant mutation take place in a certain morphological or lexical context; for example, stem-initial oral stops are realized as continuants under certain morphosyntactic conditions, while continuants are deleted or realized as a laryngeal sound under the same conditions in Modern Irish (e.g. Ní Chiosáin 1991; chapter 117: celtic mutations). The interesting aspect of consonant mutation in general is a diachronic one: what starts out as a purely phonological alternation induced by neighboring segments may gradually turn into a morphological alternation for which the phonological context is no longer transparent (chapter 93: sound change). In the course of this chapter, we will encounter various examples of such developments.

1 Note that changes in voicing, continuancy, or nasality make a consonant either more or less sonorant. Consonant mutation processes thus have in common that they alter a consonant’s degree of sonority.



Typically, mutations are “scalar.” In the languages of the world, we find consonant mutations where a consonant’s degree of stricture decreases (e.g. in Archaic Irish and Finnish an underlying geminate stop is realized as a singleton stop, while an underlying singleton stop is realized as a continuant sound) or increases. Another example of a scalar mutation is one in which a consonant’s degree of laryngeal specificity and/or nasality increases (e.g. in Old Irish, an aspirated voiceless stop is realized as an unaspirated one in the same context in which an unaspirated oral stop is realized as a prenasalized stop). This chapter first discusses possible consonant alternations in more detail (§2). As examples of languages that have relatively many types of consonant mutations (i.e. spirantization, gemination, nasalization, and/or prenasalization), we discuss Southern Paiute (§3) and Fula (§4). Balto-Finnic, Sami, and some Australian languages show scalar mutations (§5). §6 points out the merits and drawbacks of some theoretical accounts of consonant mutation that exist in phonological literature. §7 summarizes the discussion.

2 Consonant alternations within prosodic and morphological domains

Consonants are highly adaptable elements that may change their properties for a variety of reasons. This section will focus on some phonological and morphological environments that may trigger a change in one phonetic property of a consonant. We start with consonant alternations that are characterized by the fact that a phonological opposition is neutralized in a certain prosodic environment, viz. final devoicing and debuccalization. We will also briefly consider consonant alternations that occur at the left edge of a prosodic domain. Next, five consonant alternations that are not triggered merely by a prosodic environment (i.e. that are independent of the position within a prosodic domain) and that fall under the rubric of “consonant mutation” are introduced: (a) stopping (Soninke), (b) obstruent voicing (Burmese), (c) spirantization or fricativization (Djapu; the first stage of Grimm’s Law), (d) devoicing (the second stage of Grimm’s Law), and (e) deaspiration (the third stage of Grimm’s Law). Other types of consonant mutation that are frequently encountered in languages – both diachronically and synchronically – are gemination, degemination, and (pre)nasalization. The latter phenomena will be discussed in later sections.

Many consonant alternations are characterized by the fact that a phonological opposition is neutralized in a certain prosodic environment. In a variety of unrelated languages, e.g. Catalan, Czech, Dutch, German, Ojibwa, Polish, Russian, Turkish, and Wolof, we find that the opposition between voiced and voiceless obstruents is neutralized in one particular environment only, viz. at the end of a prosodic domain (usually the syllable; see Brockhaus 1995 and chapter 69: final devoicing and final laryngeal neutralization). In other languages, place of articulation contrasts are neutralized at the end of a prosodic domain – usually the syllable coda – and this phenomenon is known as “debuccalization” (chapter 80: mergers and neutralization). In some generative frameworks, alternations at the end of a prosodic domain are described as processes where consonants “lose” their underlying marked specification for laryngeal features or place of articulation features.2 Conversely, in other positions of the word, consonants may become reinforced phonetically, e.g. by initial aspiration (which is seen as a form of enhancement, for instance, by Keyser and Stevens 2006: 42ff.). Consonant alternations that involve neutralization of an opposition in a particular prosodic context (e.g. the contrast between voiced and voiceless obstruents in Dutch and German is neutralized at the end of a prosodic domain), or that involve adding a feature to enhance an opposition in a particular prosodic domain (e.g. the contrast between voiced and voiceless stops is enhanced by adding aspiration for the voiceless plosives at the left word edge in English) do not fall under the category of “consonant mutations” as understood here. In the examples in (1), stem-final consonants optionally change their laryngeal properties to become more similar to the neighboring consonants within a phonological phrase (e.g. Berendsen 1983 for Dutch) and in (2) the consonant /n/ changes its place of articulation to the same place of articulation as the following obstruent (chapter 81: local assimilation).

(1) Regressive laryngeal assimilation in obstruents

a. Dutch
   zeep + doos → zee[bd]oos   soap + box      'soap-box'
   kas + boek  → ka[zb]oek    cash + book     'cash-book'
   zak + doek  → za[gd]oek    pocket + cloth  'handkerchief'

b. Hungarian
   zseb + kendő → zse[pk]endő  pocket + cloth  'handkerchief'

(2) Regressive place assimilation of n- (marker of noun classes 9 and 10) in Kisukuma (data from Batibo 2000: 169)

   n + buli → mbuli   'goat'
   n + dama → ndama   'calf'
   n + guzu → ŋguzu   'strength, energy'

Cases where consonants change a laryngeal or place feature under the influence of an adjacent consonant within a certain prosodic domain are most commonly referred to as “assimilations” rather than “mutations,” and these processes are relatively easy to describe in theoretical frameworks, e.g. in autosegmental theory as spreading of laryngeal or place features or in Optimality Theory (OT) as phenomena that are the result of ranking Agree[feature] and *[αfeature] constraints higher than the corresponding Ident[feature] constraint (e.g. Lombardi 1996). The reason for assimilation is not to increase or decrease the degree of sonority of a segment, but rather to become “more similar” with respect to laryngeal or place properties to an immediately adjacent obstruent (in the Dutch or Hungarian examples) or a stop (in Kisukuma).

2 Note that the assumption about laryngeal neutralization being a case of “weakening” in the sense that the consonant “loses” an underlying feature is highly controversial, as can be seen, for example, by the German terminology Auslautverhärtung (“final hardening”) for syllable-final devoicing. Foley (1970), for instance, claims that a change from voiced to voiceless obstruent should be considered a case of strengthening (“fortition”) rather than weakening (“lenition”). For further discussion on the issue of what exactly constitutes strengthening or weakening, I refer to chapter 66: lenition.
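The ranking logic can be made concrete with a toy evaluation procedure. The Python sketch below is illustrative only: the constraint definitions, the segment classes, and the candidate set are simplified stand-ins rather than Lombardi's (1996) formalization, but it shows how ranking Agree[voice] above Ident[voice] selects the assimilated Dutch form in (1).

```python
# A minimal sketch of OT-style evaluation for the Dutch data in (1).
# Constraints are encoded as violation-counting functions; candidates are
# compared lexicographically, i.e. by the ranked violation profile.

VOICED = set("bdgzv")
OBSTRUENTS = set("ptkbdgszfv")

def agree_voice(inp, cand):
    """Agree[voice]: one mark per obstruent pair disagreeing in voicing."""
    return sum(1 for c1, c2 in zip(cand, cand[1:])
               if c1 in OBSTRUENTS and c2 in OBSTRUENTS
               and (c1 in VOICED) != (c2 in VOICED))

def ident_voice(inp, cand):
    """Ident[voice]: one mark per segment whose voicing differs from the input."""
    return sum((a in VOICED) != (b in VOICED) for a, b in zip(inp, cand))

def winner(inp, candidates, ranking):
    """The optimal candidate minimizes the tuple of ranked violation counts."""
    return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

# Agree[voice] >> Ident[voice]: /zak + doek/ surfaces as za[gd]oek.
# (The progressive candidate 'zaktoek' ties on these two constraints;
# deriving the regressive direction is precisely Lombardi's concern.)
print(winner("zakdoek", ["zakdoek", "zagdoek", "zaktoek"],
             [agree_voice, ident_voice]))
```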


The set of data below illustrates a process whereby the oral stricture of initial consonants of nouns increases, i.e. voiceless fricatives or voiced continuants are realized as oral stops (3a), /l/ (3b), or nasal stops (3c) after nasal segments:3

(3) Consonant alternations after nasals in Soninke

a. fare   'donkey'    n pare    'my donkey'
   si     'horse'     n tʃi     'my horse'
   xore   'charcoal'  n gore    'my charcoal'
b. raqqe  'mouth'     n laqqe   'my mouth'
c. wulle  'dog'       n ŋulle   'my dog'
   jaaxe  'eye'       n ɲaaxe   'my eye'

In the examples in (3), all continuant consonants are affected and the trigger of the change is always the same, i.e. a nasal element. Intuitively, we may formulate the consonant alternation as a process that increases a consonant’s oral stricture after a nasal stop or nasalized vowel. Nevertheless, it is not easy to describe this process as one in which a single feature of the stem-initial consonant changes under the influence of a preceding nasal vowel or stop.4 Furthermore, some nouns (especially names) “resist” change and do not participate in the consonant alternation process. Moreover, under some morphological conditions, the nasal trigger is absent, but the change takes place nonetheless, for instance in some imperative forms, e.g. /pagu/ 'fill up!' and /tʃi/ 'shave!' (cf. /si/ 'to shave'). According to Kendall and Bird (1982), the language is thus in a transitional stage in which the phonologically triggered process of consonant change has developed into a process that is no longer purely phonological; there are exceptions as well as overapplications, i.e. cases where the consonantal change takes place without an overt phonological trigger.

We next consider another type of consonant alternation where the phonological context determines the shape of a consonant: in Burmese, voiceless stops are voiced in intervocalic position (4a), (4b) or following a nasal (4c).

(4) Intersonorant stop gradation in Burmese (data from Campbell 1995: 98–102)

a. hwa + pa        → hwàba      'please go'
b. hwa + tD + lu   → hwàdGlu    'the man who is going'
c. kauŋ + kauŋ     → kauŋgauŋ   'to be good'

3 Soninke is a Mande language spoken in West Africa. All Soninke data presented in this chapter are from Kendall and Bird (1982). The nasals in (3a) assimilate in place of articulation to the following segment. Kendall and Bird (1982: 1, 3) state that the same consonant alternation occurs both after nasal consonants and after nasalized vowels, e.g. /r/ in /ri/ 'to come' is realized as /l/ after a nasalized vowel in /nhli/ 'I came'.
4 In SPE (Chomsky and Halle 1968), we could formulate a rewrite rule [+continuant] → [−continuant] / [+nasal] __, with some additional rules to account for the fact that voiced continuants turn into a lateral or nasal stop. In autosegmental phonology, there is no single feature that nasal consonants and vowels have in common that could spread onto the following consonant. In OT, it is possible to formulate a constraint that bans continuant consonants after a nasal consonant or vowel (e.g. *[+nas][+cont, −voc]), but this constraint would be an ad hoc one and leaves open the possibility of other ad hoc constraints such as *[+strid][−cont] or *[+nas][lab], etc.
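For concreteness, the post-nasal alternations in (3) can be stated as a simple lookup over stem-initial consonants. The sketch below is illustrative only; the function name and the flat transcriptions are mine, and the place assimilation of the nasal prefix noted in footnote 3 is ignored.

```python
# The Soninke alternations in (3) as a lookup: stem-initial continuants
# harden after the first person singular nasal. Transcriptions follow (3).

MUTATE = {"f": "p", "s": "tʃ", "x": "g",   # (3a) continuants -> oral stops
          "r": "l",                         # (3b) r -> l
          "w": "ŋ", "j": "ɲ"}               # (3c) glides -> nasal stops

def possessed(stem):
    """'my X': prefix the nasal and harden the stem-initial continuant."""
    return "n " + MUTATE.get(stem[0], stem[0]) + stem[1:]

assert possessed("fare") == "n pare"    # 'my donkey'
assert possessed("jaaxe") == "n ɲaaxe"  # 'my eye'
```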


In the Australian language Djapu, stops are only realized as such in lexical words after obstruents and nasal stops. Thus the dative suffix /-ku/ appears as such in [buurutj-ku] 'mosquito'; when preceded by a vowel or a liquid, however, the labial and velar stops become a labial-velar glide (e.g. [ŋajmil-wu] 'Ngaymil clan') and the dental stop becomes a palatal glide (Morphy 1983).

Cases where consonants change in an intersonorant environment are most often referred to as “gradations” or “mutations.” When glides become fricatives, when underlying continuants become non-continuant segments (as in Soninke), or when singleton stops are geminated, the mutation is often referred to as “consonant hardening” or “fortition.” If the mutated consonant increases its degree of sonority, this type of mutation is often referred to as “consonant weakening” or “lenition.” The environment for gradation in Soninke can be characterized as “after a nasal”; gradation in Burmese is “in intervocalic position or between a nasal stop and a vowel,” whereas the environment for gradation in Djapu is “in intervocalic position or between a liquid and a vowel.” chapter 66: lenition provides more examples of intervocalic voicing and spirantization.

The interesting problem that consonant mutations pose for phonological theory is that they change the degree of sonority of a segment (chapter 49: sonority) and that they are not easily accounted for by means of autosegmental processes such as spreading, inserting, or deleting a phonological feature or class of features within a natural phonological context (chapter 14: autosegments). Neither is it easy to account for them by means of wellformedness constraints that correctly predict surface forms in non-mutation contexts and the corresponding alternating forms in mutation contexts. Consider in this respect that a possible constraint that penalizes intersonorant voiceless stops could be *[+son][−son, −voice][+son] (i.e. no voiceless obstruent in between two sonorants). This constraint would be violated in a potential output [hwàpa]. If the markedness constraint in question is ranked higher than the faithfulness constraint Ident[voice] – which, presumably, would be the case in Burmese – the output [hwàba] would win. Now consider the fact that the winning candidate in Burmese would be a form that is disallowed in Spanish (see e.g. Harris 1969). Whereas Spanish allows intervocalic voiceless stops – suggesting that *[+son][−son, −voice][+son] is low-ranked – it does not have output forms with intervocalic voiced stops (resembling the winning candidates for Burmese). In an optimality-theoretical framework, we could again propose a constraint, e.g. *[+son][−son, −cont, +voice][+son] (i.e. no voiced stop between two sonorants), which would have to be ranked higher than a faithfulness constraint, e.g. Ident[−son, −cont], to generate the correct output for Spanish. Apart from the fact that there is an ad hoc flavor to the OT accounts suggested immediately above, a further complicating factor is the fact that some languages exhibit both consonant alternations in their grammars. In Northern Corsican, for example, voiceless stops become voiced where voiced stops spirantize. Moreover, we find exactly the same consonant alternations in contexts that cannot be described in a straightforward way as being “inter-sonorant.” Rather, as will be shown in subsequent sections, the same consonant alternations as described here appear in different morphological contexts or are lexicalized in various unrelated languages.

The cases presented so far all reflect synchronic processes. The interest in consonant mutations, however, first arose with respect to diachronic changes such as the first consonant shift in West Germanic (also known as Grimm’s Law) and early Celtic consonant mutations (Pedersen 1897; Thurneysen 1898).5 Grimm’s Law is often formulated as follows: the Indo-European stops /p t k kʷ/ spirantized and became fricatives, the unaspirated stops /b d g gʷ/ became voiceless aspirated stops, and so-called “breathy voiced” stops /bʱ dʱ gʱ gʷʱ/ were replaced by voiced unaspirated stops (chapter 73: chain shifts). Iverson and Salmons’s (1995) account of the first shift in Grimm’s Law runs as follows. The voiceless aspirated stops became voiceless fricatives when aspiration was audible (i.e. only when the stop was released, so that stops following /s/, geminates, and stop–stop clusters did not undergo the shift). The unaspirated stops – in which voicing was optional – had a “stronger and longer” closure phase, which made them unlikely candidates for fricativization. Thus, the outcome of the first shift in Grimm’s Law is a system in which we find fricatives (specified for [stiff vocal folds]), unaspirated stops (unspecified for laryngeal features), and voiced aspirated stops (specified for [slack vocal folds, spread glottis]). The second stage of Grimm’s Law is the process whereby the contrast between stops unmarked for laryngeal features (/b d g gʷ/) and stops marked as “slack with aspiration” (/bʱ dʱ gʱ gʷʱ/) is increased by introducing [stiff vocal folds] (which is available already in the fricative series) for the unmarked stops. In the system where [stiff vocal folds] stops contrast with [slack vocal folds] ones, there is no need to maintain the aspiration contrast and hence the feature [spread glottis] for breathy voiced stops gradually loses its place in the stop system. The three subsequent mutations may thus be characterized as follows: (a) fricativization of voiceless stops (/p t k kʷ/ → /f θ x xʷ/), (b) fortition or devoicing of lax stops (/b d g gʷ/ → /p t k kʷ/), and (c) deaspiration of aspirated lax stops (/bʱ dʱ gʱ gʷʱ/ → /b d g gʷ/).

This section has introduced five consonant alternations that affect the degree of sonority and that fall under the rubric of “mutations”: stopping, obstruent voicing, spirantization or fricativization, devoicing, and deaspiration. Other mutations that are frequently encountered in languages – both diachronically and synchronically – are gemination, degemination, and (pre)nasalization. In the following sections we will discuss informative examples of synchronic consonant mutations. We start with Southern Paiute, a language that exhibits spirantization, gemination, and nasalization.
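The three stages in (a)-(c) can be stated as mappings over the stop series. The sketch below encodes them directly, with ASCII digraphs of my own in place of the IPA transcriptions; the point of interest is the chain-shift character, since each segment undergoes at most one stage and stage (b)'s output does not feed stage (a).

```python
# Grimm's Law as a chain shift: three mappings, applied at most once per
# segment. ASCII digraphs ("kw", "bh") stand in for the IPA transcriptions.

STAGE_A = {"p": "f", "t": "θ", "k": "x", "kw": "xw"}      # fricativization
STAGE_B = {"b": "p", "d": "t", "g": "k", "gw": "kw"}      # devoicing
STAGE_C = {"bh": "b", "dh": "d", "gh": "g", "gwh": "gw"}  # deaspiration

def grimm(segment):
    """Apply whichever stage targets this Indo-European stop (counterfeeding)."""
    for stage in (STAGE_A, STAGE_B, STAGE_C):
        if segment in stage:
            return stage[segment]
    return segment

assert [grimm(s) for s in ("p", "b", "bh")] == ["f", "p", "b"]
```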

3 Southern Paiute consonant mutations

Sapir (1930) describes the Shoshonean dialect Kaibab Paiute, as it was spoken in southwestern Utah and northwestern Arizona during the 1910s. In this dialect, consonants can appear in one form when in suffix-initial position after a consonant or in word-initial position, and in various alternating forms when – by the process of derivation or compounding – they are immediately preceded by a vowel. The particular process that a suffix undergoes is dependent on the lexical item it is attached to and is not phonologically predictable in any obvious way.

5 For a discussion on the history and present-day exponents of Celtic consonant mutations the reader is referred to chapter 117: celtic mutations.

(5) Southern Paiute consonant mutations (Sapir 1930: 62)6

   underlying   spirantized   geminated   nasalized
   p            β             pː          mp
   t            ɾ             tː          nt
   k            ɣ             kː          ŋk
   kʷ           ɣʷ            kʷː         ŋkʷ
   ts/tʃ        ts/tʃ         tsː/tʃː     nts/ntʃ
   s/ʃ          —             sː/ʃː       —
   m            ŋʷ            mː          mː
   n            —             nː          nː

Under spirantization, oral stops change into voiced continuants, affricates do not change, and the labial nasal becomes a “back palatal” labialized nasal consonant (represented as /ŋʷ/ by Sapir). Harms (1966) attempts to describe Paiute spirantization as a regular phonological rule ([+consonantal, −vocalic, −strident] → [+continuant, +voice] / [−consonantal, +vocalic, +voice] __), but such a rule only accounts for part of the alternation, and leaves the change from labial nasal to dorsal labialized nasal unexplained. Before we turn to another analysis of Southern Paiute consonant mutations, we first introduce some examples. To illustrate the effect of mutation, Sapir (1930: 63, 67) mentions the verbalizing suffix /-ka/ and the agentive suffix /-pi/, with their respective mutated initial consonants following adjectival (6a)–(6c) and nominal stems (6d)–(6e):

(6) Consonant mutations affecting morpheme-initial /k/ and /p/ in Southern Paiute

a. aŋka + -ka   → aŋkaɣa     'to be red'     spirantization
b. k‚tʃa + -ka  → k‚tʃakːa   'to be gray'    gemination
c. paq + -ka    → paqŋka     'to be smooth'  prenasalization
d. nD + -pi     → nDvi       'carrier'       spirantization
e. taŋa + -pi   → taŋampi    'kicker'        prenasalization

Note that the stem /aŋka/ 'red' triggers spirantization in a following suffix (6a), but is followed by a geminated consonant in compounds (7):

(7) Consonant mutations affecting stem-initial /k/ and /p/ in Southern Paiute compounds

   aŋka + kani → aŋkakːanê   red + house   'red house'
   aŋka + pajq → aŋkapːajq   red + fish    'trout'

As a possible explanation for the difference in the choice of consonant mutation between derivation and compounding in some nouns, Sapir (1930: 70) suggests that “the tendency to use geminated consonants in composition is probably due to the greater phonetic similarity thus brought about between a simplex and its compound.”

6 The consonants and the consonant alternations mentioned by Sapir (1930) are represented here by means of the corresponding IPA symbols. According to Sapir, spirantized consonants following voiced vowels are voiced, and those following voiceless vowels are voiceless. I present the voiced allophones only. The glides /w/ and /j/ and the nasal /ŋ/ do not occur in initial positions and are hence not affected by mutations. The glottal stop does not undergo mutations. The three blanks in (5) indicate that [s/ʃ] and [n] do not have a spirantized counterpart and that [s/ʃ] lacks a nasalized form.


McLaughlin (1984: 70–71) notes that prefixes are followed by one fixed mutation, i.e. all prefixes affect the initial consonant of a following stem, and Southern Paiute distinguishes “spirantizing,” “geminating,” and “nasalizing” prefixes. In compounds, noun stems may trigger one of three mutations, but in most cases nouns in compounds trigger gemination only. In verbal compounds, the verb of the second member has its initial consonant geminated in the majority of cases. To McLaughlin, these facts suggest that prefixes show the strongest reflection of an earlier stage in the language, viz. one in which a prefix-final phoneme triggered a change in the following consonant. The phoneme in question is lost, but the mutation that it triggered is still in effect. Nouns also show this effect, but are in a transient stage; nominal stems are cautiously on their way to induce one particular mutation only, i.e. gemination. Verbs have developed even further and generally geminate a following consonant.

To account for Southern Paiute consonant mutations, McLaughlin (1984) proposes that all prefixes, most noun stems, some adjectival stems, and a few verb stems end either in (a) a vowel, (b) an unspecified stop C, or (c) an unspecified nasal N.7 The last two are reflexes of final stops and nasals that used to be present in earlier stages of the language. If a stem is preceded by a prefix or another stem ending in a vowel, its first consonant will undergo a rule of spirantization. A suffix or stem following an unspecified stop or nasal, however, will undergo a place assimilation rule (e.g. /taŋaN + pi/ → [taŋampi]). Finally, stems following an unspecified stop will also be subject to a lengthening rule, so that the result will be a geminate consonant. Another rule may subsequently degeminate the stop in question when it follows an unstressed vowel. McLaughlin’s analysis of spirantization thus involves a regular spirantization rule, which turns stops into voiced spirants in inter-sonorant position. The analysis of gemination and nasalization, on the other hand, depends on “ghost” elements, i.e. a final stop or nasal that never surfaces, but has a geminating or nasalizing effect if another consonant follows in the next morpheme (i.e. a suffix in the case of derivation and a stem in the case of compounding). These processes are thus lexical in the sense that in the lexicon, the morphemes in question (prefixes, most noun stems, and some adjectival stems) end either in a vowel or in an unspecified stop or nasal. The mutation triggered by verb stems is becoming more and more morphologized; i.e. morphological leveling is producing ever larger numbers of geminating verb stems.

Since oral stops become continuants under spirantization, we would perhaps expect the labial nasal stop to turn into a labial approximant [w] (chapter 28: the representation of fricatives), as is the case in Celtic (see chapter 117: celtic mutations). The question arises why this is not the case. With respect to the quirky character of spirantized /m/, McCarthy and Prince (1995: 349ff.) make the following suggestion. They first observe that the segments [w] and [ŋʷ] are in complementary distribution: the former is found word initially and the latter occurs post-vocalically.8 This distribution follows from the following ranking of constraints: *VwV >> *ŋʷ >> *w. Even though this ranking accounts for the fact that we are more likely to find [ŋʷ] rather than [w] in intervocalic position, it is not immediately clear why /m/ alternates with [ŋʷ] in the first place, i.e. why /m/ surfaces as a nasal with a “back palatal” place of articulation in a spirantization context. If spirantization means that oral or nasal stops are realized as continuants, and if there is an operation or constraint (*VwV) that prevents the nasal /m/ from being realized as the corresponding continuant [w], it is still not evident that /m/ undergoes a transformation whereby it turns into a labialized dorsal nasal stop. I will leave this issue for further research.

7 For similar proposals to account for Celtic consonant mutation, see chapter 117: celtic mutations.
8 The only exception is the context of reduplication, where “w” surfaces between vowels due to an identity constraint that says that a base and a reduplicated form cannot have different values for the feature [nasal]. The fact that [ŋʷ] is not allowed to occur in word-initial positions (*[ŋʷ) rules out a candidate like hypothetical *[ŋʷa-ŋʷaxipija]. In *[wa-ŋʷaxipija] the constraint Ident-BR[nasal] is violated and the candidate [wa-waxipija] is the winner even though it has a “w” after a vowel (i.e. the constraints mentioned in this footnote are ranked as follows: *[ŋʷ, Ident-BR[nasal] >> *VwV).
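McLaughlin's ghost-segment analysis lends itself to a small simulation. In the sketch below, a stem's lexical entry either ends in a vowel or carries a final unspecified C or N, and the mutation of the following morpheme's initial consonant falls out of that choice. The lexical encoding, transcriptions, and string-doubling gemination are my own simplifications; degemination after unstressed vowels and the place details of the assimilation rule are omitted.

```python
# Ghost segments in the McLaughlin (1984) style: a stem-final 'C' triggers
# gemination, a stem-final 'N' triggers (pre)nasalization, and a vowel-final
# stem triggers spirantization of the next morpheme's initial consonant.

SPIRANTIZE = {"p": "v", "t": "ɾ", "k": "ɣ", "m": "ŋʷ"}
NASALIZE = {"p": "mp", "t": "nt", "k": "ŋk"}

def join(stem, suffix):
    """Concatenate stem + suffix, applying the mutation the stem demands."""
    base, ghost = (stem[:-1], stem[-1]) if stem[-1] in "CN" else (stem, "")
    first, rest = suffix[0], suffix[1:]
    if ghost == "C":
        return base + first + first + rest                # gemination, cf. (7)
    if ghost == "N":
        return base + NASALIZE.get(first, first) + rest   # cf. (6c), (6e)
    return base + SPIRANTIZE.get(first, first) + rest     # cf. (6a), (6d)

assert join("taŋaN", "pi") == "taŋampi"   # 'kicker', (6e)
assert join("aŋka", "ka") == "aŋkaɣa"     # 'to be red', (6a)
```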

4 Fula consonant mutation

Fula (also known as Fulani, Fulbe, Fulfulde, Pular, Pulaar, and Peul) is spoken in West Africa; the majority of the speakers live in Nigeria (Campbell 1995: 178). In all dialects of Fula, initial consonants of verbal radicals and nominal, numeral, and verbo-nominal stems can have different forms. In contrast to Southern Paiute – where stem-initial consonants change their form depending on the mutating requirements of the preceding prefix or stem – one cannot argue that in Fula a segment has a different surface form depending on a preceding vowel or a morpheme specified for a mutating feature. Arnott (1970) shows that in the Gombe dialect of Fula, stem-initial consonants surface as homorganic stops, continuants, or prenasals, depending on the adjective or noun class.

(8) Fula consonant mutations (based on Arnott 1970: 42–43)9

   stop   spirant   prenasal
   p      f         p
   b      w         mb
   d      r         nd
   ʃ10    s         ʃ
   ɟ      j         ɲɟ
   k      h         k
   g      j/w11     ŋg

According to Arnott (1970), adjectives and nouns are marked as belonging to one of 25 possible classes. Class membership is indicated by a suffix12 and a particular manner of articulation of the initial consonant of the stem: 11 classes are marked by a stem-initial voiceless or voiced stop, six classes are marked by an initial spirant, and eight are marked by an initial voiceless oral stop or a prenasalized stop. The marker of noun class 1 (singular nouns referring to persons), for instance, is a suffix ending in the vowel /o/ and a stem-initial stop. The marker of noun class 2 (plural nouns referring to persons) is a spirant-initial stem consonant and the suffix /-ɓe/. The marker of class 7 is a suffix ending in the vowel /a/ and a stem-initial oral voiceless stop or prenasalized stop for consonants that alternate with a voiced stop in class 1:

(9) Stem-initial consonant alternations in Fula nouns (data from Arnott 1970: 98–99)

   class 1    class 2    class 7
   pull-o     ful-ɓe     pul-a       'Fula'
   beer-o     weer-ɓe    mbeer-a     'host'
   dim-o      rim-ɓe     ndim-a      'free man'
   ʃook-o     sook-ɓe    ʃook-a      'poor man'
   ɟuul-ɗo    juul-ɓe    ɲɟuul-ŋga   'Moslem'
   kor-ɗo     hor-ɓe     kor-ga      'female slave'
   gim-ɗo     jim-ɓe     ŋgim-ŋga    'person'

9 The consonant alternations mentioned by Arnott are represented here by means of the corresponding IPA symbols. Note that the glottalized consonants /ɓ ɗ ƴ/, the nasals /m n ɲ ŋ/, and the coronal consonants /t c l/ are “invariable” and do not alternate.
10 Note that this sound is a fricative, but in the mutation system it functions as a stop. In most analyses of Fula consonant mutations that can be found in the literature, this sound is usually transcribed as the palatal stop /c/, but this does not tally with Arnott’s (1970) description of this sound.
11 The palatal glide is found before the front vowels /i e/ and the labial-velar one before the back vowels /u o a/.
12 Suffixes belong to certain grades, indicated as grades A, B, C, or D in Arnott (1970). In the phrase /gude daneeje/ 'white cloths', both the noun and the adjective belong to class 24 (i.e. one of the classes that are marked by an initial stop), but the noun /gude/ 'cloths' has a grade A suffix /-e/ and the adjective /daneeje/ 'white' has a grade B suffix /-je/.

‘Fula’ ‘host’ ‘free man’ ‘poor man’ ‘Moslim’ ‘female slave’ ‘person’

In an autosegmental framework that assumes radical underspecification (chapter 7: feature specification and underspecification), Wiswall (1989) proposes that Fula voiceless continuants are underlyingly specified for [−voice], but not for manner of articulation. The voiceless stops are specified for [−voice] as well, and all underlying oral stops have a specification for [−continuant]. Underlying non-alternating nasal stops have a specification for [+sonorant] and [+nasal]. Wiswall furthermore suggests that noun class markers in Fula have floating features that associate to the initial consonant of the stem (chapter 82: featural affixes); the stop classes have a floating [−continuant] feature (which is problematical for the sound /œ/), the prenasal classes have a floating [+nasal] feature (which presumably cannot dock onto sounds underlyingly specified as being [−voice]), but the spirant classes do not have a floating feature. After association of the floating features, redundancy rules (e.g. [−continuant] → [−sonorant]), and default rules (Ø → [+continuant]) fill in the unspecified feature values. Grijzenhout (1991) points out some problems relating to the rule ordering (chapter 74: rule ordering) that Wiswall has to assume, especially related to the account of prenasals, which would involve a late counterintuitive redundancy rule [+nasal] → [−sonorant]. For an alternative account of the data involving the theory of “charm and government,” the reader is referred to work by Paradis (e.g. Paradis 1987a, 1987b, 1992). Elzinga (1996) provides an OT account of Fula consonant mutation, which makes use of alignment and parsing constraints on the mutating features [continuant] and [nasal]. He furthermore employs morpheme constraints, i.e. constraints that indicate the type of stem (“invariable,” “partially variable,” or “fully variable”). The morpheme constraints are placed in between the feature alignment and parsing constraints, so that invariable stems and suffixes are ranked higher in the hierarchy than the mutation constraints, partially variable stems and suffixes are ranked below some mutation constraints and above others, and fully variable stems and suffixes are ranked lower than the mutation constraints. We now turn to cases of scalar mutations, i.e. mutations that cause a change from one underlying consonant to another consonant, which also mutates into a different one.

5 Balto-Finnic and Sami consonant gradation

Most languages belonging to the Balto-Finnic group (i.e. Finnish, Estonian, Votic, Ingrian, and Karelian; excluding Livonian and Veps), as well as northern and eastern dialects of the closely related Sami group, exhibit consonant gradations. It is striking that a similar phenomenon, by which long stops were shortened and short stops were turned into spirants, occurred in the history of Iwaidjan languages in Australia (e.g. Evans 1998).

The phonological condition that generated consonant gradation in languages belonging to the Balto-Finnic group was originally the following: after a vowel or sonorant consonant in a stressed syllable, stops in the so-called “strong grade” that appeared in the onset of a syllable that was closed by certain inflectional or derivational endings were mutated such that they appeared in the corresponding “weak grade.” Under this condition, underlying long (or “geminate”) stops were reduced to short (or “singleton”) stops and underlying short stops were replaced by voiced and fricated consonantal variants. Thus, in the Balto-Finnic languages, long /pː/ alternated with short /p/ while short /p/ alternated with /b/ or /v/, and long /tː/ alternated with short /t/ while short /t/ alternated with /d/ or /ð/. Similarly, in the history of Iwaidjan languages, long intervocalic stops were shortened and short intervocalic stops became approximants or liquids (e.g. /pː/ → /p/ → /w/ and /cː/ → /c/ → /j/).

Different varieties of Sami and Estonian now have a three-way opposition between the strongest grade or quantity, the strong grade, and the weak grade, but the phonemes involved in the alternations are somewhat different for each language or language variety. We most often find that in contexts where gradation applies in these languages, (a) “overlong stops” (usually written as <ppp ttt kkk>, corresponding to the sounds /pːː tːː kːː/) are realized as “long stops,” (b) “long stops” (orthographic <pp tt kk>, corresponding to the geminates /pː tː kː/) are realized as “short” consonants, and (c) underlyingly “short stops” (i.e. those with the shortest closure duration, usually written as <p t k>, corresponding to unaspirated /p, t, k/) spirantize.

Consonant gradation is more or less regular in most dialects of Sami. In many Sami dialects, consonant gradation was extended to consonants that had not been subject to gradation in earlier stages of the language and now also affects sonorant consonants; see (10d). Gradation applies when an affix closes an unstressed syllable. Moreover, gradation still occurs in genitive forms – see (10c) and (10d) – even though the original inflectional ending [-n] has been lost in some varieties of Sami, so that the final syllable is no longer closed:

(10) Sami consonant gradation (Gordon 1998; Campbell 2004: 322)13

   non-gradated                    gradated
a. bapppa 'priest (nom sg)'        bappast (elat sg)
b. loppe 'permission (nom sg)'     lobest (elat sg)
c. jokkâ 'river (nom sg)'          jogâ (gen sg)
d. guolle 'fish (nom sg)'          guole (gen sg)

13 Recall that orthographic <pp> and <kk> in examples (10b) and (10c) correspond to short unaspirated /p k/.


Interestingly, in some Sami languages, the original gradation process now also works in reverse; single consonants are geminated in open syllables:

(11) Sami “reversed” consonant gradation, i.e. gemination (cf. Uralic languages 2010)

   geminated                             gradated
a. čuotte 'hundred (nom sg)' (*čuote)    čuoðe 'hundred (gen sg)'
b. borra 'eats'                          borâm 'I eat'

Thus, by “reversed gradation,” the contrast between two related forms is enhanced: instead of a short stop–fricative contrast, we now have a geminate–fricative contrast in (11a). The examples below illustrate the phenomenon of consonant gradation for Modern Finnish (e.g. Karttunen 1970; Skousen 1972; Keyser and Kiparsky 1984; Vainikka 1988); geminate stops degeminate (12a) in the same environment in which the singleton stops lessen their degree of stricture and become continuants (12b),14 assimilate to a preceding sonorant consonant with the same place features as the stop in question (12c), or are not realized (12d):15

(12) Finnish consonant gradation in closed syllables

   underlying form (nominative)   gradation (genitive)
a. lappu                          lapun       'piece of paper'
   matto                          maton       'rug'
   kukka                          kukan       'flower'
b. tapa                           tavan       'custom'
   mato                           madon       'worm'
c. rampa                          ramman      'lame'
   lintu                          linnun      'bird'
d. poika                          pojan       'boy'
   selkä                          selän       'back'

As is the case in some varieties of Sami – see (10c), (10d), and (11a) above – the original genitive ending /-n/ has been lost in Estonian, but gradation still occurs in genitive forms:

(13) Estonian consonant gradation in genitive nouns (Harms 1962)

   nominative   genitive
a. leib         leiva    'bread'
b. madu         mao      'snake'
c. lind         linnu    'bird'
d. selg         selja    'back'

14 Finnish <v> represents an approximant; <d> is a dental sonorous element whose value varies from dialect to dialect; see e.g. Vainikka (1988). Here and in what follows, I abstract away from dialectal variation.
15 Finnish <j> represents /j/; see e.g. Vainikka (1988).


We again witness how a phonologically triggered mutation gradually changes into a morphologically triggered mutation. The original consonantal morphological ending caused the syllable in question to become closed and thus provided the phonological environment for gradation to take place. When the consonantal ending disappeared in the course of history, gradation still took place. In present-day Estonian, consonant gradation is thus triggered by morphology rather than phonology. The type of consonant gradation presented in this section is not unique to Balto-Finnic languages and varieties of Sami. Similar processes are found in some languages belonging to the Samoyedic branch of the Uralic language family (in particular Nganasan and Selkup; see chapter 39: stress: phonotactic and phonetic evidence for examples and an OT account of consonant gradation in Nganasan).

6 Mutation and phonological representations

Our first examples of consonant mutation – i.e. stopping after nasal vowels and consonants in Soninke, inter-sonorant voicing in Burmese, and inter-sonorant spirantization in Djapu – have been accounted for in phonological literature as cases where one feature of a preceding segment (e.g. [−continuant] in the case of nasal stops, [+voice] in the case of vowels and sonorant consonants, and [+continuant] in the case of vowels) spreads to the target consonant.16 The first drawback of such an analysis is the obvious fact that spreading of [±continuant] is hardly attested in obstruents (chapter 13: the stricture features); cases where an oral stop induces a change in a preceding or following fricative (e.g. hypothetical /t + fare/ → [tpare]) are hardly – if ever – attested. Moreover, even though we find relatively many cases where vowels cause spirantization of stops, fricatives never cause spirantization of adjacent stops (see Wetzels 1991), and this makes the suggestion that intervocalic spirantization is analyzed best as a case of spreading the feature [+continuant] problematic (chapter 28: the representation of fricatives). Furthermore, a spreading analysis is problematic for cases where there is no phonological segment that triggers the mutation. In those cases, e.g. Southern Paiute gemination, McLaughlin (1984) and others have suggested underlying underspecified segments that usually do not surface and only affect a following segment in mutation contexts, resulting in, for instance, a geminated consonant. Others, e.g. Wiswall (1989), propose “floating” features such as [+nasal] to account for initial prenasalization in, for instance, Fula (chapter 82: featural affixes). Mutation, then, is viewed as the change of one sound into another sound due to feature insertion; it is unpredictable which feature will cause a change under which circumstances in which language. The problem with this particular approach is the existence of the “scalar” mutations witnessed in Northern Corsican, Finnish, or Sami, for instance, which cannot be accounted for by means of insertion of a single “floating” feature. Scalar mutations are changes in underlying consonants that result in more sonorant segments (e.g. voiceless stop → voiced stop → voiced continuant), or in segments with a higher degree of oral aperture (e.g. geminate stop → singleton stop → continuant consonant, with perhaps eventual segmental loss); feature-based analyses seem to be unable to capture this aspect.

16 Thus, in these accounts, strengthening and weakening are treated as local processes where the target sound is adjacent to another sound that spreads one of its features to the target. Cf. Harris (1994), who proposes an account where an element expressing full closure or aperiodic energy (as for aspiration) spreads from one position to another in the case of strengthening, and where an element is delinked or deleted under lenition, so that the result is a “weaker” – i.e. less complex – segment.

Van der Hulst and Ewen (1991) suggest that the scalar nature of consonant mutation is most adequately accounted for by a framework in which sonority (chapter 49: sonority) is expressed by C and V nodes that can either function as governing or governed nodes. In their system, the interpretation of a governing “C” is “some degree of oral closure that characterizes an obstruent” ([−sonorant]) and the interpretation of a governing “V” is “sonorant.” The two elements “C” and “V” can also be adjoined to a governing element; an adjoined “C” is interpreted as “oral closure for sonorants” (in the case of nasal stops and laterals), and an adjoined “V” is interpreted as “periodic sound source” or “vocal cord vibration” (as expressed in feature-based frameworks by the feature [+voice]). An element “V” that is governed by an element “C” is interpreted as “continuous airflow” ([+continuant]):

(14) C and V components in phonological representations (van der Hulst and Ewen 1991)

[The figure gives the C/V configurations for eight segment types: a governing C alone (voiceless oral stop); a governing C with an adjoined V (voiced oral stop); a governing C with a governed V (voiceless fricative); a governing C with both an adjoined and a governed V (voiced fricative); a governing V with an adjoined C (nasal stop, lateral); and configurations headed by V for the approximant and the vowel.]

Van der Hulst and Ewen (1991) further view the consonant mutations referred to as “inter-sonorant lenitions” as the result of imposing the element “V” from neighboring sonorant segments onto the consonant in question. Thus, in intersonorant positions, “V” can be added to the structure of a consonant either by adjunction – to give a voiced obstruent (/p/ → [b] and /f/ → [v]) – or by subjunction, to give a continuant (/p/ → [f] and /b/ → [v]). Within this theory, we might thus account for Northern Corsican consonant mutation as a process by which the element “V” is (a) adjoined to “C” segments that do not have an adjoined node and (b) subjoined to segments that involve “C” and an already adjoined “V” (thus, /p/ → [b], while /b/ → [β]).
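These operations can be mimicked with small head-dependent structures. The sketch below encodes a segment as a (head, adjunct, governed) triple; the representation and function names are my own rough rendering of the adjunction and subjunction idea, not van der Hulst and Ewen's notation.

```python
# C/V dependency sketch: a segment is (head, adjunct, governed), where an
# adjoined V adds voicing and a governed V adds continuancy (cf. (14)).

def adjoin_V(seg):
    """Voicing lenition: /p/ -> [b], /f/ -> [v]."""
    head, _adjunct, governed = seg
    return (head, "V", governed)

def subjoin_V(seg):
    """Continuancy lenition: /p/ -> [f], /b/ -> [v]."""
    head, adjunct, _governed = seg
    return (head, adjunct, "V")

p = ("C", None, None)                       # voiceless oral stop
assert adjoin_V(p) == ("C", "V", None)      # voiced stop: /p/ -> [b]
assert subjoin_V(("C", "V", None)) == ("C", "V", "V")  # /b/ -> [β]
```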


Under this approach, another scalar mutation may involve a process by which the element “V” is adjoined to “C” segments that do not have an adjoined node and by which the element “V” is turned into the head element, so that the element “C” becomes the adjunct (thus, /p/ → [b], while /b/ → [m], as in e.g. Modern Irish initial nasalization). The advantage of the approach suggested by van der Hulst and Ewen (1991) is clearly the fact that consonant mutations generally known as “lenitions” can be described as a more or less unified process involving an increase in the dominance of the element “V” (i.e. sonority). An account of stopping after nasals as observed in Soninke is less straightforward in this framework. Also, it is not immediately clear how the different mutations that do not have an overt phonetic trigger – such as the ones found in Southern Paiute or Fula – can be explained, and it is even less obvious how graded mutations in the Balto-Finnic and Sami languages would fit in this picture.

Another proposal for the representation of sonority – especially sonority related to the degree of oral stricture – is formulated by Steriade (1993, 1994). Steriade defines slots to which laryngeal, place, and other features attach in terms of degrees of oral aperture. Released stops and affricates are viewed as sequences of a phase with complete oral closure (zero aperture, i.e. A0) followed by a release phase (Afric or Amax, also referred to as Arel below).

(15) The phonological representation of consonantal segments in Aperture Theory (Steriade 1993, 1994)

   labial oral released stop:  A0 Amax                                  [labial]
   labial affricate:           A0 Afric                                 [labial]
   labial fricative:           Afric                                    [labial]
   labial prenasal stop:       A0 Amax, with [nasal] linked to A0       [labial]
   labial nasal stop:          A0 Amax, with [nasal] linked to A0 and Amax   [labial]
   labial approximant:         Amax                                     [labial]

Grijzenhout (1995, 1996) uses this framework to describe lenition as a form of loosening oral stricture and fortition as a procedure that increases oral stricture. The process by which a fricative is realized as a stop in Soninke can thus be described as one by which A0 is inserted; the process by which a stop is realized as a fricative, e.g. in Spanish, involves deleting the A0 slot from the representation:

(16) a. Stopping (increase stricture) as insertion of A0:
        Arel → A0 Arel   (e.g. /f/ → [p] in Soninke)
     b. Spirantization (reduce stricture) as deletion of A0:
        A0 Arel → Arel   (e.g. /b/ → [β] in Spanish)

Assuming that the articulation of long stops or geminates involves a long closure phase, long stops can be represented as elements with two A0 nodes (17) (chapter 37: geminates). Southern Paiute gemination thus involves the same procedure as indicated above for Soninke stopping, i.e. insertion of A0. Under consonant gradation, constriction loosens, which can be described as the loss of an A0 slot (17b):

(17) a. Gemination (increase stricture) as insertion of A0:
        A0 Arel (short stop) → A0 A0 Arel (long stop)   (/p/ → [pː] in Southern Paiute)
     b. Consonant gradation (reduce stricture) as deletion of A0:
        A0 A0 Amax (long stop) → A0 Amax (short stop) → Amax (approximant)   (/pː/ → [p] and /p/ → [w] in Finnish)
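Since these operations manipulate nothing more than a sequence of aperture positions, they are easy to state explicitly. The sketch below treats a segment as its aperture slots plus a place label; the class and function names are mine, and nothing is said about where each operation is triggered.

```python
# Aperture positions as data: lenition and fortition insert or delete A0
# slots, following (16)-(17). A rough sketch, not Steriade's formalism.

from dataclasses import dataclass

@dataclass
class Segment:
    apertures: tuple  # e.g. ("A0", "Arel") for a released stop
    place: str = "labial"

def insert_A0(seg):
    """Stopping or gemination, (16a)/(17a): add one closure phase."""
    return Segment(("A0",) + seg.apertures, seg.place)

def delete_A0(seg):
    """Spirantization or degemination, (16b)/(17b): drop one closure phase."""
    i = seg.apertures.index("A0")  # assumes at least one A0 is present
    return Segment(seg.apertures[:i] + seg.apertures[i + 1:], seg.place)

fricative = Segment(("Arel",))        # e.g. /f/
stop = insert_A0(fricative)           # (16a): /f/ -> [p] in Soninke
geminate = insert_A0(stop)            # (17a): /p/ -> [pː]
assert geminate.apertures == ("A0", "A0", "Arel")
assert delete_A0(stop) == fricative   # (16b): spirantization, schematically
```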

Other cases where single released stops are realized as fricatives also often involve a preceding vowel. Examples are Tigrinya and Biblical Hebrew postvocalic spirantization; stops become more sonorant in the context of vowels, and the phonetic effect is that they are realized as fricatives. In Biblical Hebrew, post-vocalic singleton /p/ is realized as [f], /k/ is realized as [x], /t/ as [θ], /b/ as [v], /g/ as [ɣ], and /d/ as [ð] (Sampson 1973; Kenstowicz 1994: 53, 411, 417):17

(18) Biblical Hebrew post-vocalic spirantization of single short stops

   /paːgaʃ/    [paːɣaʃ]    'meet (perf)'
   /ji-pgoːʃ/  [jifgoːʃ]   'meet (imperf)'
   /kaːtab/    [kaːθav]    'write (perf)'
   /ji-ktob/   [jixtov]    'write (imperf)'
   /gaːdal/    [gaːðal]    'become great (perf)'

Note that in the same context, geminate stops are not affected, i.e. post-vocalic geminate stops do not alter:

(19) Biblical Hebrew post-vocalic geminate stops

   /sappir/    [sapːir]    *[safpir]    'sapphire'
   /gibboːr/   [gibːoːr]   *[givboːr]   'hero'
   /giddeːl/   [gidːeːl]   *[giðdeːl]   'magnify (perf)'

17 One reviewer asked why Biblical Hebrew and Tigrinya could not be accounted for as spread of [+continuant]. First, it is not obvious in current phonological theory that vowels are underlyingly specified for this feature. Second, fricatives – i.e. segments that are specified for the feature [+continuant] – do not trigger the spirantization process.


In Aperture Theory, we can capture the fact that vowels cause a change in oral stops and, at the same time, explain that a fricative does not cause a change in an oral or nasal stop (*/aspa/ → [asfa]; see Wetzels 1991). The change from released stop to fricative in the context of a preceding vowel involves decrease of constriction, which is expressed by deletion of an aperture node, as in (16b) and (20). After deletion of the aperture node for complete obstruction in the oral tract (i.e. A0), the place feature is associated to the Arel node and the result is a single fricative.

(20) Post-vocalic spirantization as reduction of complete constriction in single stops (represented as deletion of an aperture node)

   a. A0 Arel        b. Arel
      [labial]          [labial]
      /p/               /f/

Geminate stops do not undergo this process in Biblical Hebrew, due to the “uniformity condition” (Hayes 1986), which says that if a certain environment incurs a change in a feature or node of a particular segment – in this case a vowel that incurs omission of an aperture node of a following stop – every dominating slot linked to that feature (or node) must satisfy that environment. The second A0 slot dominating the place feature is not adjacent to a vowel, and deletion of the aperture node in (21) is therefore blocked.

(21) Geminate stop: two aperture slots for closure sharing one place feature

   A0 A0 Arel
     [labial]
     /pː/
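The blocking effect can be checked mechanically: A0-deletion goes through only if every A0 slot linked to the shared place feature satisfies the post-vocalic context. A minimal sketch, with segments reduced to a closure count:

```python
# The uniformity condition (Hayes 1986) applied to (20)-(21): only if every
# A0 slot linked to the place feature is post-vocalic may A0 be deleted.
# Singletons have one A0 slot; geminates have two, of which only the first
# is adjacent to the preceding vowel.

def post_vocalic_spirantization(prev_is_vowel, n_closures):
    """Return the closure count after the rule has (or has not) applied."""
    if prev_is_vowel and n_closures == 1:  # the sole A0 meets the context
        return 0                            # (20): stop -> fricative
    return n_closures                       # (21): second A0 blocks deletion

assert post_vocalic_spirantization(True, 1) == 0   # /kaːtab/ -> [kaːθav]
assert post_vocalic_spirantization(True, 2) == 2   # /sappir/ -> [sapːir]
```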

Note that the “uniformity condition” does not apply in the case of consonant gradation in Balto-Finnic languages and Sami, because the process of gradation is not triggered by a preceding vowel. In the languages discussed in §5, no vowel incurs gradation (or omission of an aperture node). Rather, the morphology determines whether or not gradation takes place, so that our account of this process is as shown in (17b) above.

Another case where Aperture Theory offers a more straightforward explanation than feature-based theories is Yukatec Maya degemination. In Yukatec Maya, a sequence of two homorganic stops is illicit. When two stops become adjacent due to a morphological process, the first one is realized as the placeless sound /h/ and the other retains its place of articulation (/k + k/ → [h k]; /t + tʃ/ → [h tʃ]). In cases where an affricate is immediately followed by an oral stop, it is realized as the corresponding fricative (/ts + t/ → [s t]; /tʃ + t/ → [ʃ t]); see McCarthy 1988; Lombardi 1990; Padgett 1991: 358–362. Under the assumption that this process involves delinking of the feature [−continuant], it is curious that oral stops do not retain their place of articulation when they are realized as a continuant, whereas affricates do. Under the proposal advocated here, this comes as no surprise. If a language disallows two adjacent stops, one of the aperture nodes for complete closure is deleted and in Yukatec Maya it is deleted together with the elements it dominates; i.e. when in oral stops the node for complete obstruction in the oral cavity (i.e. A0) together with the node for the location of obstruction (i.e. the place feature) is deleted, an “empty” consonantal position remains that is phonetically realized as placeless /h/ (22a); conversely, when in affricates the A0 node is deleted, an Arel–Place association (which characterizes fricatives) remains (22b):

(22) Delinking of one aperture node triggered by the OCP in Yukatec Maya

a. /k/ + /k/ → [h] [k]: the A0 node of the first stop is delinked together with the [dorsal] place feature it dominates; the empty C position that remains is realized as [h], while the second stop retains its A0 node and [dorsal].

b. /ts/ + /t/ → [s] [t]: the A0 node of the first affricate is delinked, but its Arel node keeps its association to [cor]; the remaining Arel–[cor] configuration is realized as the fricative [s], while the second stop retains its A0 node and [cor].

7 Summary and conclusion

This chapter discussed some cases of consonant mutation. By consonant mutation we understand here a change in a consonant that is not the result of neutralization (e.g. syllable-final obstruent devoicing) or assimilation (e.g. nasal place assimilation). Rather, consonant mutation is viewed as a process that increases or decreases the degree of sonority and/or the length and degree of oral stricture of a segment when there is: (i) a purely phonological context in which the mutation always takes place irrespective of speech style, e.g. inter-sonorant voicing in Burmese; intersonorant spirantization in Djapu and Spanish; or (ii) a “mixed” morphophonological environment, e.g. stopping in Soninke, where the mutation is sometimes brought on by a preceding nasal consonant or vowel, but may also have a morphological function, and consonant gradation in Estonian and Sami, where the consonant alternations sometimes take place in well-defined phonological contexts and sometimes in a context where the mutation has a grammatical function; or (iii) a morphosyntactic environment that induces mutation, e.g. word-initial spirantization in Southern Paiute and stem-initial spirantization in Fula.

Even though, from a historical perspective, consonant mutations may originally have had a phonological trigger (usually a preceding vowel or nasal), the fact that they also occur independently of a phonological environment is the first indication that an account that relies on the phonological contexts may be flawed. Moreover, accounts that regard mutations as phonological processes that involve spreading of a feature from a vowel or nasal onto the mutating consonant encounter problems when explaining why vowels and nasals spread these segmental features rather than fricatives and oral stops.

§6 discussed one theory that elegantly accounts for those mutations that involve increasing or decreasing sonority and one theory that provides an insightful account of mutations that involve increasing or decreasing oral stricture. Van der Hulst and Ewen (1991) propose to represent segments by means of C and V elements. In their view, sonority-increasing mutations involve adding a V element either as a governing node, an adjunct, or a governed node; sonority-decreasing mutations would presumably involve adding a C element or taking away a V element. §6 showed that this framework not only works for mutations that alter one class of segments into another class of segments (e.g. when voiceless stops become voiced, when stops become fricatives, or when voiced oral stops become nasal stops), but also for scalar mutations that induce a change in different classes of segments (e.g. when underlying voiceless stops become voiced while underlying voiced stops are nasalized, or when underlying voiceless stops become voiced stops in the same environment where voiced stops are spirantized). This theory is less successful, however, in explaining other types of consonant gradations, for instance those where geminates become singletons in the same context where singletons spirantize (Balto-Finnic and Sami consonant gradation).

Steriade (1993, 1994) proposes using aperture nodes indicating the degree of oral stricture in phonological representations. This theory is useful in explaining processes where oral stricture increases (e.g. when continuants become stops or when the closure duration of short stops is extended) or decreases (e.g. when geminates degeminate or when short stops spirantize). The downside of the theory is that it encounters difficulties in explaining scalar mutations of the type where underlying voiceless stops are voiced while underlying voiced stops are nasalized (e.g. Modern Irish initial nasalization), or where underlying voiceless stops are voiced in the same environment in which voiced stops are spirantized (e.g. Northern Corsican intervocalic consonant gradation).
§6 thus focused on two theories that are an improvement compared to feature-based accounts in the sense that they are able to explain scalar mutations that take place without an overt phonological trigger. However, the mutations that the one theory elegantly accounts for pose a puzzle for the other theory and vice versa. The problem for phonological theory thus remains the fact that there is as yet no unified account for the different types of consonant mutations that we can describe in layman’s terms as changes that increase or decrease the level of sonority (a) expressed by laryngeal and nasal configurations, or (b) expressed by changes in oral aperture. Even though the description of consonant mutation is thus relatively simple, a phonological account is not.

ACKNOWLEDGMENTS

I would like to thank the editors for their invitation to contribute to the Companion. Thanks are also due to Colin Ewen, Harry van der Hulst, Glyne Piggott, Holly Winterton, and Wim Zonneveld for helpful discussions on the topic of consonant mutation. I also appreciate the comments of Bernhard Wälchli and two anonymous reviewers.

REFERENCES

Arnott, David W. 1970. The nominal and verbal systems of Fula. Oxford: Oxford University Press.
Batibo, Herman. 2000. System in the sounds of Africa. In Vic Webb & Kembo-Sure (eds.) African voices: An introduction to the languages and linguistics of Africa, 160–196. Oxford: Oxford University Press.
Berendsen, Egon. 1983. Final devoicing, assimilation, and subject clitics in Dutch. In Hans Bennis & W. U. S. van Lessen Kloeke (eds.) Linguistics in the Netherlands 1983, 21–29. Dordrecht: Foris.
Brockhaus, Wiebke. 1995. Final devoicing in the phonology of German. Tübingen: Niemeyer.
Campbell, George L. 1995. Concise compendium of the world's languages. London & New York: Routledge.
Campbell, Lyle. 2004. Historical linguistics: An introduction. 2nd edn. Cambridge, MA: MIT Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Elzinga, Dirk. 1996. Fula consonant mutation and morpheme constraints. UC Irvine Working Papers in Linguistics 2. 43–57.
Evans, Nicholas D. 1998. Iwaidja mutation and its origins. In Jae Jung Song & Anna Siewierska (eds.) Case, typology and grammar: In honour of Barry J. Blake, 115–149. Amsterdam & Philadelphia: John Benjamins.
Foley, James. 1970. Phonological distinctive features. Folia Linguistica 4. 87–92.
Gordon, Matthew. 1998. A fortition-based approach to Balto-Fennic-Sámi consonant gradation. Folia Linguistica Historica 18. 49–79.
Grijzenhout, Janet. 1991. Fula initial consonant mutation. In Peter Coopmans, Bert Schouten & Wim Zonneveld (eds.) OTS Yearbook 1991, 33–47. Utrecht: Research Institute for Language and Speech, University of Utrecht.
Grijzenhout, Janet. 1995. Irish consonant mutation and phonological theory. Ph.D. dissertation, University of Utrecht.
Grijzenhout, Janet. 1996. Consonant weakening processes and aperture theory. Toronto Working Papers in Linguistics 15. 105–124.
Harms, Robert T. 1962. Estonian grammar. Bloomington: Indiana University & The Hague: Mouton.


Harms, Robert T. 1966. Stress, voice, and length in Southern Paiute. International Journal of American Linguistics 32. 228–235.
Harris, James W. 1969. Spanish phonology. Cambridge, MA: MIT Press.
Harris, John. 1994. English sound structure. Oxford: Blackwell.
Hayes, Bruce. 1986. Inalterability in CV phonology. Language 62. 321–351.
Hulst, Harry van der & Colin J. Ewen. 1991. Major class and manner features. In Pier Marco Bertinetto, Michael Kenstowicz & Michele Loporcaro (eds.) Certamen phonologicum II: Papers from the 1990 Cortona Phonology Meeting, 19–41. Turin: Rosenberg & Sellier.
Iverson, Gregory K. & Joseph C. Salmons. 1995. Aspiration and laryngeal representation in Germanic. Phonology 12. 369–396.
Karttunen, Frances E. 1970. Problems in Finnish phonology. Ph.D. dissertation, Indiana University, Bloomington.
Kendall, Martha B. & Charles Bird. 1982. Initial consonant change in Soninké. Anthropological Linguistics 24. 1–13.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Keyser, Samuel J. & Paul Kiparsky. 1984. Syllable structure in Finnish phonology. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 7–31. Cambridge, MA: MIT Press.
Keyser, Samuel J. & Kenneth N. Stevens. 2006. Enhancement and overlap in the speech chain. Language 82. 33–63.
Lombardi, Linda. 1990. The nonlinear organization of the affricate. Natural Language and Linguistic Theory 8. 375–425.
Lombardi, Linda. 1996. Restrictions on direction of voicing assimilation: An OT account. University of Maryland Working Papers in Linguistics 4. 88–112.
McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
McLaughlin, John E. 1984. A revised approach to Southern Paiute phonology. Kansas Working Papers in Linguistics 9. 47–79.
Morphy, Frances. 1983. Djapu, a Yolngu dialect. In R. M. W. Dixon & Barry J. Blake (eds.) The handbook of Australian languages, vol. 3, 1–188. Amsterdam: John Benjamins.
Ní Chiosáin, Máire. 1991. Topics in the phonology of Irish. Ph.D. dissertation, University of Massachusetts, Amherst.
Padgett, Jaye. 1991. Stricture in feature geometry. Ph.D. dissertation, University of Massachusetts, Amherst.
Paradis, Carole. 1987a. Glide alternations in Pulaar (Fula) and the theory of charm and government. In David Odden (ed.) Current approaches to African linguistics 4, 327–338. Dordrecht: Foris.
Paradis, Carole. 1987b. Strata and syllable dependencies in Fula: The nominal class. Journal of African Languages and Linguistics 9. 123–139.
Paradis, Carole. 1992. Lexical Phonology and Morphology: The nominal classes in Fula. New York: Garland.
Pedersen, Holger. 1897. Aspirationen i Irsk. Copenhagen: Spirgatis.
Sampson, Geoffrey. 1973. Duration in Hebrew consonants. Linguistic Inquiry 4. 101–104.
Sapir, Edward. 1930. Southern Paiute, a Shoshonean language. Proceedings of the American Academy of Arts and Sciences 65. 1–296.
Skousen, Royal. 1972. On capturing regularities. Papers from the Annual Regional Meeting, Chicago Linguistic Society 8. 567–577.
Steriade, Donca. 1993. Closure, release, and nasal contours. In Marie K. Huffman & Rena A. Krakow (eds.) Nasals, nasalization, and the velum, 401–470. Orlando: Academic Press.


Steriade, Donca. 1994. Complex onsets as single segments: The Mazateco pattern. In Jennifer Cole & Charles W. Kisseberth (eds.) Perspectives in phonology, 203–291. Stanford: CSLI.
Thurneysen, Rudolf. 1898. Review of Holger Pedersen (1897). Anzeiger für indogermanische Sprach- und Altertumskunde. Beiblatt zu den Indogermanischen Forschungen 9. 42–48.
Uralic languages. 2010. In Encyclopædia Britannica. Available (August 2010) at www.britannica.com/EBchecked/topic/619069/Uralic-languages.
Vainikka, Anne. 1988. Finnish consonant gradation unified. Unpublished ms., University of Massachusetts, Amherst.
Wetzels, W. Leo. 1991. Contrastive and allophonic properties of Brazilian Portuguese vowels. In Dieter Wanner & Douglas A. Kibbee (eds.) New analyses in Romance linguistics, 77–99. Amsterdam & Philadelphia: John Benjamins.
Wiswall, Wendy J. 1989. Fula consonant gradation: In support of radical underspecification. Proceedings of the West Coast Conference on Formal Linguistics 8. 429–444.

66

Lenition

Naomi Gurevich

Lenition (German Lenierung, from Latin lenire 'weaken') is most commonly defined as "a 'relaxation' or 'weakening' of articulatory effort" (Hock 1991: 80). The term was coined by Thurneysen as one "used to describe a mutation of consonants which normally originated in a reduction of the energy employed in their articulation," and it affects mostly consonants in intervocalic position (Thurneysen 1946: 74).

Below I present the processes that most commonly fall under the label of lenition, and make some observations that emerge from this list. The similarities between these processes and the way in which they pattern support the mostly uncontroversial view that lenition is indeed a group of similar phenomena. Kirchner writes that to his knowledge "no linguist has ever explicitly maintained the contrary view, that 'lenition' is merely an arbitrary collection of unrelated processes" (1998: 5). But while most acknowledge that the processes considered leniting are indeed related and form a coherent group, defining the exact criteria for group membership remains largely controversial, and debates about the formalization, motivation, and even goal of lenition abound.

Three main approaches to lenition are presented here: formal, phonetic, and functional. The formal approach is the search, rooted in generative theory, to formalize synchronic rules of lenition processes. The phonetic approach seeks to determine what motivates the sound changes in question and what exactly constitutes "articulatory effort." The third, functional, approach looks to matters of contrast maintenance (a further respect in which lenitions pattern with great similarity) in its search for what constrains and sometimes triggers lenition phenomena.

1 Leniting processes

In this section I summarize the processes most commonly considered to fall under the cover term lenition. There is general agreement in the literature that all the processes listed below may be considered lenitions, but there is little agreement regarding the exact criteria required for membership in the lenition group. In coining the term, Thurneysen suggests that leniting processes are characterized by some reduction in articulatory effort, but to date there has been no agreement on exactly what this entails. Among linguists there are intuitions regarding articulatory effort, some more accepted than others. Voicing, for example, has an explanation rooted in the laws of physics, specifically aerodynamics: intervocalically the vocal cords may continue to vibrate after the first vowel, through the consonant, and into the second vowel; but in final position – where final devoicing is encountered – aerodynamic conditions are not conducive to voicing (Westbury and Keating 1986). Other processes listed below have explanations that are more difficult to quantify, such as "some reduction in constriction degree or duration" (Kirchner 1998: 1). Both open questions (the criteria for group membership, and how to formalize effort reduction) are taken up again in the discussion of the formal and phonetic approaches to lenition (§3.1 and §3.2).

1.1 Degemination

Degemination is the shortening of a CC cluster, where both consonants are the same, to a singleton C (see also chapter 37: geminates). Two examples are diachronic degemination in Numic *kk > k / V __ V (Manaster Ramer 1993) and word-final degemination in Tiberian Hebrew.

(1) Word-final degemination in Tiberian Hebrew (Malone 1993: 73)
[qal] 'light (masc)'   [qallɔː] 'light (fem)'

1.2 Deaspiration

Deaspiration is the loss or reduction of aspiration. For example, in the Pattani dialect of the Sino-Tibetan language Lahaul, aspiration of the bilabial voiceless stop is reduced in pre-accented syllables. In medial and final contexts, [pʰ] is in free variation with [p].

(2) Deaspiration in the Pattani dialect of Lahaul (Sharma 1982: 48)
a. Reduction of aspiration
   i. aspiration in accented syllable: pʰukə 'body'
   ii. reduced aspiration in pre-accented syllable: pʰukan 'flour'
b. [pʰ] ~ [p]
   i. ɖəgegpʰi ~ ɖəgegpi 'to tremble, shiver'
   ii. hjupʰŒi ~ hjupŒi 'to open'

1.3 Voicing

Voicing involves a change from a voiceless sound to a voiced one, and is a very common lenition process, second in prevalence only to spirantization (Gurevich 2004). Voicing usually affects whole series of sounds in the language where it applies. Although voicing can affect fricatives, as in the Na-Dene language Sekani of Canada (3), where voiceless initials of noun and postposition stems voice when prefixed or preceded by a nominal possessor or object, it is much more common with stops, as in the intervocalic voicing of the Yanomam language Sanuma (4).

(3) /s l ç x R/ → [z l ù ɣ w] in Sekani (Hargus 1985: 270–271)
a. xàs 'planning tool'   səɣàsè 'my planning tool'
b. çən 'song'   səùənè 'my song'
c. xàz 'windfall roots'   tse ɣàz-e 'Old Friend Mt. (roots stem)'

(4) /p t ts k/ → [b d dz g] / V __ V in Sanuma (Borgman 1990: 220)
a. ipa [ipa] or [iba] 'my'
b. hute [hute] or [hude] 'heavy'
c. hatsa [hatsa] or [hadza] 'deer'
d. ãka [ãka] or [ãga] 'tongue'

1.4 Spirantization

Cross-linguistically, this is by far the most common lenition process. Spirantization involves the change of a stop to a fricative, most commonly in intervocalic position. It quite often affects whole series of sounds in a language's inventory, as for example in the Paya dialect of the Chibchan language Kuna (5), or in the Tümpisa dialect of the Uto-Aztecan language Shoshone (6).

(5) b d g → β ð ɣ / V __ V following a stressed syllable in Paya Kuna (Pike et al. 1986: 459)
a. paβa 'father'
b. peðe 'you'
c. naɣa 'foot'

(6) p t c k kʷ → z h ç x xʷ / V __ [voiceless]V in Tümpisa Shoshone (Dayley 1989: xxviii–xxix)
a. wisipin [wiʃizê] 'thread'
b. tapettsi [taβetŒê] 'sun'
c. citoohin [ciðoːhê] 'push'
d. petɨ-gem [peðɨ] ~ [peh[] 'daughter' (ends with a geminating segment)
e. katɨ-gem [kaɾɨ] ~ [kaí[] 'sit'
f. miʔakwa [mhʔaɣwa] ~ [mhaɣwa] 'go away!'

1.5 Flapping

Flapping is a process whereby a sound is replaced by a flap (usually either alveolar [ɾ] or retroflex [ɽ]; see chapter 113: flapping in american english). Two examples are intervocalic flapping of the trill in the Sino-Tibetan language Kagate spoken in the village of Phedi (7) and final flapping of the retroflex stop in the Indo-European language Gujarati (8).

(7) Flapping in Kagate: /r/ → [ɾ] / V __ V (Hoehlig and Hari 1976: 19)
a. /tari/ [tâɾi] 'axe'
b. /tihriI/ [t≤ɾiI] 'today'
c. /guhri/ [guɾi] 'cat'

(8) Free variation of [ɖ] and [ɽ] in final position in Gujarati (Cardona 1965: 24)
[ùhaɖ] ~ [ùhaɽ] 'tree'

Quite often the alveolar and retroflex stops undergo flapping as part of a more general spirantization process that affects other stops. For example, in the Calabar-Creek Town dialect of Efik (a Niger-Congo language) /b d k/ → [β ɾ ɣ] non-initially in a word stem and before a vowel: where /b/ and /k/ spirantize, the alveolar /d/ flaps (9). In the case of the retroflex, [ɽ] appears to be the usual output of a process in which the rest of the stop series spirantizes. For example, in the Afro-Asiatic language Somali, /b d ɖ g/ spirantize to [β ð ɽ ɣ] intervocalically, especially after a stressed syllable: where /b d g/ spirantize, the /ɖ/ flaps (10). In the case of the alveolar stops, however, there is a discernible pattern: spirantization of the stop series in languages whose inventories include a phonemic trill usually results in a /d/ → [ð] substitution, while in trill-less languages the alveolar stop flaps (/d/ → [ɾ]) where the rest of the stop series spirantizes. This happens with regularity in all 33 languages where alveolar stops are affected (Gurevich 2004).

(9) Non-initial in a word stem and before a vowel in the Calabar-Creek Town dialect of Efik (Dunstan 1969: 38)
a. /b/ → [β]: dwòβ-èbà 'twelve'
b. /d/ → [ɾ]: íkàɾ-âkpâná 'name of a town'
c. /k/ → [ɣ]: úfäɣ-údwà 'market store'

(10) Spirantization of stops in Somali (Armstrong 1964)
a. ’laba [’Îaβa] 'two'
b. ’badag [’baðag] 'goose'
c. ’tiɖi [’tiɽi] 'she said'
d. ’sagaal [’saɣaal] 'nine'

1.6 Debuccalization

Debuccalization is the loss of place of articulation, preserving only glottal constriction and resulting most commonly in either [h] or [ʔ] (see also chapter 22: consonantal place of articulation). For example, in the Cuisnahuat dialect of the Uto-Aztecan language Pipil, a syllable-final /w/ becomes [h] in word-final or preconsonantal position (11). In the Austronesian language Toba Batak, preconsonantal voiceless stops surface as glottal stops (12).

(11) Debuccalization in the Cuisnahuat dialect of Pipil: /w/ → [h] (Campbell 1985: 34)
a. kuwa 'to buy'   kuhki 'bought'
b. puwa 'to count'   puhki 'counted'

(12) Debuccalization in Toba Batak: /p t k/ → [ʔ] / __ C (Hayes 1986: 341)
halak 'person'   halaʔ batak 'Batak person'

1.7 Gliding

Gliding is the replacement of stops or spirants with a homorganic glide (see chapter 15: glides). For example, in the Djapu dialect of the Australian language Yolngu, [{ Á] ~ [j] and [b g] ~ [w] in word-medial position following a vowel, liquid, or semivowel (13).

(13) Gliding in the Djapu dialect of Yolngu (Morphy 1983: 29)
a. minjʔci 'colour, paint' + {arpu-NG 'pierce' → minjʔci-jarpu-NG 'paint'
b. ÁarakaÎaʔju-N 'move in an uncontrolled way' → ÁarakaÎaʔ-jarakaÎaju-N 'keep moving in an uncontrolled way'
c. {aː 'mouth' + birkaʔju-N 'try' → {aː-wirkaʔju-N 'ask'
d. {awaÎ 'country' + gujaNi-Ø 'I think' → {awaÎ-wujaNi-Ø 'be born'

1.8 Loss

Loss is the deletion of a sound (most commonly a glide or a glottal) in certain contexts (see chapter 68: deletion), for example the occasional loss of the intervocalic glottals [ʔ] and [h] in the Uto-Aztecan language Tümpisa Shoshone (14), where the presence and absence of these sounds is in free variation.

(14) [ʔ h] ~ Ø in Tümpisa Shoshone (Dayley 1989: xxix)
a. miʔakwa [mhʔaɣwa] ~ [mhaɣwa] 'go away!'
b. soʔoppɨtɨn [søʔɔpːɨɾɨ] ~ [sɔːpːɨɾɨ] 'much, many'

1.9 Devoicing

Devoicing is the loss of voicing, usually in final position (see chapter 69: final devoicing and final laryngeal neutralization), for example the final devoicing of obstruents in Standard Bulgarian (15):

(15) Final devoicing in Bulgarian (Scatton 1984: 73)
a. grad-ove 'cities'
b. grat 'city'

2 Patterns of lenition

The examples in §1 touch on the cross-linguistic prevalence of some leniting phenomena, how common they are among the world’s languages, and how widespread their effect may be within the languages where they apply. In this section I elaborate on a few other patterns that emerge from the list of leniting processes. The degree to which each theory of lenition discussed in the following section accounts for these patterns provides added perspective into the differing points of view.


The processes described above operate in two main contexts: syllable/word-finally and intervocalically. The bulk of the processes, those that apply in intervocalic context, line up in a discernible sequence: the products of degemination and deaspiration (/tt tʰ/ → [t]) are the sounds that undergo voicing (/t/ → [d]), resulting in the sound that most commonly undergoes spirantization, flapping, debuccalization, or gliding (/d/ → [ð ʔ/h ɾ j]), and glides and glottals are the sounds most commonly lost (/j ʔ h/ → Ø). This chain-shift alignment of intervocalic lenition processes, where the output of some is the exact input of others, is illustrated in (16). In (17), the processes are listed in the order in which they could apply if they were to affect the same phoneme or its correspondent diachronically, although in some cases, like spirantization and flapping or debuccalization and gliding, this order is arbitrary.

(16) Hierarchy of input/output sounds in intervocalic lenition processes
tt tʰ > t > d > (h) ð ɾ > ʔ h j > Ø

(17) The general order in which intervocalic lenition processes might apply
Degemination: tt → t
Deaspiration: tʰ → t
Voicing: t → d
Spirantization: t d → (h) ð
Flapping: t d → ɾ
Debuccalization: t → ʔ h
Gliding: t → j
Loss: ʔ h j → Ø

If the oft-cited observation that "a segment X is said to be weaker than a segment Y if Y goes through an X stage on its way to zero" (Vennemann, cited in Hyman 1975: 165) is an accurate diagnostic of intervocalic consonant strength, then (16) lists consonants in order of their relative strength, from strongest to weakest (where [tt] is stronger than [t], which is stronger than [d], etc., down to the weakest possible outcome of lenition, which is Ø). The resulting "weakening hierarchy" gives rise to the notion of lenition as gradation toward loss. There are two ways in which the patterns illustrated in (16) and (17) manifest themselves in language data: as an outline of attested diachronic sound changes of the same phoneme or its correspondent, and as a list of synchronic sound substitutions that occur, often simultaneously, in any given language. Two examples of attested diachronic sound changes in which some of the lenition processes listed in (17) sequentially affect the same phoneme are the development of French, where intervocalic stops were voiced and then spirantized before eventual gliding (not shown in the example) and deletion (18), and that of Latin into Spanish, where there was a similar intervocalic voicing, spirantization/gliding, then loss (19).

(18) French lenition: t → d → ð → Ø (Jacobs 1994: 2)
fratrem > *[fradre] > [fraðre] > frère 'brother'

(19) Latin lenition (Hock 1991: 81)
pacatum > (*)pagado > Sp. [paɣaðo] > dialectal [paɣao] 'pacified, pleased'
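As a purely illustrative aside (mine, not part of the original chapter), the ordered substitutions in (17) can be thought of as rewrite rules applied in sequence, so that each additional step moves a segment one rung down the hierarchy in (16). The short Python sketch below uses hypothetical ASCII stand-ins for the IPA symbols.

```python
# Minimal sketch of the intervocalic weakening trajectory in (16)-(17).
# ASCII stand-ins (hypothetical): "tt" geminate, "th" aspirated stop,
# "D" voiced fricative, "0" zero. The ordering follows (17); the code is
# only an illustration, not a claim about how grammars encode lenition.
LENITION_CHAIN = [
    ("degemination",    "tt", "t"),
    ("deaspiration",    "th", "t"),
    ("voicing",         "t",  "d"),
    ("spirantization",  "d",  "D"),
    ("debuccalization", "D",  "h"),
    ("loss",            "h",  ""),
]

def weaken(segment: str, steps: int) -> str:
    """Apply up to `steps` applicable rules from the chain, in order."""
    applied = 0
    for _name, source, target in LENITION_CHAIN:
        if applied == steps:
            break
        if segment == source:
            segment = target
            applied += 1
    return segment or "0"

# Successive weakenings of a plain [t]: t > d > D > h > 0
for n in range(5):
    print(n, weaken("t", n))
```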


Lenition processes that operate simultaneously in a given language may also exhibit the pattern in which the output of one process is the exact input of another, but in some cases these actually affect different phonemes. For example, in a given language there could be synchronic voicing of voiceless stops and spirantization of voiced ones. In such cases the chain-shift pattern of [t] → [d] and [d] → [ð] is maintained, but the phone [d] that is the output of voicing and the phonetically comparable [d] that undergoes spirantization do not represent the same phoneme. Two examples of such phonemic overlap, an intersection of phonemes where "a given sound [. . .] may belong to two or more different phonemes in the same dialect" (Bloch 1941: 93), are the interaction between intervocalic voicing and spirantization in Northern Corsican, and that between debuccalization and loss in Nepali. In Northern Corsican, intervocalic voiceless stops are voiced (20), while the existing voiced stops of the language are spirantized in the same context (21). The phonetic contrast between intervocalic voiceless and voiced stops thus shifts to spirantization and is maintained. In the Indo-European language Nepali there is intervocalic debuccalization of [tsʰ] to [h] (Bandhu and Dahal 1971) and, in the same context, loss of the existing [h] in normal speech (22). So while the chain-shift pattern of [tsʰ] → [h] and [h] → Ø is maintained, the output of the first process and the input of the second are not the same phoneme, and the previous contrast between the phonemes /tsʰ/ and /h/ is preserved, shifting to /h/ vs. Ø.

(20) [p t k] → [b d g] / V __ V in Northern Corsican (Dinnsen and Eckman 1977: 6)
a. [peðe] 'foot'   [u beðe] 'the foot'
b. [tengu] 'I have'   [u dengu] 'I have it'
c. [kaza] 'house'   [a gaza] 'the house'

(21) [b d g] → [β ð ɣ] / V __ V in Northern Corsican (Dinnsen and Eckman 1977: 6)
a. [bokka] 'mouth'   [a βokka] 'the mouth'
b. [dente] 'tooth'   [u ðente] 'the tooth'
c. [gola] 'throat'   [di ɣola] 'of throat'

(22) Intervocalic loss of /h/ in Nepali (Bandhu and Dahal 1971: 26)
a. /bəhiro/ [bəiro] 'deaf'
b. /məhə/ [məə] 'honey'
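To make the contrast-preserving character of (20) and (21) concrete, here is a small editorial sketch (not the chapter's own analysis): both mappings apply in the same intervocalic context, so the old /p t k/ : /b d g/ opposition is shifted rather than erased, and no homophony arises. The ASCII forms, and the stand-ins B D G for the fricatives [β ð ɣ], are hypothetical.

```python
# Northern Corsican-style chain shift: voicing (20) and spirantization (21)
# apply in parallel, the latter to the *underlying* voiced stops.
SHIFT = {"p": "b", "t": "d", "k": "g",   # (20) voicing of voiceless stops
         "b": "B", "d": "D", "g": "G"}   # (21) spirantization; B D G stand in
                                         # for the fricatives [β ð ɣ]
VOWELS = set("aeiou")

def lenite_intervocalic(word: str) -> str:
    out = list(word)
    for i in range(1, len(word) - 1):
        if word[i - 1] in VOWELS and word[i + 1] in VOWELS and word[i] in SHIFT:
            out[i] = SHIFT[word[i]]
    return "".join(out)

# A hypothetical minimal pair stays distinct after both processes apply:
print(lenite_intervocalic("upede"))  # ubeDe (voicing of /p/, spirantization of /d/)
print(lenite_intervocalic("ubede"))  # uBeDe (spirantization of /b/ and /d/)
```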

The discussion of phonemic overlap brings up two additional features common to most lenition processes. First, Nepali intervocalic /h/-loss is reported to occur in normal speech. This added dimension to what we know about the context of many lenition processes is rather common cross-linguistically, and many leniting sound substitutions are reported to occur mostly in relaxed, fast, and normal speech (see chapter 79: reduction). Second, the matter of contrast maintenance is raised (see chapter 2: contrast). In both the Northern Corsican and the Nepali examples the phonetic distinction between two phonemes is threatened by a leniting sound substitution (in Corsican the /p/:/b/, /t/:/d/, /k/:/g/ oppositions are threatened by the voicing of /p t k/, and in Nepali the /tsʰ/:/h/ opposition is threatened by the debuccalization of /tsʰ/ to [h]). In both cases, however, the distinctions are maintained by an additional leniting sound substitution in the same context (in Corsican the intervocalic spirantization of the existing voiced stops shifts the /p/:/b/, /t/:/d/, /k/:/g/ contrast to /b/:/β/, /d/:/ð/, /g/:/ɣ/, and in Nepali the loss of intervocalic /h/ shifts the /tsʰ/:/h/ contrast to /h/: Ø). As it turns out, lenition phenomena in general very rarely lead to neutralization, and almost never result in homophonous forms. A survey of 230 mostly leniting processes in 153 languages found that 92 percent avoid neutralization, while only 8 percent could potentially result in the kind of homophony that leads to the loss of lexical distinction that may interfere with communication (Gurevich 2004).

In summary, several ways in which lenition behaviors pattern together have been outlined in this section. These are listed in (23).

(23) Patterns of lenitions
a. The prevalence of certain processes (that is, how common some lenition processes are cross-linguistically and whether they affect a single sound or an entire series).
b. The fact that many lenition processes are reported to occur in natural or fast speech.
c. The gradation pattern of intervocalic lenition processes.
d. Phonemic overlap.
e. The strong tendency of lenition phenomena to avoid neutralization.

Current debates on lenition focus on formulating a unified description of all leniting processes, isolating what exactly motivates them and, in some cases, what constrains them. Exploring how, and the degree to which, each approach accounts for the patterns presented here sheds light on the foundation of each theory and its capacity to accommodate empirical data.

3 Theoretical approaches to lenition

There is broad agreement among phonologists about the main processes that can be considered leniting, but formalizing this agreement has proved controversial. In this section, three main approaches to the question are explored. The formal approach (§3.1) seeks to define unified rules that would encode all vital information about lenition processes. These rules should form a model that can be used to determine unambiguously which processes are leniting, including all those that are and excluding all those that are not. The phonetic approach (§3.2) seeks to isolate the underlying physical causes of all lenition processes. The third approach (§3.3) builds on the contrast-maintaining behavior of most lenition processes to identify the forces that may constrain the progress and outcome of such sound changes.

3.1 The formal approach

The formal approach to lenition is taken by generative grammarians. Its goal is to formalize a set of purely synchronic rules that would model all cases of lenition. Three notable formalizations of lenition under this approach are feature spreading, sonority promotion, and simplification. Additional formalizations exist, but they are mostly variations on these three models. All three models attempt to subsume the various sound changes that can be considered leniting, while excluding all other processes, under one formal expression. Success in this endeavor would result in a rule to be included in Universal Grammar (UG).

Lenition as autosegmental feature spreading (e.g. Jacobs and Wetzels 1988) involves the spreading of some feature of the surrounding sounds to the element undergoing lenition (see chapter 81: local assimilation), for example the spreading of either the [+voiced] or the [+continuant] feature of the vowels surrounding an intervocalic stop to that stop, causing it to either voice or spirantize. This formalization works well for voicing and spirantization, but not for the leniting process of debuccalization, which, if anything, involves the delinking of features rather than the acquisition of new ones. Additional rules would also be required to predict when a stop is voiced and when it is spirantized, since the surrounding vowels possess both features.

Lenition as sonority promotion (e.g. Hock 1991; Lavoie 1996) formalizes lenition rules as replacing a sound by a more sonorous version of itself in certain contexts. Sonority is determined on the basis of the principle that "requires onsets to rise in sonority toward the nucleus and codas to fall in sonority from the nucleus" (Kenstowicz 1994: 254; see also chapter 49: sonority). On the scale of sonority, stops are least sonorous, followed by fricatives, nasals, liquids, glides, and finally vowels, which are most sonorous. Lenition as sonority promotion is descriptively accurate for some of the leniting processes, such as spirantization and gliding, but as a unified formalization of lenition it fails to include other processes commonly considered leniting, such as deaspiration and degemination, neither of which has a more sonorous output than input.

A third formal view of lenition is one of simplification, where segmental complexity is measured by the number of features required to describe a consonant, and lenition is a process that reduces this complexity by delinking some of the features. For example, deaspiration would involve the delinking of laryngeal features, and debuccalization would delink place of articulation features. When this formalization is faced with a leniting sound change that does not appear to reduce the number of basic features of a given element, it turns to markedness for help: markedness is used as a measure of some degree of "naturalness," meant to make phonological features less abstract in terms of their intrinsic content (e.g. Chomsky and Halle 1968; Guitart 1976; McMahon 1994; Rice 1999). An element is considered marked by definition if it is less natural or more complex than another. In this way, every case that does not immediately conform to the view of lenition as simplification is solved, because within this approach the input of any lenition process is by definition more marked than the output; hence every lenition process is a move to the unmarked, or less complex, state (see chapter 4: markedness).

Let us examine how the formal models presented here account for the patterns of lenition discussed in §2. Although there is no explicit concern with the prevalence of some lenition processes over others (23a), the frequency of any element in comparison with another can be accommodated within generative theory as part of the UG principles of markedness: the less marked elements are expected to be more frequent.
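As an editorial illustration (not from the chapter), the sonority-promotion model reduces to a simple comparison on an ordered scale, which also makes its coverage gap easy to see: deaspiration and degemination change no segment class at all. The class labels below are a hypothetical encoding.

```python
# "Lenition as sonority promotion": a substitution counts as lenition
# iff the output class is more sonorous than the input class.
SONORITY = {"stop": 0, "fricative": 1, "nasal": 2, "liquid": 3, "glide": 4, "vowel": 5}

def is_sonority_promotion(input_class: str, output_class: str) -> bool:
    return SONORITY[output_class] > SONORITY[input_class]

print(is_sonority_promotion("stop", "fricative"))  # True:  spirantization qualifies
print(is_sonority_promotion("stop", "glide"))      # True:  gliding qualifies
print(is_sonority_promotion("stop", "stop"))       # False: deaspiration and
                                                   # degemination are wrongly excluded
```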
The view of lenition as gradation (23c) is relevant to both the sonority promotion and the simplification models: both formalize lenition as a move along a graded scale, either of sonority or of segmental complexity. The sonority scale emulates to a large extent the lenition hierarchy mapped out in (16), and the case of loss especially provides compelling support for lenition as simplification. Due to its superlative nature, the output of loss is trivially the least complex, least marked, and most natural segment, which, as the ultimate step in the gradation pattern, gives the impression that it is the goal. The remaining three patterns, fast speech (23b), phonemic overlap (23d), and the contrast-maintaining behavior of lenitions (23e), are not addressed.

The advent of Optimality Theory (Prince and Smolensky 1993) allows for the incorporation of elements of phonetic and functional detail into the formal grammatical expression of lenition patterns. Kirchner (1998), for example, incorporates notions of "articulatory ease" directly into his formal statement, in the form of so-called "lazy" constraints that are ranked with respect to (presumably "non-lazy") faithfulness constraints. Such a research program provides a very promising link to the phonetic and functional pressures that are demonstrably acting on patterns of lenition. We hold off our investigation of such approaches, despite their formal rigor, until the next section.

To summarize, the formal approach to lenition arises from the generative tradition. Its main goal is to find a unified formal rule that would subsume all the various processes that can be considered leniting. If such a rule is defined, it would be recognized as a linguistic universal, and the question of lenition would be solved. If a model is posited that does not apply to all processes generally considered leniting, then the offending processes are either amended (as in the case of the simplification model's use of markedness) or removed from the list of lenitions, on the argument that the model is sound, and if it is sound and does not include X, X must not be relevant. For example, degemination does not fit the view of lenition as "delinking of privative features" (a variation on the simplification model), so it has been argued that this process should be excluded from the list of unambiguously agreed-on lenition phenomena (Szigetvári 2008).

3.2 The phonetic approach

The second approach is more in the spirit of how the term lenition came to be coined: an observation that certain processes are common in certain environments, and that some "reduction of articulatory energy" is associated with these sound changes. Research in this direction is especially concerned with the physical motivation behind lenition, be it articulatory effort reduction, prosody maintenance, acoustics and perception, or possibly some combination of these factors.

Ohala (1981) notes that sound changes which are attested in diverse and unrelated languages are likely to have a phonetic origin. The degree to which lenition processes are common cross-linguistically suggests that they are motivated by what is common among speakers, i.e. biological factors, which include the physical shape of the vocal organs, their movement, and their acoustic correlates. However, while it is widely assumed that lenition is conditioned by phonetic properties such as ease of production and perception (e.g. Flemming 1995; Jun 1995; Kirchner 1998; Steriade 2000), how exactly these properties should be described and measured is far from settled.

The concept of effort minimization precedes formal generative theories: Zipf, as early as 1935, suggests that the frequency of sounds depends on their degree of articulatory complexity. Rejection of this concept, based on the fact that effort is difficult to measure, was swift: Trubetzkoy (1939) argues that it is difficult to pinpoint the degree of complexity of sounds (e.g. which is more complex: tense vocal cords and relaxed mouth organs, or lax vocal cords and tense mouth organs?). Trubetzkoy's objection to describing sounds in terms of phonetic complexity led to the introduction of markedness values and the eventual move toward formalizing phonological rules completely removed from phonetic information.

But interest in characterizing lenition in terms of effort minimization has not waned. Kirchner's (1998) phonetically based approach posits an effort minimization model of lenition in which greater articulatory movement constitutes greater effort, and the push to reduce this effort results in the reduction of the constriction degree or duration of an affected sound. Kirchner incorporates phonetic theory into a formal approach and models it within the framework of Optimality Theory (Prince and Smolensky 1993), where conflicting universal constraints are ranked with respect to each other. Articulatory effort minimization is thus identified as a constraint (Lazy) that is ranked with respect to the counter-force constraints of faithfulness and fortition. Lenition is thereby viewed as a force, encoded in universal grammar, to reduce articulatory effort by reducing articulatory movement and timing, which results in falling short of articulatory targets.

Kingston argues that:

the differences in effort between the lenited and unlenited pronunciations are so miniscule that they can hardly be what motivates a speaker to lenite. Both the differences in the distance the articulators travel (mere millimeters) and the time scales (at most tens of milliseconds) are much too small for effort to differ detectably between the two pronunciations. (Kingston 2008: 1)

He suggests that lenition's purpose is to maintain prosodic structure. He shows that lenition occurs most commonly inside prosodic constituents and argues that it is meant to communicate a continuing constituent, thereby reducing a sound's interruption of the stream of speech. On this view, lenited pronunciation is the result of achieving a specific target that produces the desired acoustic consequences, such as greater intensity, rather than of falling short of the desired target. Kingston also notes that "[l]enition is more likely in more frequent words than less frequent ones, because the listener needs less information to recognize more frequent words" (2008: 17); see also chapter 90: frequency effects.

Acoustics and perception must also play some role in lenition, even if only as part of the natural interaction between speaker and hearer: "Speaker and hearer are interested in communicating and will pronounce words only as they have heard them (or think they have heard them) pronounced by others" (Ohala 1981: 197); see also chapter 98: speech perception and phonology. Acoustic considerations may affect the perceived differences between certain sounds in certain contexts, which may facilitate or otherwise influence lenition phenomena. For example, acoustic theory suggests that prevocalic distinctions are more perceptible than preconsonantal ones (e.g. Silverman 1995; Ségéral and Scheer 1999; Steriade 1999). Since lenition processes are most frequent in intervocalic contexts, it is possible that lenition proceeds more easily in contexts where a sound is easier to perceive even when lenited. Kaplan (2008) suggests another angle: that in certain contexts the perceptual difference between a sound and its lenited counterpart plays a role in the prevalence of some lenition processes. She has had some success in showing that the perceptual difference between intervocalic voiced stops and spirants is smaller than that between voiced and voiceless stops, which is somewhat consistent with the frequency of spirantization, although it is too soon to draw any broad conclusions from such a limited study. The interaction between articulation and perception, speaker and hearer, also plays a role in the functional approach discussed in the following section.

Turning to the patterns listed in §2: by attributing the sound changes to physical properties common to all speakers, a unified approach to lenition that is based on phonetic considerations accounts for the similarities between, and to some extent the frequency of, cross-linguistic lenition processes (23a), and for the fact that many leniting changes are reported to occur in relaxed or fast speech (23b). In fact, it is these empirical observations that suggest lenition phenomena may be phonetically driven in the first place. The view of lenition as gradation (23c), and especially the weakening hierarchy in (16), are central to effort-reduction theories. As with some formal models discussed above, the case of loss provides compelling support for the view of lenition as a graded move along a scale of effort minimization: loss is the ultimate step in lenition, and it results in an element that unambiguously requires the least effort to articulate. Phonemic overlap (23d) is not addressed, but the strong tendency of lenition phenomena to avoid neutralization (23e) is inherent in Kingston's observation that lenition is more likely to affect more common words, which rely less on acoustic cues to be recognized. Lenition is less likely to proceed unhindered and have widespread consequences if it obliterates meaning distinctions to the point where it interferes with communication (Gurevich 2004). This is also supported by the acoustic studies that suggest distinctions are more perceptible in prevocalic contexts, where most lenition phenomena occur. I return to this point in the following section.

3.3 The functional approach

The functional approach focuses on the effect that leniting sound substitutions have on contrasts in the languages where they apply, that is, on the fact that lenition phenomena very rarely result in obliteration of contrast (neutralization). This is the focus of Gurevich (2004), who investigates 230 mostly leniting processes in 153 languages, and finds that 92 percent of these avoid neutralization. The approach is termed functional because the meaning distinctions of words depend on the system of contrasts in a given language, and these distinctions affect communication, which is the primary function of language.

This approach to lenition is not independent of phonetically based motivation, and does not explicitly contradict any of the models presented in the previous section. It does, however, qualify the extent to which physical properties can drive sound changes. If there is a physical "push" – whatever that push may be – to lenite, the degree to which the substitution affects contrast in a given language may hinder the progress of the sound change. Lenition processes that threaten contrast may lead to a loss of lexical distinctions which, in turn, could induce a significant amount of homophony and result in confusion. And since confusing signals are less likely to be reproduced as listeners become speakers (Silverman 2006), neutralizing sound substitutions, regardless of the physical "push" to invoke them, are less likely to become widespread.


Gurevich (2004) clearly shows that the progress and outcome of lenition processes are constrained by the functional considerations of contrast maintenance. The systems of contrast in languages appear to exert a gradual diachronic force over phonetic processes, affecting the progress and outcome of such processes depending on the degree to which they threaten contrast. Lenition processes that do not threaten contrast are far more likely to proceed unhindered, with the widespread consequence of affecting entire series of sounds in a language (this most commonly happens with voicing and spirantization).

Changes that somewhat threaten contrast pattern according to the shape of the phonemic inventory of a language. An example of this is the flapping that patterns with spirantization discussed in §1.5: in languages where the entire stop series is spirantized, the retroflex stop [ɖ] always results in a flap [ɽ], while the alveolar stop [d] results in a flap [ɾ] only in languages that do not have a phonemic trill; otherwise it spirantizes to [ð]. This suggests that flapping may be the preferred outcome of alveolar spirantization, as in the case of the retroflex stops, but is avoided when contrast is threatened. The threat comes in the form of a phonemic trill, a sound that is phonetically similar to a flap (the /r/:/ɾ/ contrast exists, but is rare; it is found in only three of the 153 languages investigated).

Finally, changes that clearly threaten contrast, such as sound mergers and loss, often induce further changes that reshape phonological systems, thereby avoiding contrast obliteration: for example, the cases of phonemic overlap discussed in §2 (e.g. in (20), where [p t k] → [b d g], and (21), where [b d g] → [β ð ɣ] intervocalically in Northern Corsican, which results in the avoidance of neutralization between the output of the voicing process and the existing voiced stops of the language), or contrast shifts, where a sound may be deleted but its absence maintains contrast with Ø, an example of which is provided below in (24).

Phonetically conditioned sound changes that are found to be neutralizing (18 of the 230 processes, or 8 percent) are more common in preconsonantal positions, where contrast is less perceptible and harder to maintain (see again Silverman 1995, Ségéral and Scheer 1999, and Steriade 1999). That is, the potential obliteration of contrast to the extent where it could interfere with lexical distinctions occurs more frequently in contexts where contrast is already less perceptible, in which case fewer words should depend exclusively on such contrasts for their distinctions. Hence the potential of these few neutralizing sound substitutions to hinder communication by inducing homophony is reduced, which further suggests that the relationship between leniting sound substitutions and contrast is not arbitrary.

Turning again to the patterns listed in §2, the cross-linguistic similarities of lenition processes are accounted for by relying on phonetically based motivation, and, in addition, the prevalence of certain processes, and the possible wide-reaching consequences of affecting entire series of sounds within a language's inventory (23a), are accounted for directly by the degree to which these sound substitutions affect contrasts in a given language. That is, the less likely a sound substitution is to induce homophony, the more likely it is to proceed unhindered and have a widespread effect on the sound system of a language, as is in fact the case for both voicing and spirantization.
Voicing, which comprises 39 cases of the 230 investigated, never results in neutralization and is not only common cross-linguistically but also most likely to affect whole series of sounds in the languages where it applies. Spirantization, of which there are 76 cases, is 95 percent non-neutralizing, is the most common cross-linguistic form of lenition, and is also likely to affect entire series of sounds.

The fact that lenitions commonly occur in natural or fast speech (23b) is not explicitly addressed within the functional approach, except as a natural consequence of its reliance on phonetic motivation as the force that drives most lenitions. The gradation pattern of intervocalic lenition processes (23c) has no implications for the functional approach. Interestingly, although the effect of contrast considerations on lenition is not related to the view of such processes as a trajectory of consonants toward their weakest state, the case of loss – which provides compelling support for the gradation view of several models discussed above – also plays a key role here. This final step in the view of lenition as erosion also has a superlative consequence for contrast: loss always results in phonetic neutralization, because the elimination of a phoneme in some context always obliterates the phonetic distinction between that phoneme and Ø. But meaning distinctions are actually preserved in 71 percent of the cases of loss in the corpus of 230 processes. The most common ways in which contrast is maintained in such cases are phonemic overlap, as in the examples in (20)–(22) above, and contrast shifts such as the one in Bulgarian (24), where nasals are lost between vowels and fricatives, followed by nasalization of the vowel. Here the /n/ is lost, but the nasalization of the preceding vowel maintains the distinction between /n/ and Ø.

(24) Loss of nasal and nasalization of vowel in Bulgarian (Scatton 1984: 57)
/onzi/ [õzi] 'that'
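A simple way to see the logic of (24) is as a neutralization test over a toy lexicon. The sketch below is an editorial illustration only: the mini-lexicon is hypothetical, and "õ" stands in for the nasalized vowel.

```python
# Editorial sketch: a process is neutralizing iff two distinct underlying
# forms come to share one surface form.
def neutralizes(lexicon, process):
    surfaced = {process(form) for form in lexicon}
    return len(surfaced) < len(set(lexicon))

# Bulgarian-style contrast shift as in (24): /n/ is lost between a vowel and
# a fricative, but the preceding vowel nasalizes, so a hypothetical pair
# /onzi/ vs. /ozi/ remains distinct ([õzi] vs. [ozi]).
def n_loss_with_nasalization(form):
    return form.replace("onz", "õz")  # toy stand-in for the V-n-fricative case

print(neutralizes({"onzi", "ozi"}, n_loss_with_nasalization))  # False

# Bare deletion without nasalization *would* neutralize the same pair:
print(neutralizes({"onzi", "ozi"}, lambda f: f.replace("n", "")))  # True
```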

Phonemic overlap (23d) is central to the functional approach. It shows how leniting processes that threaten contrast may induce further changes, and it suggests that some leniting changes may actually be, at least partially, triggered by contrast maintenance. Finally, the strong tendency of lenition to avoid neutralization (23e) is, of course, the basis of the functional approach, which posits that this tendency is what constrains the progress and outcome of lenitions. One could question the significance of a pattern that is so important to one approach but not addressed by the others. However, it is an extremely prevalent pattern, and one that lenitions share more consistently than almost any other characteristic. Its omission from all formal and most phonetically based models stems in part from the fact that this pattern had not previously been investigated, and in part from the general belief that, because contrasts are language-specific, they have no place in universal grammar.

4 Conclusion

Leniting sound changes are common and exhibit similar cross-linguistic behaviors – this much we know. How to formalize this information in a unified model of lenition that all phonologists can agree on has so far eluded us. Depending on one's approach, such strong cross-linguistic similarities must be encoded in UG and/or grounded in the physical properties of both speakers and hearers. Current debates range from the generative question of how to encode lenition as purely phonological, synchronic, universal rules, to isolating the physical driving force that motivates these sound changes, to how the systems of contrast in a given language play a role in constraining the progress and outcome of lenition processes.

The various models of lenition presented in this chapter take differing approaches to the question of how to delimit the collection of processes that almost everyone agrees are related. Five general patterns of lenitions – all based to some extent on empirical data – have been identified. The relative significance of each tendency depends on the theory one supports. Below is a summary of each pattern's role within the differing approaches to lenition.

(a) The prevalence of certain lenition processes, as well as their overall cross-linguistic similarity: the cross-linguistic frequency of certain lenitions is implicit in the markedness constraints of formal models, where the less marked element is more natural and therefore more frequent, and explicit in the phonetic approach's reliance on physical properties which are common to all speakers and hearers. The prevalence of some processes over others, as well as how widespread their consequences may be, is predicted by the contrast-maintenance considerations of the functional approach, where the less a process threatens contrast the more prevalent it is.

(b) The fact that many lenitions are reported to occur in natural or fast speech: this pattern is not addressed by the formal models, but it is inherent in the view of lenition as motivated by physical properties and therefore fits both the phonetic and the functional approaches.

(c) The gradation pattern of intervocalic lenition processes: this pattern emerges from the juxtaposition of diachronic lenition processes, and it is critical to the sonority scale and simplification models of lenition under the formal approach, and to the view of lenition as consonant erosion within the effort-minimization model. This view of lenition is crucial to the characterization of lenitions as all part of one general process, a notion that is most advantageous to theories concerned with building a UG, whose hypothesized existence is directly tied to encoding rules in their most general and simple manner.

(d) The phonemic overlap pattern of some lenitions: this pattern often emerges in languages where two synchronic lenition processes interact, in the sense that the output of one is phonetically similar to the input of another. It is addressed only by the functional approach, which views such cases as an indication that the threat of contrast obliteration may trigger additional lenitions.

(e) The strong tendency of lenition phenomena to avoid neutralization: this recently identified tendency is the foundation on which the functional approach is built. Since the pattern is based on contrast considerations, which are language-specific, formal models do not address it in their search for universals.

REFERENCES

Armstrong, Lilias E. 1964. The phonetic structure of Somali. Ridgewood, NJ: Gregg Press.
Bandhu, C. M. & B. M. Dahal. 1971. Nepali segmental phonology. Kirtipur: Tribhuvan University.
Bloch, Bernard. 1941. Phonemic overlapping. American Speech 16. 278–284. Reprinted in Martin Joos (ed.) 1957. Readings in linguistics: The development of descriptive linguistics in America since 1925, 93–96. Chicago: Chicago University Press.
Borgman, Donald M. 1990. Sanuma. In Desmond C. Derbyshire & Geoffrey K. Pullum (eds.) Handbook of Amazonian languages, vol. 2, 15–248. Berlin & New York: Mouton de Gruyter.


Campbell, Lyle. 1985. The Pipil language of El Salvador. Berlin: Mouton.
Cardona, George. 1965. A Gujarati reference grammar. Philadelphia: University of Pennsylvania Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Dayley, Jon P. 1989. Tümpisa (Panamint) Shoshone dictionary. Berkeley: University of California Press.
Dinnsen, Daniel A. & Fred R. Eckman. 1977. Some substantive universals in atomic phonology. Lingua 45. 1–14.
Dunstan, Elizabeth (ed.) 1969. Twelve Nigerian languages. New York: Africana Publishing Corporation.
Flemming, Edward. 1995. Auditory representations in phonology. Ph.D. dissertation, University of California, Los Angeles.
Guitart, Jorge M. 1976. Markedness and a Cuban dialect of Spanish. Washington, DC: Georgetown University Press.
Gurevich, Naomi. 2004. Lenition and contrast: The functional consequences of certain phonetically conditioned sound changes. New York & London: Routledge.
Hargus, Sharon. 1985. The lexical phonology of Sekani. Ph.D. dissertation, University of California, Los Angeles. Published 1988, New York: Garland.
Hayes, Bruce. 1986. Inalterability in CV phonology. Language 62. 321–351.
Hock, Hans Henrich. 1991. Principles of historical linguistics. 2nd edn. Berlin & New York: Mouton de Gruyter.
Hoehlig, Monika & Maria Hari. 1976. Kagate phonemic summary. Kathmandu: Institute of Nepal and Asian Studies, Tribhuvan University.
Hyman, Larry M. 1975. Phonology: Theory and analysis. New York: Holt, Rinehart & Winston.
Jacobs, Haike. 1994. Lenition and Optimality Theory. Unpublished ms., University of Nijmegen (ROA-127).
Jacobs, Haike & Leo Wetzels. 1988. Early French lenition: A formal account of an integrated sound change. In Harry van der Hulst & Norval Smith (eds.) Features, segmental structure and harmony processes, part I, 105–129. Dordrecht: Foris.
Jun, Jongho. 1995. Perceptual and articulatory factors in place assimilation: An Optimality Theoretic approach. Ph.D. dissertation, University of California, Los Angeles.
Kaplan, Abby. 2008. Perceptual, articulatory, and systemic influences on lenition. Unpublished ms., University of California, Santa Cruz.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Kingston, John. 2008. Lenition. In Laura Colantoni & Jeffrey Steele (eds.) Selected Proceedings of the 3rd Conference on Laboratory Approaches to Spanish Phonology, 1–31. Somerville, MA: Cascadilla Press.
Kirchner, Robert. 1998. An effort-based approach to consonant lenition. Ph.D. dissertation, University of California, Los Angeles.
Lavoie, Lisa. 1996. Consonant strength: Results of a data base development project. Working Papers of the Cornell Phonetics Laboratory 11. 269–316.
Malone, Joseph L. 1993. Tiberian Hebrew phonology. Winona Lake, IN: Eisenbrauns.
Manaster Ramer, Alexis. 1993. On lenition in some Northern Uto-Aztecan languages. International Journal of American Linguistics 59. 334–341.
McMahon, April. 1994. Understanding language change. Cambridge: Cambridge University Press.
Morphy, Frances. 1983. Djapu, a Yolngu dialect. In R. M. W. Dixon & Barry J. Blake (eds.) The handbook of Australian languages, vol. 3, 1–188. Amsterdam: John Benjamins.
Ohala, John J. 1981. The listener as a source of sound change. Papers from the Annual Regional Meeting, Chicago Linguistic Society 17(2). 178–203.


Pike, Eunice V., Keith Forester & Wilma J. Forester. 1986. Fortis versus lenis consonants in the Paya dialect of Kuna. In Benjamin F. Elson (ed.) Language in global perspective: Papers in honor of the 50th anniversary of the Summer Institute of Linguistics 1935–1985, 451–464. Dallas: Summer Institute of Linguistics.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Rice, Keren. 1999. Featural markedness in phonology: Variation. GLOT International 4.7: 3–6, 4.8: 3–7.
Scatton, Ernest A. 1984. A reference grammar of modern Bulgarian. Columbus: Slavica.
Ségéral, Philippe & Tobias Scheer. 1999. The coda mirror. Unpublished ms., Université Paris 7 & Université de Nice.
Sharma, D. D. 1982. Studies in Tibeto-Himalayan linguistics: A descriptive analysis of Pattani (a dialect of Lahaul). Hoshiarpur: Panjab University.
Silverman, Daniel. 1995. Phasing and recoverability. Ph.D. dissertation, University of California, Los Angeles.
Silverman, Daniel. 2006. A critical introduction to phonology: Of sound, mind, and body. London & New York: Continuum.
Steriade, Donca. 1999. Alternatives to syllable-based accounts of consonantal phonotactics. In Osamu Fujimura, Brian Joseph & Bohumil Palek (eds.) Item order in language and speech, 205–242. Prague: Karolinum Press.
Steriade, Donca. 2000. Paradigm uniformity and the phonetics–phonology boundary. In Michael B. Broe & Janet B. Pierrehumbert (eds.) Papers in laboratory phonology V: Acquisition and the lexicon, 313–334. Cambridge: Cambridge University Press.
Szigetvári, Péter. 2008. What and where. In Joaquim Brandão de Carvalho, Tobias Scheer & Philippe Ségéral (eds.) Lenition and fortition, 93–130. Berlin & New York: Mouton de Gruyter.
Thurneysen, Rudolf. 1946. A grammar of Old Irish. Dublin: Dublin Institute for Advanced Studies.
Trubetzkoy, Nikolai S. 1939. Grundzüge der Phonologie. Göttingen: Vandenhoeck & Ruprecht. Translated 1969 by Christiane A. M. Baltaxe as Principles of phonology. Berkeley & Los Angeles: University of California Press.
Westbury, John R. & Patricia Keating. 1986. On the naturalness of stop consonant voicing. Journal of Linguistics 22. 145–166.
Zipf, George K. 1935. The psycho-biology of language: An introduction to dynamic philology. Boston, MA: Houghton Mifflin.

67 Vowel Epenthesis

Nancy Hall

1 Introduction

The term “vowel epenthesis” can refer to any process in which a vowel is added to an utterance. Beyond this simple description, however, vowel epenthesis processes vary enormously in their characteristics, and many aspects of their typology are still not well understood. Accordingly, the empirical focus of this chapter is on the heterogeneity of vowel epenthesis processes. This chapter is organized around several empirical questions, namely: What is the function or cause of vowel epenthesis (§2)? What determines the location (§3) and quality (§4) of an epenthetic vowel? Do epenthetic vowels differ phonetically or psycholinguistically from lexical vowels (§5)? What distinguishes an excrescent vowel (§6)? How does vowel epenthesis interact with other phonological processes (§7)? Finally, §8 reviews research on epenthetic vowels in loanwords, and revisits some of the previous questions to discuss how the answers may differ in the case of loanwords. Throughout this chapter, epenthetic vowels are underlined for visual clarity.

2 What is the function/cause of vowel epenthesis?

In most cases, the function of vowel epenthesis is to repair an input that does not meet a language’s structural requirements. In particular, vowel epenthesis allows the surfacing of consonants that underlyingly appear in phonotactically illegal contexts. For example, Lebanese Arabic inserts vowels into many CC codas to break up undesirable clusters. Epenthesis is more or less obligatory in coda clusters of an obstruent followed by a sonorant, as in (1a), and optional in most other clusters, as in (1b) (see Haddad 1984a for a detailed breakdown of coda types).

(1) Epenthesis in Lebanese Arabic (Abdul-Karim 1980: 32–33)

    a. /ʔism/   ʔisim          ‘name’
       /ʔibn/   ʔibin          ‘son’
       /ʃiɣl/   ʃiɣil          ‘work’

    b. /kibʃ/   kibʃ ~ kibiʃ   ‘ram’
       /sabt/   sabt ~ sabit   ‘Saturday’
       /nafs/   nafs ~ nafis   ‘self’



There is controversy over exactly how to analyze the phonotactic requirements that motivate epenthesis. Probably the most popular approach is to assume that epenthesis allows the syllabification of stray consonants (Itô 1989), but Broselow (1982) explores the idea that some epenthesis is simply triggered by particular sequences of consonants, irrespective of syllable structure requirements. Côté (2000) argues that epenthesis is motivated primarily by the need to make consonants perceptible, based on the Licensing by Cue approach of Steriade (1994). For example, one of the main cues that listeners rely on to identify the place features of consonants is the formant transitions on neighboring vowels. Hence, a consonant that is not adjacent to a vowel is harder to identify (see chapter 46: positional effects in consonant clusters).

In a case like Lebanese Arabic, it might be argued that claiming a structural motivation for vowel epenthesis is circular, given that this optional vowel epenthesis is the only evidence that such clusters are marked in the language. But in some languages, vowel epenthesis is only one of a “conspiracy” of processes removing a particular cluster type. In Welsh, for example, codas with rising sonority are repaired through deletion, as in (2a), lenition (2b), metathesis (2c), or vowel epenthesis (2d), while codas with falling sonority are left intact.

(2) Welsh repair of obstruent–sonorant codas (Awbery 1984)

    a. /fenestr/ → feːnest   ‘window’ (southern dialect)
    b. kevn > kewn           ‘back’ (Pembrokeshire dialect)
    c. sɔvl > sɔlv           ‘stubble’ (north-east dialect)
    d. /kevn/ → keːven       ‘back’ (southern dialect)

The fact that all four processes target the same cluster type supports the idea that this cluster type is marked, and that vowel epenthesis is one of the repairs for the marked structure.

A second common reason for epenthesis is to bring a word up to a certain minimal size. Some languages require each lexical word to have a minimum of two moras or two syllables. Often, roots of smaller size are augmented with an epenthetic vowel, as shown in (3a) for Mono (Banda, spoken in Congo). The epenthetic vowels do not appear when the same roots appear in longer compounds, as in (3b).

(3) Mono vowel epenthesis (Olson 2003)

    a. /Úc/      → cÚc     ‘tooth’
       /bè/      → èbè     ‘liver’
       /mà/      → àmà     ‘mouth’
       /ndà/     → àndà    ‘house’

    b. /mà+ndà/  → màndà   ‘door’   (*àmààndà)
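Stated procedurally, the augmentation in (3) is simple: count the moras of the whole word, and if it falls short of the two-mora minimum, prefix a copy of the root's first vowel. The sketch below is a schematic illustration of this logic, not Olson's analysis; it assumes one mora per vowel and ignores tone, and the function names and plain-ASCII forms are mine.

```python
VOWELS = set("aeiou")

def mora_count(word):
    """Count moras, assuming (for this sketch) one mora per vowel."""
    return sum(1 for seg in word if seg in VOWELS)

def augment(word):
    """Prefix a copy of the first vowel if the word is under two moras."""
    if mora_count(word) >= 2:
        return word                      # minimal size met: no epenthesis
    first_vowel = next(seg for seg in word if seg in VOWELS)
    return first_vowel + word

# Tone omitted; cf. (3): /be/ -> ebe 'liver', /nda/ -> anda 'house'
print(augment("be"))          # ebe
print(augment("nda"))         # anda
print(augment("ma" + "nda"))  # manda: the compound already has two moras
```

Running the same function on the compound shows why (3b) lacks the epenthetic vowel: the minimality requirement is evaluated over the whole word, not the root.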

Metrical structure above the word level can also affect epenthesis. In Galician, vowels are optionally added at the end of an intonational phrase (Martínez-Gil 1997). This is illustrated in (4), where the word pan ‘bread’ can be pronounced with final [i] only if it directly precedes a prosodic break (a–c), not within an intonational phrase (d).

(4) Epenthesis at intonational phrase boundaries in Galician (Martínez-Gil 1997)

    a. Ela vai trael-o pan (~ pan[i]).
       ‘She’s going to bring the bread.’
    b. O pan (~ pan[i]), fixo-no onte.
       ‘(As for) the bread, (s)he made it yesterday.’
    c. Dille que traia pan (~ pan[i]), non viño.
       ‘Tell him/her to bring bread, not wine.’
    d. Ela vai trae-lo pan (*pan[i]) que comprou.
       ‘She’s going to bring the bread that she bought.’

This epenthesis occurs only with words whose final syllable is stressed: words like [ˈbo] ‘good’ and [ka.ˈfe] ‘coffee’ have the variants [ˈbo.i] and [ka.ˈfe.i], but words with non-final stress like [ˈla.pis] ‘pencil’ cannot be pronounced *[ˈla.pis.i]. Martínez-Gil proposes that the function of this epenthesis is to create a well-formed bimoraic trochee at the edge of each intonational phrase. A similar pattern occurs with optional [ə]-insertion in Parisian French (Fagyal 2000).

A different aspect of phrasal metrical structure affects epenthesis in Dutch. As shown in (5), Dutch has optional schwa epenthesis in coda clusters that consist of a liquid followed by a non-coronal consonant, as well as in coda /rn/.

(5) Dutch [ə]-epenthesis (Booij 1995)

    tʏləp    ~  tʏlp     ‘tulip’
    heləp    ~  help     ‘help’
    herəfst  ~  herfst   ‘autumn’
    kɑləm    ~  kɑlm     ‘quiet’

Kuijpers and van Donselaar (1997) find that speakers are more likely to insert the schwa if this will create a rhythmic alternation of stressed and unstressed vowels. Epenthesizing a schwa in /tʏlp/ changes the word from a single stressed syllable (ˈσ) to a stressed–unstressed sequence (ˈσσ) (see also chapter 40: the foot). This happens significantly more often when the first syllable of the following word is stressed than when it is unstressed, as shown in (6).

(6) Effects of sentence rhythm on epenthesis in monosyllabic words

    context      [ə]-epenthesis
    σσ __ ˈσ     50%              [ˈtʏlp] and [ˈtʏləp] equally preferred
    ˈσσ __ σ     35%              [ˈtʏlp] preferred over [ˈtʏləp]

Metrical structure above the word level has only gradient effects on vowel epenthesis; there do not seem to be cases of obligatory vowel epenthesis for rhythmic purposes, aside from the minimal word requirement discussed above. Perhaps this is because phrase-level metrical structures themselves tend to show much optionality.

While most analyses of vowel epenthesis focus on structural motivations, there is a little research examining the effects of epenthesis on perception. Van Donselaar et al. (1999) present evidence that vowel epenthesis in Dutch enhances the perceptibility of the consonants adjacent to the epenthetic vowel, particularly the preceding liquid. In lexical decision and phoneme identification tasks, subjects react faster to forms with epenthesis, like [tʏləp], than to forms without epenthesis, like [tʏlp], even though the form without epenthesis is more canonical and closer to the spelling. The authors suggest that speakers epenthesize the vowels to help the listener.

Finally, there are some cases where epenthetic vowels (or, at least, vowels widely described as epenthetic) have no apparent function in terms of phonotactics, metrics, or any other structural requirements. This is seen in Scots Gaelic, where epenthetic copy vowels historically arose in sonorant–obstruent coda clusters following short stressed vowels, as in (7). These vowels are widely analyzed as being still epenthetic today. As discussed further in §5, these vowels are phonetically marked by a special pitch and duration pattern, and they have a number of distinguishing phonological characteristics. Speakers are reported to consider these VRVC sequences monosyllabic, in contrast to other VRVC sequences.

(7) Scots Gaelic (Borgstrøm 1937, 1940; Oftedal 1956)

    ʃalak      ‘hunting’
    kʰenːjep   ‘hemp’

Interestingly, there are many words where one of the consonants that originally triggered the epenthetic vowel has deleted historically, yet the epenthetic vowel has remained – and retained its unique phonetic and phonological characteristics. In the words in (8), the underlined vowel is one that sounds like an epenthetic vowel in terms of pitch and duration, yet synchronically, there is no consonant cluster present to trigger epenthesis. The epenthetic vowel now precedes a word boundary or another vowel, and hence plays no role in terms of improving phonotactics. In fact, it often creates a V.V sequence, which is cross-linguistically dispreferred.

(8) Unpredictable vowel epenthesis in Scots Gaelic

    mara.i     marbhaidh     ‘will kill’
    dQrʲi      duirgh        ‘fishing lines’
    enːje.i    aithnichidh   ‘will recognize’

There are many possible interpretations of such facts. One theory might be that the triggering consonants are present underlyingly and removed through a separate process; another theory is that vowels originally introduced through epenthesis have been reanalyzed as something else (see Hall 2003 for an argument that all “epenthetic” vowels in Scots Gaelic actually reflect a diphthong-like structure in which a vowel and sonorant are phonologically adjoined, and where their articulations overlap so that the same vowel is heard in two pieces). While cases like Scots Gaelic are unusual, they are a reminder that some vowel epenthesis patterns do not seem to have clear structural motivations.

3 What determines the location of an epenthetic vowel?

When vowel epenthesis is used to break up a consonant cluster, there is often more than one location where the vowel could be placed to produce a phonotactically acceptable output. For example, if a language has the syllable structure (C)V(C), hence disallowing CC clusters at the beginning of a word, an initial CCV could be broken up by putting a vowel before the consonants (VC.CV) or between the consonants (CV.CV). In a medial CCC cluster, the vowel could occur before the second or third consonant. The choice of epenthesis location is language-specific. Arabic dialects, for example, systematically differ in this regard. As shown in (9), “onset” dialects like Egyptian syllabify the second consonant as an onset, meaning that the epenthetic vowel follows the second consonant, while “coda” dialects like Iraqi syllabify the second consonant as a coda, meaning that the epenthetic vowel follows the first consonant (Broselow 1992; Kiparsky 2003; Watson 2007).

(9) Treatment of /CCC/ in Arabic dialects (Itô 1989)

    Cairene   /ʔul-t-l-u/   ʔul.ti.lu   ‘I said to him’
    Iraqi     /gil-t-l-a/   gi.lit.la   ‘I said to him’

Temiar (Mon-Khmer, Malaysia) has a much-studied pattern of epenthetic vowel placement in long consonant clusters. Temiar allows only CV and CVC syllables. Given an onset of three or four consonants, Temiar inserts epenthetic vowels to form a string of open syllables terminated by a closed syllable. The epenthetic vowel is a schwa in open syllables and [e] in closed syllables.

(10) Temiar syllabification (Itô 1989)

    /slɔg/     səlɔg      ‘sleep, marry (act perf)’
    /snlɔg/    senlɔg     ‘sleep, marry (act perf nominalized)’
    /snglɔg/   səneglɔg   ‘sleep, marry (act cont nominalized)’

Itô (1989: 241) argues that these patterns of vowel placement can be explained if syllabification is directional. Abstracting away from certain theoretical details, the insight is that languages like Temiar and Iraqi compute maximal syllables starting from the end of the word, while languages like Egyptian compute maximal syllables from the beginning of the word. A stray consonant that could be syllabified more than one way becomes the onset of a following syllable in right-to-left languages, but the coda of a preceding syllable in left-to-right languages, and the placement of the epenthetic vowel varies accordingly.

(11) Directionality in syllabification

    Left-to-right syllabification   Right-to-left syllabification
    Cairene /ʔultlu/                Iraqi /giltla/   Temiar /snglɔg/
    ʔul.                            .la              .lɔg
    ʔul.ti.                         .lit.la          .neg.lɔg
    ʔul.ti.lu                       gi.lit.la        sə.neg.lɔg
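The derivations in (11) can be simulated with a greedy parser that builds maximal CV(C) syllables in one direction and supplies an epenthetic nucleus whenever a syllable would otherwise lack a vowel. The sketch below is my own procedural rendering of the idea, not Itô's formalism: it assumes one-character segments, a bare CV(C) template, and Temiar's open/closed quality alternation as an optional parameter.

```python
VOWELS = set("aeiouɔə")

def is_v(seg):
    return seg in VOWELS

def parse_ltr(segs, epen="i"):
    """Left-to-right parse (Cairene-style): maximal CV(C) syllables from the left."""
    sylls, i, n = [], 0, len(segs)
    while i < n:
        onset = ""
        if not is_v(segs[i]):
            onset, i = segs[i], i + 1
        if i < n and is_v(segs[i]):
            nuc, i = segs[i], i + 1
        else:
            nuc = epen                    # no vowel available: epenthesize
        coda = ""
        # take a coda only if that does not rob the next syllable of its onset
        if i < n and not is_v(segs[i]) and (i + 1 == n or not is_v(segs[i + 1])):
            coda, i = segs[i], i + 1
        sylls.append(onset + nuc + coda)
    return ".".join(sylls)

def parse_rtl(segs, epen_open="ə", epen_closed="e"):
    """Right-to-left parse (Iraqi/Temiar-style), working backwards from the end."""
    sylls, j = [], len(segs) - 1
    while j >= 0:
        coda = ""
        if not is_v(segs[j]) and j > 0:   # a non-initial stranded C becomes a coda
            coda, j = segs[j], j - 1
        if j >= 0 and is_v(segs[j]):
            nuc, j = segs[j], j - 1
        else:
            nuc = epen_closed if coda else epen_open   # epenthesize
        if j >= 0 and not is_v(segs[j]):
            onset, j = segs[j], j - 1
        else:
            onset = ""
        sylls.append(onset + nuc + coda)
    return ".".join(reversed(sylls))

print(parse_ltr(list("ʔultlu")))            # ʔul.ti.lu  (cf. Cairene)
print(parse_rtl(list("giltla"), "i", "i"))  # gi.lit.la  (cf. Iraqi)
print(parse_rtl(list("snglɔg")))            # sə.neg.lɔg (cf. Temiar)
```

Feeding the same input string to the other parser reproduces the cross-dialectal difference in (9), which is exactly what the directionality parameter is meant to capture.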

While directional syllabification works well to explain epenthetic vowel placement in many languages, I will discuss in §8 some cases of loanword adaptation where directional syllabification cannot explain epenthetic vowel placement.

4 What determines the quality of an epenthetic vowel?

The quality of an epenthetic vowel may be determined in one of two ways: it is either a fixed, default quality (which may, of course, be subject to normal allophonic variation according to the language’s phonology), or else the quality is determined by some part of the phonological context. Lebanese Arabic is an example of a language with fixed-quality epenthetic vowels: the epenthetic vowel is always [i]. Different languages have different qualities for their epenthetic vowels, and some qualities are found more commonly than others. Epenthetic [i] and [ə] are especially frequent, but de Lacy (2006: 289) also lists examples of epenthetic [ɨ], [e], and [a]. It is rare for fixed-quality vowels to be [+round], but examples do occur in Quebec French (Martin 1998) and in the Austronesian languages Buol and Kambera (Rice 2008). (There are, of course, also many cases where a basically fixed-quality vowel becomes predictably rounded in some contexts through additional processes such as vowel harmony.)

In “copy vowel epenthesis,” the epenthetic vowel must have the same quality as a nearby vowel. In Welsh, for example, final CC clusters are broken up with a vowel that is a copy of the preceding vowel. The forms in the left column of (12) illustrate how the epenthetic vowel is absent when a suffix renders the CC cluster non-final.

(12) Copy vowel epenthesis in Welsh (Awbery 1984: 88)

    gwadne   gwaːdan   ‘soles, sole’
    kevne    keːven    ‘backs, back’
    pədri    puːdur    ‘to rot, rotten’
    ɔøri     oːøor     ‘to side, side’
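In procedural terms, the Welsh pattern is a search for the nearest vowel to the left of a word-final CC cluster, whose quality is then copied into the cluster. A minimal sketch of that search (my own schematization, with vowel length and the exact Welsh qualities ignored):

```python
VOWELS = set("aeiouəɔ")

def copy_epenthesis(word):
    """Break a word-final CC cluster with a copy of the nearest preceding vowel."""
    if len(word) >= 3 and word[-1] not in VOWELS and word[-2] not in VOWELS:
        preceding = next(seg for seg in reversed(word[:-2]) if seg in VOWELS)
        return word[:-1] + preceding + word[-1]
    return word

print(copy_epenthesis("kevn"))   # keven: copies the e (cf. keːven ‘back’)
print(copy_epenthesis("kevne"))  # kevne: the suffixed form has no final cluster
```

Because the suffixed forms in the left column of (12) have no word-final cluster, the condition never fires and no vowel is copied, matching the alternation shown above.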

The direction of copying varies by language; both right-to-left and left-to-right copying are well attested. In rare cases, the quality may relate to more than one nearby segment. In Scots Gaelic, the quality of the epenthetic vowel depends on both the preceding vowel and the preceding consonant. Sonorants in Scots Gaelic contrast for backness. When epenthesis occurs in a /VRC/ sequence where the vowel and sonorant disagree in backness, the epenthetic vowel shares the backness specification of the sonorant (Clements 1986; Ní Chiosáin 1995; Bosch and de Jong 1998; chapter 75: consonant–vowel place feature interactions).

(13) Incomplete vowel copy in Scots Gaelic (vowel transcription following Ní Chiosáin 1995)

    færak        ‘anger’
    inqxinːjə    ‘brain’
    bulʲik       ‘bellows’
    dqlʲikʲ      ‘sorry’
    mZrʲev       ‘dead’

There has been controversy over whether the grammatical mechanisms that allow epenthetic vowels to copy other vowels’ quality might be similar to the mechanisms involved in reduplication (where a morpheme copies its segmental content from other segments in the base word; chapter 100: reduplication). Kitto and de Lacy (1999) argue for a unified theory of the two processes, in which segments in reduplicants and epenthesized segments both have a “correspondence” relation with another segment elsewhere in the word. Kawahara (2007), however, points out two basic differences between these kinds of copying. First, epenthetic copy vowels always copy a vowel in an adjacent syllable, whereas reduplicants may skip adjacent syllables to copy more distant material. For example, in Nakanai (Oceanic; Johnston 1980), a vowel in a reduplicant copies the most sonorous vowel in the base, regardless of its location. Kawahara finds no cases of epenthetic vowels copying distant vowels in this manner. Secondly, copying in epenthetic vowels (especially in loanwords; see §8) is sometimes blocked when particular kinds of consonants intervene, but blocking effects like this are not found in reduplication, where copying can occur over any type of intervening segment. Kawahara proposes that long-distance, correspondence-based copying is available only for morphological operations like reduplication, and that copying of quality in epenthesis always reflects local feature spreading.

5 Do epenthetic vowels differ phonetically or psycholinguistically from lexical vowels?

5.1 Phonetic characteristics of epenthetic vowels

There is evidence that in some languages, epenthetic vowels differ articulatorily and acoustically from lexical vowels, and tests that probe speaker intuitions may also find differences. Since these phonetic or psycholinguistic differences may have implications for phonological questions, I will briefly review the evidence. As shown in (1), Lebanese Arabic optionally inserts an epenthetic vowel in certain CCC or CC# clusters (/mitl/ → [mitil] ‘like’). The epenthetic vowel is normally transcribed as [i], but Haddad (1984b: 61) impressionistically notes that “this representation is rather inadequate since an inserted vowel is more prone to suprasegmental features such as ‘guttural’ and ‘emphatic’ [pharyngealized] than an underlying vowel is.” An acoustic phonetic study by Gouskova and Hall (2009) finds that for some speakers, epenthetic “[i]” is significantly shorter in duration than a lexical [i], and has a lower second formant value. The low F2 indicates that the articulation is relatively back, so that a more appropriate transcription might be [ɨ].

Sometimes the phonetic differences involved in vowel epenthesis are reported to extend over a longer string of the word. The Siouan language Hocąk has epenthesis in certain CCV sequences, as in /kre/ → [kere] ‘depart returning’. Although no instrumental study has been done, Susman (1943) and Miner (1979) agree that CVCV sequences resulting from epenthesis are audibly shorter in duration than lexical CVCV. The duration difference appears to involve not only the epenthetic vowel, but also the lexical vowel next to it.

Another kind of phonetic difference is reported in Scots Gaelic, where, as shown in (7), epenthesis occurs in certain CC sequences following a short stressed vowel (/tarv/ → [tarav] ‘bull’). These epenthetic vowels are often longer than lexical vowels in the same position (Bosch and de Jong 1997). The pitch of the resulting CVCVC sequence is distinctive: although a normal CVCVC disyllable has a rise and fall in pitch, Ladefoged et al. (1998) show that epenthetic CVCVC has only a pitch rise, confirming Oftedal’s (1956) description. Speakers are reported to consider such sequences monosyllabic (Oftedal 1956: 29) or “nearly monosyllabic” (Borgstrøm 1940: 153).

Several studies couched in Articulatory Phonology have offered evidence that epenthetic schwa in English differs articulatorily from lexical schwa (see chapter 26: schwa). Davidson and Stone (2003) present an ultrasound study of English speakers pronouncing pseudo-Slavic words that began with consonant clusters that are illegal in English, such as /zgomu/. Subjects frequently inserted an audible epenthetic schwa, producing [zəgomu]. However, when the articulation of this schwa was compared to the lexical schwa of similar words like succumb [səkʌm], the tongue position differed significantly. Davidson and Stone suggest that the acoustic schwa does not correspond to a distinct articulatory gesture, but is essentially a transitional sound, the result of a low degree of overlap between the articulatory gestures comprising /z/ and /g/. Smorodinsky (2002) uses EMA to study the epenthetic schwas in English inflectional morphology, and reports differences (though not very robust ones) in tongue position between the epenthetic schwa in cheated [ˈtʃiɾəd] and the lexical schwa in cheetah’d [ˈtʃiɾəd]. Gick and Wilson (2006) give a related analysis of the schwa that many English speakers insert between a high tense vowel and a liquid, as in fire ([faɪr] ~ [faɪər]). They argue that the schwa sound is not an inserted phonological unit, but an incidental result of the tongue passing through a schwa-like configuration as it transitions between the opposing tongue root positions of the high front vowel and the liquid.

As yet, few examples of epenthetic vowels have been instrumentally studied, so it is not clear whether epenthetic vowels differ phonetically from lexical vowels in every language. There are plenty of cases where epenthetic vowels are impressionistically described as being acoustically identical to lexical vowels (e.g. Mohawk; Michelson 1989: 40, 48). It is also unknown whether the vowels’ phonetic nature correlates with any aspect of their phonological behavior (such as whether the vowel is obligatory or optional, or whether the vowel interacts opaquely with processes like stress assignment). This is likely a rich area for future research.

5.2 Speaker intuitions about epenthetic vowels

There are indications that speakers are not always conscious of epenthetic vowels in the same way as lexical vowels. One type of evidence comes from situations where speakers are asked to write their pronunciations phonetically. Pearce (2004: 19) asked speakers of Kera (East Chadic, spoken in Chad, with no tradition of writing) to choose between two possible spellings for words that are acoustically CVCVCV, where the middle vowel was analyzed as epenthetic. The speakers chose CVCCV spellings, suggesting that the middle vowel was not part of their conscious segmentation of the word. On the other hand, when I have asked Lebanese Arabic speakers to write colloquial pronunciations (which are not usually written, as orthography follows Classical Arabic), they do write in the epenthetic vowels. This suggests that speakers’ consciousness of epenthetic vowels may differ from language to language.


Van Donselaar et al. (1999) argue that in Dutch, where vowel epenthesis is optional ([tʏlp] ~ [tʏləp]), speakers treat the form without epenthesis as canonical. In an experiment, Dutch speakers were asked to perform different language-game-like reversals on monosyllables and disyllables: subjects were to reverse monosyllables segment by segment, changing [tap] to [pat], and reverse disyllables syllable by syllable, changing [hotel] to [telho]. Over 90 percent of words with vowel epenthesis were treated like monosyllables, so that [tʏləp] ‘tulip’ changed to [plʏt] rather than [ləptʏ]. The authors suggest that speakers have a unitary representation for the forms with and without epenthesis. It might be objected, however, that the experiment is contaminated by orthographic differences between lexical schwa, which is written, and epenthetic schwa, which is not. Another objection, raised by a reviewer, is that [ləptʏ] is not a possible word in Dutch, due to its final lax vowel.

Speakers may be particularly likely to lack awareness of the kind of weak epenthetic vowels often called “excrescent” (discussed further in §6). For example, Harms (1976) reports that Finnish speakers are unaware of an epenthetic schwa that is easily perceived by some non-native speakers:

    [meləkein] (melkein) ‘almost’ has essentially the same vowel qualities ([e, ə, ei]) and relative durations as the English verb delegate – [deləgeit]. From a descriptive phonetic point of view, the Finnish [epenthetic] schwa and the English reduced-vowel schwa represent very nearly identical classes of vowel sounds; i.e., they vary over a wide central area, with their range of variation conditioned by the preceding and following segments. But here the similarity ends. The schwa in the above Finnish forms is purely transitional in nature. Speakers perceive these forms as containing only two syllables, not three.

Few studies of vowel epenthesis have probed the intuitions of native speakers about the vowels, and it would be useful to have data from more languages on how speakers perceive epenthetic vowels, including how the vowels are written, treated in metrics, and treated in language games (see chapter 96: experimental approaches in theoretical phonology).

6 What distinguishes an “excrescent” vowel?

A number of proposals distinguish a special class of epenthetic vowels often called “excrescent” (Levin 1987) or “intrusive” (Hall 2006). These terms are usually used for vowels that are noticeably phonetically weaker than other vowels. Typically, excrescent vowels are short in duration and centralized in quality. The excrescent vowel may have a quality not present in the language’s lexical vowel system; for example, excrescent schwa may exist in a language that otherwise has no schwas. Excrescent vowels are systematically ignored by other phonological processes. The commonly expressed insight is that excrescent vowels are a kind of phonetic effect, likely a transition between consonant articulations.

A classic example of excrescent vowels is the short vowels that occur in consonant clusters in Piro (Arawakan), as shown in (14). Matteson and Pike (1958) note that these vowels differ from the short phonemic vowels of Piro (/i e o a ɨ/) in several ways. The excrescent vowels are subject to extensive free variation. Their quality can be highly variable, as in /hwɨ/ below, where the excrescent vowel has been recorded with five different qualities. Also, in some cases the presence of the excrescent vowel varies with “syllabification” of a consonant, as in /whene/ below. The vowels cannot bear any kind of stress, and they are of much shorter duration than lexical vowels. In terms of timing, the authors report that “in the rhythm of a phrase, a consonant plus the transition vocoid corresponds in timing to a single consonant rather than to a sequence of consonant plus vowel.” The excrescent vowels fail to block a pattern of co-articulatory rounding that is blocked by other vowels. In Piro orthography, the excrescent vowels are not written.

(14) Excrescent vowels in Piro (Matteson and Pike 1958)

    /kwalɨ/    kᵊwalɨ ~ kᵒwalɨ                             ‘platform’
    /tkatʃi/   tᵊkatʃi                                     ‘sun’
    /ʃjo/      ʃⁱjo                                        ‘bat’
    /hwɨ/      hᵃwɨ ~ hᵒwɨ ~ hᵊwɨ ~ hᶤwɨ ~ hᵘwɨ            ‘O.K.’
    /whene/    w̩hene ~ wᵊhene ~ wᵒhene ~ wᶤhene ~ wᵘhene   ‘child’

Based on the vowels’ exceptional phonological and phonetic characteristics, the authors analyze them as “non-phonemic transitional vocoids.” Vowels with similar characteristics occur in Finnish (Harms 1976), Sanskrit (Allen 1953: 173), South Hamburg German (Jannedy 1994), and other languages listed in Hall (2006).

Recently, a number of authors have formalized similar ideas about excrescent vowels in an Articulatory Phonology framework. Articulatory Phonology (Browman and Goldstein 1986, 1992) treats abstract articulatory gestures as primitives, and allows the grammar to regulate the timing of articulatory gestures with respect to one another. Vowel-like percepts can be created when two consonant gestures are phased to have a low degree of overlap with one another, leaving a period between the consonant constrictions where the vocal tract is relatively open (Browman and Goldstein 1992). See Gafos (2002) and Hall (2006) for arguments that excrescent vowels lack an independent gesture, and hence are not present as phonological units in the way that lexical vowels (and most epenthetic vowels) are.

7 How does vowel epenthesis interact with other processes?

One of the most interesting characteristics of epenthetic vowels is their tendency to interact opaquely with other phonological processes. It is common for phonological patterns to treat epenthetic vowels as if they were not present. This observation has many theoretical interpretations. Some argue that epenthetic vowels are representationally defective: Piggott (1995), for example, argues that some epenthetic vowels are weightless, lacking a mora. Other approaches handle opaque interactions through rule ordering, with the epenthetic vowels being inserted late in the derivation. Here, I will focus on the empirical issues to be explained, with examples of the kinds of interactions that have been reported.


7.1 Metrical patterns

Syllables whose nuclei are epenthetic vowels frequently fail to count as syllables in patterns such as stress assignment, minimal word requirements, and the conditioning of open syllable lengthening. This section gives an example of epenthesis interacting with each of these processes. In Lebanese Arabic, a closed penult is stressed when it contains a lexical vowel, as in (15a), but not when it contains an epenthetic vowel, as in (15b) (see also chapter 124: word stress in arabic).

(15) Stress–epenthesis interaction in Lebanese Arabic

    a. /fihim-na/   fi.ˈhim.na   ‘he understood us’
    b. /fihm-na/    ˈfi.him.na   ‘our understanding’

In words without a closed penult, stress normally falls on the final syllable if it is superheavy, i.e. CVːC or CVCC, as in (16a), and on the antepenult otherwise, as in (16b). Again, vowel epenthesis disrupts the pattern. If an epenthetic vowel is inserted into a final CC cluster, breaking up what would otherwise be a final superheavy syllable, stress is assigned to the penult, as in (16c). This is the only case in which a light penult can be stressed.

(16) Lebanese Arabic (Haddad 1984a)

    a. /nazzal-t/   naz.ˈzalt    ‘I brought down’
    b. /katab-it/   ˈka.ta.bit   ‘she wrote’
    c. /katab-t/    ka.ˈta.bit   ‘I wrote’

For all of the patterns above, stress is simply assigned as if the epenthetic vowel were absent. The only exception to this generalization is an epenthetic vowel inserted into an underlying CCCC sequence. In this case alone, the epenthetic vowel is treated the same as a lexical vowel for stress. In (17), the epenthetic vowel falls in a closed penult, and is stressed, as is normal for a heavy penult (cf. (15a)).

(17) /katab-t-l-ha/   ka.tab.ˈtil.ha   ‘I wrote to her’
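A rough way to make “stress is assigned as if the epenthetic vowel were absent” concrete is to strip epenthetic vowels out, compute stress over what remains, and then locate the winning lexical vowel in the surface form. The sketch below is my own procedural paraphrase of the descriptive rules above (final superheavy, else heavy penult, else antepenult), not Haddad's analysis: weight is estimated from the number of consonants in each vowel's rime, vowel length is ignored, and, as the text notes, exactly the CCCC case in (17) is where this invisibility recipe breaks down.

```python
VOWELS = set("aiu")   # Lebanese short vowels, for this sketch

def stress_position(stripped):
    """Index, in the vowel sequence, of the stressed vowel of a string
    from which epenthetic vowels have already been removed."""
    slots = [i for i, seg in enumerate(stripped) if seg in VOWELS]
    codas = []
    for k, i in enumerate(slots):
        end = slots[k + 1] if k + 1 < len(slots) else len(stripped)
        gap = end - i - 1                    # consonants before the next vowel
        final = (k + 1 == len(slots))
        codas.append(gap if final else max(0, gap - 1))  # next onset takes one C
    if codas[-1] >= 2:                       # final superheavy (CVCC)
        return len(codas) - 1
    if len(codas) >= 2 and codas[-2] >= 1:   # heavy penult (CVC)
        return len(codas) - 2
    return max(0, len(codas) - 3)            # otherwise the antepenult

print(stress_position("fihimna"))  # 1 -> fi.ˈhim.na  (heavy lexical penult)
print(stress_position("fihmna"))   # 0 -> ˈfi.him.na  (epenthetic i stripped out)
print(stress_position("katabit"))  # 0 -> ˈka.ta.bit  (lexical suffix -it)
print(stress_position("katabt"))   # 1 -> ka.ˈta.bit  (as-if-final-superheavy)
```

The contrast between the last two calls shows the opacity: the surface strings ka.ta.bit and ka.ta.bit are segmentally identical, but stress differs because only one of them contains an epenthetic vowel to be stripped.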

Such patterns, where epenthetic vowels are visible to stress under some circumstances but invisible in others, also occur in Mohawk (Michelson 1989) and Selayarese (Broselow 1999). In languages that require words to have a minimal size, epenthetic vowels may not count in determining this size. Mohawk, for example, requires each lexical word to contain two syllables, as in (18a). A verbal stem containing only one syllable is augmented with an epenthetic [i], as in (18b). Mohawk also inserts an epenthetic [e] after the first consonant of certain CC and CCC clusters. This [e] counts for metrical purposes if it is in a closed syllable, but not if it is in an open syllable. Hence, a two-syllable word containing an open epenthetic syllable, as in (18c), is augmented with epenthetic [i] as well. However, a two-syllable word containing epenthetic [e] in a closed syllable is not augmented, as seen in (18d).

(18) Minimal word augmentation in Mohawk (Michelson 1989)

    a. /k-hninu-s/   ˈkhniː.nus   ‘I buy’
    b. /k-jʌ-s/      ˈik.jʌs      ‘I put it’
    c. /s-riht/      ˈi.se.riht   ‘cook!’
    d. /s-rho-s/     ˈser.hos     ‘you coat it with something’

This interaction highlights another interesting problem: the fact that there may be multiple vowel epenthesis processes in a single language, which differ in whether they are metrically “visible.” Epenthetic [e] in Mohawk also shows another type of metrical invisibility: it fails to trigger a rule by which stressed vowels lengthen in an open syllable. In (19a) we see this rule apply normally. In (19b), it appears that the stressed [i] is in an open syllable, since the following epenthetic vowel has syllabified [r] as an onset; yet the stressed syllable fails to lengthen.

(19) Stressed vowel lengthening in Mohawk

    a. /wak-ashet-u/   wa.kas.ˈheː.tu   ‘I have counted it’
    b. /s-riht/        ˈi.se.riht       ‘cook!’

In sum, although epenthetic vowels are usually added in order to syllabify stray consonants, the syllables they form do not necessarily count as syllables for other aspects of the phonology.

7.2 Segmental processes

In some cases, epenthetic vowels fail to condition other segmental processes, such as deletion or allophonic variation, in the same way that lexical vowels condition them. In Dutch, for example, underlying /ən/ is optionally reduced to [ə], as in (20a). Yet when schwa epenthesis occurs before /n/, as in (20b), the epenthetic schwa does not condition deletion of the following [n]. Some speakers thus eliminate underlying /ən/, yet create surface [ən] through epenthesis.

(20) Dutch [n]-deletion (Booij 1995; Hall 2006)

    a. regen   /reɣən/  →  reɣən ~ reɣə             ‘rain’
       horen   /horən/  →  horən ~ horə             ‘to hear’
    b. hoorn   /horn/   →  horn ~ horən   (*horə)   ‘horn’

Similarly, Herzallah (1990) describes a Palestinian Arabic dialect in which a pharyngealized [rˤ] loses its pharyngealization before lexical [i], but not before epenthetic [i] (chapter 25: pharyngeals).

Just as different epenthetic vowels within a single language may show different metrical behavior, they may also differ in whether they condition other segmental processes. For example, in Tiberian Hebrew, one kind of epenthetic vowel does condition spirantization in following stops, and another does not. Normally a stop becomes a fricative after vowels, as in (21a). One type of epenthetic vowel, which splits up final CC clusters in non-derived words, also causes spirantization. In (21b), we see /b/ spirantize to [β] following the epenthetic [e]. But another epenthetic vowel, which occurs in final clusters of a guttural and a following consonant, does not condition spirantization. In (21c), the /t/ following the epenthetic vowel is realized as [t] rather than [θ].

(21) Tiberian Hebrew spirantization (McCarthy 1979)

    a. /katab+t/   →  kaθaβt    ‘you (fem sg) wrote’
    b. /kelb/      →  keleβ     ‘dog’
    c. /ʃalaħ+t/   →  ʃalaħat   ‘you (fem sg) sent’

Thus, there is variation both within and between languages in how vowel epenthesis interacts with other processes.

8 How does epenthesis happen in loanwords?

Typological studies of vowel epenthesis frequently consider loanword data side by side with cases of epenthesis within languages, under the assumption that similar phonological mechanisms produce both (e.g. Broselow 1982; Kitto and de Lacy 1999; among many others). Since vowel epenthesis is particularly common in loanwords, loanword data have played a large role in theorizing on epenthesis, probably more than most other phenomena. However, I would like to argue that conflating loanword and native-language epenthesis is a serious methodological mistake. A growing body of evidence suggests that epenthesis in loanwords differs from epenthesis within languages in its formal characteristics, and may have different causes and functions. For this reason, facts about loanword epenthesis are reviewed here separately from within-language epenthesis, to highlight some likely empirical differences between the two kinds of epenthesis. I will also include some references to epenthesis in “interlanguage,” which is the language produced by second language learners. While interlanguage and loanwords are not the same thing, they are related in the sense of both involving language contact, and many loanwords may arise historically from interlanguage forms (see also chapter 95: loanword phonology).

8.1 Perceptual origin?

There is considerable debate over whether epenthesis in loanwords happens through perceptual errors by speakers of the borrowing language. Traditionally, it was assumed that a speaker of the borrowing language (likely a bilingual) would hear a foreign word, construct some reasonably accurate representation of the way the word was pronounced in the foreign language, and then alter that representation to fit the phonotactics of the borrower’s native language. But Peperkamp and Dupoux (2002) argue that the borrower is likely to perceive the foreign word incorrectly, and that these perceptual errors are the main source of phonological alterations in loanwords (see also chapter 98: speech perception and phonology and chapter 95: loanword phonology for further discussion). One piece of evidence for this view comes from Japanese, which inserts an epenthetic vowel to remove illegal codas in loanwords (only a nasal or the first half of a geminate can be a coda). The epenthetic vowel is [o] after [d] and [t], and [ɯ] elsewhere.

(22) Japanese loanwords from English (Itô and Mester 1995)

    faito        ‘fight’
    fesɯtibarɯ   ‘festival’
    sɯfiŋkɯsɯ    ‘sphinx’

Dupoux et al. (1999) argue that Japanese speakers actually believe they hear this [ɯ] in the pronunciation of foreign CC clusters. In a perception experiment, Japanese and French listeners were asked to judge whether a middle vowel was present in nonsense words like [ebzo] and [ebɯzo]. For words like [ebzo], where no middle vowel was acoustically present, most Japanese listeners reported hearing a vowel, while most French listeners did not. Japanese listeners also had great difficulty in discriminating between tokens like [ebzo] and [ebɯzo] in an ABX discrimination test. Dupoux et al. point out that Japanese [ɯ] is frequently devoiced and shortened, and shows considerable allophonic variation. Knowing this may make listeners likely to fill in an illusory [ɯ] when they hear consonants with no vowel between them.

The idea that epenthesis in loanwords has a perceptual origin is controversial; see Rose and Demuth (2006), Smith (2006), and Uffmann (2007) for arguments that perceptual factors cannot account for all facets of loanword adaptation. Nevertheless, we will see below several additional arguments that perceptual factors play a special role in loanword vowel epenthesis.

8.2 Function of vowel epenthesis

For within-language phonology, epenthesis usually occurs to repair an input that does not meet the language’s phonotactic or metrical requirements. In most cases, epenthesis in loanwords can be analyzed as having the same function, as in the Japanese examples in (22). Yet surprisingly, there is at least one case where speakers add epenthetic vowels to loanwords that would have been phonotactically permissible in the borrowing language without the vowel. Korean (Kang 2003) frequently adds a final epenthetic vowel to English loanwords ending in a stop, as in the examples below.

(23) English loanwords in Korean (Kang 2003: 223)

    gag    →   kækɨ
    pat    →   pʰætɨ
    tube   →   tʰjupɨ

There is no phonotactic need to add vowels to these words. The consonants /k t p/ are among the acceptable codas of Korean, occurring in native words such as [kæk] ‘guest’, so epenthesis cannot be explained as a means of syllabifying stray consonants. Kang argues that the purpose of the vowel is to maximize perceptual similarity between the English word and the Korean word. English has more release of final stops than Korean does, and Kang claims that to Korean listeners, the release of a final stop of an English word sounds vocalic. She shows that final vowel insertion in loanwords from English is most common in precisely the environments where final stop release is most common in English, such as after voiced stops and when the preceding vowel is tense. Thus, epenthesis may be a means of preserving phonetic details of the source language, rather than a repair.

8.3 Relation to native phonology

The epenthetic vowel used in loanwords often differs from any vowel epenthesis process that exists in the native phonology, and epenthesis may be used in loanwords in contexts where other repairs would be used in the native phonology. In Japanese, for example, consonant clusters that arise through morpheme concatenation in the native language are repaired through deletion of one of the consonants, as shown in (24). Yet consonant clusters in loanwords are repaired with vowel epenthesis, as in (22).

(24) Deletion in Japanese native phonology (McCawley 1968; Smith 2006)

               non-past /-ɾɯ/         causative /-sase/
    ‘read’     /jom-ɾɯ/   jo.mɯ       /jom-sase/   jo.ma.se
    ‘fly’      /tob-ɾɯ/   to.bɯ       /tob-sase/   to.ba.se

Karimi (1987) reports a similar case for Farsi: CCC clusters are subject to consonant deletion in the native phonology, but repaired through epenthesis in loanwords and interlanguage. In general, vowel epenthesis seems to be a heavily favored repair type in loan adaptation, more than in native phonologies. Uffmann (2007) surveys case studies of loanword adaptation and concludes that consonant deletion is a marginal phenomenon compared to epenthesis. Paradis and LaCharité (1997) invoke the “Preservation Principle,” which states that segmental material is maximally preserved (see also chapter 76: structure preservation: the resilience of distinctive information). Hence, adding extra segments is less undesirable than deleting segments from the source word.

It is possible that the prevalence of vowel epenthesis in loanwords is related to its prevalence in interlanguage. Jenkins (2000) observes, based on a corpus of conversations between non-native speakers of English, that more misunderstandings are caused by deletion of consonants than by addition of vowels. If bilinguals are aware of this fact and therefore favor vowel epenthesis in their interlanguage pronunciations, then any loanwords based on these interlanguage forms would also tend to favor vowel epenthesis.

8.4 Quality

As in native language phonology, epenthetic vowels in loanwords may have a default quality or copy their quality from nearby consonants or vowels. However, the patterns of vowel quality in loanwords are often strikingly complex in ways that are not common (and perhaps not attested at all) in native language epenthesis. Consider the patterns of epenthetic vowel place in words borrowed from English or Afrikaans into the southern Bantu language Sotho, as described in Rose and Demuth (2006). This study examines only the front–back dimension of epenthetic vowel place. In word-initial CC clusters, the epenthetic vowel is back when it follows a labial (25a), and front when it follows a coronal (25b). When the initial C is velar, the epenthetic vowel copies the place of the following vowel, as in (25c). In word-medial or word-final /CC/ clusters, the vowel usually copies its place from the preceding vowel, as in (25d). (A few further sub-patterns are ignored here. Only the epenthetic vowels discussed in the text are underlined.)

(25) Epenthesis in loanwords in Sotho (Rose and Demuth 2006)

         source word    borrowed form
    a.   blɪk           bʊleke           ‘tin can, dish’
    b.   tɹuwn          tɪronɪ           ‘throne’
    c.   xrɑːf          kʰɑrɑfu          ‘spade’
    d.   hibruw         heberu           ‘Hebrew’
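The four sub-patterns in (25) can be restated as an ordered decision rule for the front–back value of the inserted vowel. The sketch below is a schematic restatement of Rose and Demuth's descriptive generalizations, not their analysis: the segment classes are deliberately coarse, and the function name and feature sets are mine.

```python
LABIALS  = set("pbfvmw")
CORONALS = set("tdsznlr")
VELARS   = set("kgx")
BACK_V   = set("uoɑʊ")   # coarse two-way split; everything else counts as front

def epenthetic_backness(c_before, v_after=None, v_before=None, initial=False):
    """Choose 'front' or 'back' for the epenthetic vowel, following (25a-d)."""
    if initial and c_before in LABIALS:
        return "back"                                     # (25a) after a labial
    if initial and c_before in CORONALS:
        return "front"                                    # (25b) after a coronal
    if initial and c_before in VELARS and v_after:
        return "back" if v_after in BACK_V else "front"   # (25c) copy forwards
    if v_before:
        return "back" if v_before in BACK_V else "front"  # (25d) copy backwards
    return "front"                                        # fallback default

print(epenthetic_backness("b", v_after="e", initial=True))   # back  (cf. 25a)
print(epenthetic_backness("t", v_after="o", initial=True))   # front (cf. 25b)
print(epenthetic_backness("x", v_after="ɑ", initial=True))   # back  (cf. 25c)
print(epenthetic_backness("b", v_before="e"))                # front (cf. 25d)
```

Even as a schematic rule system, the nesting of consonant-sensitive defaults inside vowel-copying contexts illustrates the point made below: this degree of contextual complexity is rarely, if ever, reported for native-language epenthesis.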

Sotho also shows epenthesis for minimal word purposes within the native vocabulary, but in this case, the epenthetic vowel is always [ɪ], regardless of context. Sotho is not the only case where vowel epenthesis in loanwords follows such a complex pattern; Uffmann (2007) analyzes similarly complicated rules for epenthetic vowel quality in Shona, Sranan, Nyarwanda, and Samoan, each of which shows an interplay between copying the features of consonants, copying the features of vowels, and insertion of default features. An informal survey of the literature gives the impression that such complex effects of phonological context on vowel quality are more or less confined to loanword epenthesis. Within languages, it is far more common to find epenthetic vowels of default quality, as in Arabic, or relatively simple kinds of copying, such as always copying in one direction, as in the Welsh pattern in (12). An extensive typological comparison of the formal qualities of vowel epenthesis in loanwords and non-loanwords would be a valuable contribution to understanding the difference between them.

Another important difference between loanword and native language epenthesis is that epenthesis in loanwords is often not fully predictable. As we saw in the Korean examples in (23), epenthesis in a given location may be optional, and in languages like Shona and Sotho, “rules” for epenthetic vowel quality in loanwords are not exceptionless. Uffmann (2007: 9–13) argues that loanword epenthesis needs to be studied by looking for statistical patterns in large corpora of loanwords, because incorrect generalizations are easily reached from impressionistic or limited data. Both the complexity and the unpredictability of some loanword epenthesis patterns may indicate that these patterns have not been internalized by speakers as true phonological “rules” – again, an argument for not considering them side by side with language-internal epenthesis.

8.5 Vowel placement

The problem of where to place an epenthetic vowel arises in loanword phonology in the same way as in native language phonology: initial CC clusters, or medial CCC clusters, can potentially be split in two ways. In some cases, epenthesis location in loanwords or interlanguage appears to follow the same placement pattern as the borrowing language shows in its native epenthesis patterns. For example, we saw in (11) that Iraqi and Egyptian Arabic differ in how they break up word-medial CCC clusters in the native phonology: Iraqi puts the epenthetic vowel after the first consonant, and Egyptian after the second. These dialects differ in exactly the same way in how they epenthesize into CCC clusters in interlanguage phonology, as seen in (26). This pattern can be explained by the same mechanism, directionality of syllabification, that is commonly used to explain vowel placement in the native phonologies of these languages.

(26) Iraqi vs. Egyptian epenthesis in CCC clusters (Broselow 1987)

                                         Iraqi          Egyptian
    native language   /kitab+t+l+V/  →   ki.ta.bit.la   ki.tab.ti.lu
    interlanguage     children       →   chilidren      childiren

Yet in other cases the placement of the epenthetic vowel is not explainable as a transfer of native language epenthesis rules, and cannot be analyzed through directional syllabification alone. Fleischhacker (2001) presents a typological study of epenthesis in initial CC(C) clusters in loanwords and interlanguage, focusing on the question of whether the vowel precedes the cluster (VCC) or breaks up the cluster (CVC). She shows that in many languages, the placement of the vowel depends on what kind of consonants are in the cluster, as in the Egyptian Arabic examples in (27). In word-initial clusters consisting of a voiceless sibilant plus a stop, it is cross-linguistically more common to insert a vowel before the first consonant, as in (27a), while in word-initial clusters of an obstruent and a sonorant, it is more common to place the vowel between the consonants, as in (27b).

(27) Egyptian Arabic epenthesis in interlanguage (Broselow 1987)

    a. study     →   istadi
       special   →   izbasjal
       ski       →   iski

    b. sweater   →   siwetar
       slide     →   silaid
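Fleischhacker's generalization amounts to a two-way placement rule keyed to the shape of the initial cluster: prothesis before sibilant+stop, anaptyxis inside obstruent+sonorant. A minimal sketch of that rule (my own restatement, not Fleischhacker's formalism; the classes are coarse and English spelling stands in for transcription):

```python
SIBILANTS = set("sz")
STOPS     = set("pbtdkg")
SONORANTS = set("lrmnwj")

def place_epenthetic(word, epen="i"):
    """Insert the epenthetic vowel before or inside a word-initial CC cluster."""
    c1, c2 = word[0], word[1]
    if c1 in SIBILANTS and c2 in STOPS:
        return epen + word                 # prothesis: ski -> iski
    if c2 in SONORANTS:
        return word[0] + epen + word[1:]   # anaptyxis: slide -> s-i-lide
    return word

print(place_epenthetic("ski"))      # iski
print(place_epenthetic("study"))    # istudy   (cf. istadi)
print(place_epenthetic("sweater"))  # siweater (cf. siwetar)
print(place_epenthetic("slide"))    # silide   (cf. silaid)
```

Note that no single direction of syllabification derives both branches of this rule, which is why the cluster-sensitive pattern resists the directional analysis in §3.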

Fleischhacker argues that the reason for this pattern is that epenthetic vowels are inserted where they will cause the least perceptual difference between the foreign word and the epenthesized adaptation (a theory which follows the P-map hypothesis of Steriade 2003). She presents an experiment in which English listeners were asked to judge auditory similarity between English words and modifications of those words with epenthetic vowels in different locations. Words beginning with sibilant–stop clusters, like spar, were judged more similar to versions with epenthesis before the cluster ([əspɑr]) than to versions with epenthesis within the cluster ([səpɑr]). Words beginning with obstruent–sonorant clusters, like flit, were judged more similar to versions with epenthesis within the cluster ([fəlɪt]) than to versions with epenthesis before the cluster ([əflɪt]). The results of the perception experiment thus match the cross-linguistic tendencies in epenthetic vowel placement, and add to the body of arguments that perceptual factors have a special role in loanword epenthesis.

9 Conclusion and suggested directions for future research

In the discussion above, I have tried to highlight some of the main empirical questions about vowel epenthesis, and to show that vowel epenthesis processes are highly heterogeneous. A better understanding of vowel epenthesis will require work on two dimensions. One is detailed case studies of individual languages, in particular studies that combine the traditional, structural description of vowel epenthesis with attention to the acoustics, articulation, and perception of the epenthetic vowels, and also probe speaker intuitions about the vowels. Epenthetic vowels in Dutch are probably currently the best-studied in this regard, and it would be useful to have similar experiments done with epenthetic vowels in other languages. It would be interesting to examine whether the phonetic nature of an epenthetic vowel (for example, whether it is acoustically identical to a lexical vowel) correlates with any aspect of its phonological behavior (for example, whether it is visible to other phonological processes in the same way that lexical vowels are).

The second area is typological work that looks for correlations between different characteristics of epenthetic vowels. Often, typological studies that focus on one variable, such as vowel quality, have lumped together vowel epenthesis processes that differ on other important parameters, such as whether the epenthesis occurs in native words or loanwords, whether the vowels are excrescent or not, whether they are morphologically conditioned, etc. However, it is possible that there may be relations between these variables. For example, it would be interesting to see more systematic comparisons of epenthesis in loanwords vs. native phonology, given the growing evidence that these processes may work differently.

REFERENCES

Abdul-Karim, Kamal. 1980. Aspects of the phonology of Lebanese Arabic. Ph.D. dissertation, University of Illinois at Urbana-Champaign.
Allen, W. Sidney. 1953. Phonetics in Ancient India. London: Oxford University Press.
Awbery, Gwenllian M. 1984. Phonotactic constraints in Welsh. In Martin Ball & Glyn E. Jones (eds.) Welsh phonology, 65–104. Cardiff: University of Wales Press.
Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press.
Borgstrøm, Carl H. 1937. The dialect of Barra in the Outer Hebrides. Norsk Tidsskrift for Sprogvidenskap 8. 71–242.
Borgstrøm, Carl H. 1940. A linguistic survey of the Gaelic dialects of Scotland, vol. 1: The dialects of the Outer Hebrides. Oslo: Aschehoug.
Bosch, Anna & Kenneth J. de Jong. 1997. The prosody of Barra Gaelic epenthetic vowels. Studies in the Linguistic Sciences 27(1). 2–15.
Bosch, Anna & Kenneth J. de Jong. 1998. Syllables and supersyllables: Evidence for low level phonological domains. Texas Linguistic Forum 41. 1–14.
Broselow, Ellen. 1982. On predicting the interaction of stress and epenthesis. Glossa 16. 115–131.
Broselow, Ellen. 1987. Non-obvious transfer: On predicting epenthesis errors. In Ioup & Weinberger (1987), 292–304.
Broselow, Ellen. 1992. Parametric variation in Arabic dialect phonology. In Ellen Broselow, Mushira Eid & John J. McCarthy (eds.) Perspectives on Arabic linguistics IV, 7–45. Amsterdam & Philadelphia: John Benjamins.


Broselow, Ellen. 1999. Stress, epenthesis, and segment transformation in Selayarese loans. Proceedings of the Annual Meeting, Berkeley Linguistics Society 25. 311–325.
Browman, Catherine P. & Louis Goldstein. 1986. Towards an articulatory phonology. Phonology Yearbook 3. 219–252.
Browman, Catherine P. & Louis Goldstein. 1992. “Targetless” schwa: An articulatory analysis. In Gerard J. Docherty & D. Robert Ladd (eds.) Papers in laboratory phonology II: Gesture, segment, prosody, 26–56. Cambridge: Cambridge University Press.
Clements, G. N. 1986. Syllabification and epenthesis in the Barra dialect of Gaelic. In Koen Bogers, Harry van der Hulst & Martin Mous (eds.) The phonological representation of suprasegmentals, 317–336. Dordrecht: Foris.
Côté, Marie-Hélène. 2000. Consonant cluster phonotactics: A perceptual approach. Ph.D. dissertation, MIT.
Davidson, Lisa & Maureen Stone. 2003. Epenthesis versus gestural mistiming in consonant cluster production: An ultrasound study. Proceedings of the West Coast Conference on Formal Linguistics 22. 165–178.
de Lacy, Paul. 2006. Markedness: Reduction and preservation in phonology. Cambridge: Cambridge University Press.
Donselaar, Wilma van, Cecile Kuijpers & Anne Cutler. 1999. Facilitatory effects of vowel epenthesis on word processing in Dutch. Journal of Memory and Language 41. 59–77.
Dupoux, Emmanuel, Kazuhiko Kakehi, Yuki Hirose, Christophe Pallier & Jacques Mehler. 1999. Epenthetic vowels in Japanese: A perceptual illusion? Journal of Experimental Psychology: Human Perception and Performance 25. 1568–1578.
Fagyal, Zsuzsanna. 2000. Le retour du e final en français parisien: Changement phonétique conditionné par la prosodie. In Actes du XIIe Congrès International de Linguistique et Philologie Romane, vol. 3: Vivacité et diversité de la variation linguistique, 151–160. Tübingen: Narr.
Fleischhacker, Heidi. 2001. Cluster-dependent epenthesis asymmetries. UCLA Working Papers in Linguistics 7, Papers in Phonology 5. 71–116.
Gafos, Adamantios I. 2002. A grammar of gestural coordination. Natural Language and Linguistic Theory 20. 269–337.
Gick, Bryan & Ian Wilson. 2006. Excrescent schwa and vowel laxing: Crosslinguistic responses to conflicting articulatory targets. In Louis M. Goldstein, Douglas Whalen & Catherine T. Best (eds.) Laboratory phonology 8, 635–659. Berlin & New York: Mouton de Gruyter.
Gouskova, Maria & Nancy Hall. 2009. Acoustics of epenthetic vowels in Lebanese Arabic. In Steve Parker (ed.) Phonological argumentation: Essays on evidence and motivation, 203–225. London: Equinox.
Haddad, Ghassan. 1984a. Problems and issues in the phonology of Lebanese Arabic. Ph.D. dissertation, University of Illinois at Urbana-Champaign.
Haddad, Ghassan. 1984b. Epenthesis and sonority in Lebanese Arabic. Studies in the Linguistic Sciences 14. 57–88.
Hall, Nancy. 2003. Gestures and segments: Vowel intrusion as overlap. Ph.D. dissertation, University of Massachusetts, Amherst.
Hall, Nancy. 2006. Cross-linguistic patterns of vowel intrusion. Phonology 23. 387–429.
Harms, Robert T. 1976. The segmentalization of Finnish “nonrules.” Texas Linguistic Forum 5. 73–88.
Herzallah, Rukayyah. 1990. Aspects of Palestinian Arabic phonology: A non-linear approach. Ph.D. dissertation, Cornell University.
Ioup, Georgette & Steven H. Weinberger (eds.) 1987. Interlanguage phonology: The acquisition of a second language sound system. Cambridge: Newbury House.
Itô, Junko. 1989. A prosodic theory of epenthesis. Natural Language and Linguistic Theory 7. 217–259.


Itô, Junko & Armin Mester. 1995. Japanese phonology. In John A. Goldsmith (ed.) The handbook of phonological theory, 817–838. Cambridge, MA & Oxford: Blackwell.
Jannedy, Stefanie. 1994. Rate effects on German unstressed syllables. OSU Working Papers in Linguistics 44. 105–124.
Jenkins, Jennifer. 2000. The phonology of English as an international language. Oxford: Oxford University Press.
Johnston, Raymond Leslie. 1980. Nakanai of New Britain: The grammar of an Oceanic language. Canberra: Australian National University.
Kang, Yoonjung. 2003. Perceptual similarity in loanword adaptation: English postvocalic word-final stops in Korean. Phonology 20. 219–273.
Karimi, Simin. 1987. Farsi speakers and the initial consonant cluster in English. In Ioup & Weinberger (1987), 305–318.
Kawahara, Shigeto. 2007. Spreading and copying in phonological theory: Evidence from echo epenthesis. In Leah Bateman, Michael O’Keefe, Ehren Reilly & Adam Werle (eds.) Papers in Optimality Theory III, 111–143. Amherst, MA: GLSA.
Kiparsky, Paul. 2003. Syllables and moras in Arabic. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 147–182. Cambridge: Cambridge University Press.
Kitto, Catherine & Paul de Lacy. 1999. Correspondence and epenthetic quality. In Catherine Kitto & Carolyn Smallwood (eds.) Proceedings of AFLA VI: The Sixth Meeting of the Austronesian Formal Linguistics Association, 181–200. Toronto: Department of Linguistics, University of Toronto.
Kuijpers, Cecile & Wilma van Donselaar. 1997. The influence of rhythmic context on schwa epenthesis and schwa deletion in Dutch. Language and Speech 41. 87–108.
Ladefoged, Peter, Jenny Ladefoged, Alice Turk, Kevin Hind & St. John Skilton. 1998. Phonetic structures of Scottish Gaelic. Journal of the International Phonetic Association 28. 1–41.
Levin, Juliette. 1987. Between epenthetic and excrescent vowels. Proceedings of the West Coast Conference on Formal Linguistics 6. 187–201.
Martin, Pierre. 1998. À Québec, a-t-on l’schwa? In Yves Duhoux (ed.) Langue et langues: Hommage à Albert Maniet, 163–180. Leuven: Peeters.
Martínez-Gil, Fernando. 1997. Word-final epenthesis in Galician. In Fernando Martínez-Gil & Alfonso Morales-Front (eds.) Issues in the phonology and morphology of the major Iberian languages, 270–340. Washington, DC: Georgetown University Press.
Matteson, Esther & Kenneth L. Pike. 1958. Non-phonemic transition vocoids in Piro (Arawak). Miscellanea Phonetica 3. 22–30.
McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT.
McCawley, James D. 1968. The phonological component of a grammar of Japanese. The Hague & Paris: Mouton.
Michelson, Karin. 1989. Invisibility: Vowels without a timing slot in Mohawk. In Donna Gerdts & Karin Michelson (eds.) Theoretical perspectives on Native American languages, 38–69. New York: SUNY Press.
Miner, Kenneth L. 1979. Dorsey’s Law in Winnebago-Chiwere and Winnebago accent. International Journal of American Linguistics 45. 25–33.
Ní Chiosáin, Máire. 1995. Barra Gaelic vowel copy and (non-)constituent spreading. Proceedings of the West Coast Conference on Formal Linguistics 13. 3–15.
Oftedal, Magne. 1956. A linguistic survey of the Gaelic dialects of Scotland, vol. 3: The Gaelic of Leurbost, Isle of Lewis. Oslo: Aschehoug.
Olson, Kenneth S. 2003. Word minimality and non-surface-apparent opacity in Mono. Paper presented at the 4th World Congress of African Linguistics, Rutgers University.
Paradis, Carole & Darlene LaCharité. 1997. Preservation and minimality in loanword adaptation. Journal of Linguistics 33. 379–430.
Pearce, Mary. 2004. Kera foot structure. Unpublished ms., University College London.

21

Nancy Hall

Peperkamp, Sharon & Emmanuel Dupoux. 2002. A typological study of stress “deafness.” In Carlos Gussenhoven & Natasha Warner (eds.) Laboratory phonology 7, 203–240. Berlin & New York: Mouton de Gruyter. Piggott, Glyne L. 1995. Epenthesis and syllable weight. Natural Language and Linguistic Theory 13. 283 –326. Rice, Keren. 2008. Review of de Lacy (2006). Phonology 25. 361–371. Rose, Yvan & Katherine Demuth. 2006. Vowel epenthesis in loanword adaptation: Representational and phonetic considerations. Lingua 116. 1112–1139. Smith, Jennifer. 2006. Loan phonology is not all perception: Evidence from Japanese loan doublets. In Timothy J. Vance & Kimberly A. Jones (eds.) Japanese/Korean Linguistics 14, 63 –74. Palo Alto: CLSI. Smorodinsky, Iris. 2002. Schwas with and without active control. Ph.D. dissertation, Yale University. Steriade, Donca. 1994. Licensing by cue. Unpublished ms., University of California, Los Angeles. Steriade, Donca. 2003. The phonology of perceptibility effects: The P-map and its consequences for constraint organization. Unpublished ms., University of California, Los Angeles. Susman, Amelia L. 1943. The accentual system of Winnebago. Ph.D. dissertation, Columbia University. Uffmann, Christian. 2007. Vowel epenthesis in loanword adaptation. Tübingen: Max Niemeyer Verlag. Watson, Janet C. E. 2007. Syllabification patterns in Arabic dialects: Long segments and mora sharing. Phonology 24. 335 –356.

68 Deletion

John Harris

1 Introduction

In language, we often come across situations where a morpheme shows up in two alternating phonological shapes, one of which contains a vowel or consonant segment that is missing from the other. For example, in Samoan the root meaning ‘twist’ has two alternants: [milos] and [milo]. The form ending in [s] is found before a vowel-initial suffix (as in [milos-ia] ‘be twisted’), while the form lacking the [s] is found when the root falls at the end of a word. In such situations, the two alternants usually stem from a single historical source, and we can attribute the alternation between segment and zero to the action of some sound change. The question then is whether or not the historical form contained the segment that alternates in the present-day forms. Either the segment was absent from the original form and has since been inserted in certain phonological contexts, or it was present and has since been deleted in certain contexts. Which of these scenarios is correct depends on whether the segment’s occurrence is phonologically predictable or not. In the case of Samoan, the [s] must have been part of the original form of the root meaning ‘twist’, because its presence is unpredictable. There is no regular sound change that could have inserted the consonant without also incorrectly inserting it in other morphemes. The unpredictability of the root-final consonant is confirmed by observing roots that show alternations with consonants other than [s], such as [oso] – [osof-ia] ‘jump’, [tau] – [taul-ia] ‘cost’. The conclusion then is that Samoan has undergone a change that deleted consonants at the ends of words.

Whole-segment deletion is a pervasive phenomenon in the world’s languages. Much of the terminology phonologists use to describe it – including the term deletion itself – dates from nineteenth-century philology. Although the terms were initially applied to historical deletion processes, they have subsequently been extended into synchronic phonology. This is largely due to a well-established tradition of assuming that when a sound change affects a grammar it can remain there as an active phonological process. According to this model of grammar, a synchronically live process allows regularly alternating forms to be derived from a single underlying form that strongly resembles the historical form (chapter 93: sound change). In our Samoan example, this means that the form [milo] is derived from underlying /milos/ through the operation of a synchronic process that deletes a consonant when it is final in the word.

General terms used to describe whole-segment deletion include elision, loss, drop, and truncation. Although these terms continue to prove useful for descriptive purposes, they retain a strong flavor of the philological tradition within which they were conceived. There are at least two connotations that more recent research has shown should not be allowed to determine how we approach synchronic deletion. First, there is a procedural flavor to the terminology: deletion might suggest that a phonological form is derivationally altered by the irretrievable elimination of a sound. Second, there is an implication that what gets deleted is a phoneme-sized unit – an impression undoubtedly reinforced by the practice of using alphabetic transcription to present the relevant data. As we’ll see, neither of these connotations accurately reflects how deletion is treated in modern phonological theory.

This chapter is laid out as follows. §2 catalogues the main types of vowel and consonant deletion, sticking as far as possible to theory-neutral descriptions. §3 discusses different approaches to how deletion effects are represented in phonological grammars. The next two sections examine the phonological conditions under which deletion occurs. §4 addresses the issue of what causes consonant deletion and reviews claims that it is driven by an imperative to simplify syllable structure. §5 focuses on vowel deletion and evaluates the assumption that it inevitably triggers resyllabification. The chapter concludes in §6 by considering an alternative to traditional derivational approaches to deletion.

2 Segmental deletion

2.1 Consonant deletion

The following forms expand the Samoan example we started with (data originally from Pratt 1911, cited in Bloomfield 1933):

(1)   Samoan
          Simple   Perfective
      a.  olo      oloia       ‘rub’
          aŋa      aŋaia       ‘face’
          tau      tauia       ‘repay’
      b.  api      apitia      ‘be lodged’
          sopo     sopoʔia     ‘go across’
          milo     milosia     ‘twist’
          oso      osofia      ‘jump’
          tau      taulia      ‘cost’
          asu      asuŋia      ‘smoke’
          ŋalo     ŋalomia     ‘forget’

The form of the perfective suffix, [-ia], is evident from the examples containing vowel-final roots in (1a). The examples in (1b) contain consonants that appear in the perfective but not in simple forms. Since the alternating consonants vary unpredictably from one word to another, we can conclude that each belongs to a root rather than the suffix. The phonological process responsible for the alternation can be summarized as follows: a consonant is deleted at the end of a word. The result is that root-final consonants are elided when word final but survive when prevocalic. The effect of deletion in Samoan has been to bar any consonant from appearing word finally.

In a more restricted version of the process, final deletion only targets consonants of a certain type. In Lardil, for example, the only type of consonant permitted finally is apical. Stem-final consonants, which show up before a suffix vowel, delete finally if not apical; compare (2a) with (2b) (Hale 1973).

(2)   Lardil
          Bare noun   Accusative
      a.  pirŋen      pirŋen-in     ‘woman’
          kentapal    kentapal-in   ‘dugout’
      b.  ŋalu        ŋaluk-in      ‘story’
          taŋku       taŋkuŋ-in     ‘oyster’
          murkuni     murkunim-an   ‘nullah’

Lardil illustrates another restricted form of final deletion, where the targeted consonant is preceded by another consonant. The effect of simplifying final consonant clusters in this way can be seen in (3), where the second of two stem-final consonants drops when word final. The process is fed by an independent process of final vowel deletion (on which more below) and itself feeds the deletion of final non-apical consonants exemplified in (2).

(3)   Lardil
          Input           Output
          /jukarpa/       jukar      ‘husband’
          /wuluŋka/       wulun      ‘fruit’
          /kantu-kantu/   kantukan   ‘red’

As the form [kantukan] in (3) attests, cluster simplification in Lardil affects any type of consonant, including apicals. In other languages, a more restricted version of this process shows sensitivity to the type of consonant involved. In Catalan, for example, final cluster simplification targets coronals but not other places; compare (4a) with (4b) (Mascaró 1983).

(4)   Catalan
          Masculine   Feminine
      a.  əskerp      əskerpə     ‘shy’
          orp         orβə        ‘blind’
          ʎark        ʎarɣə       ‘long’
      b.  al          altə        ‘tall’
          for         fortə       ‘strong’
          ber         berðə       ‘green’
          san         santə       ‘saint’
          prufun      prufundə    ‘deep’
          blaŋ        blaŋkə      ‘white’


Cluster simplification can also target non-final consonants. Here, which member of a cluster drops varies in a way that is often attributed to differences in syllabification (on which more below). Pali illustrates the case where the second of two consonants deletes; as shown in (5), historical liquids (evident in the cognate Sanskrit forms) have been lost post-consonantally (Zec 1995).

(5)       Sanskrit   Pali
          prati      paṭi     ‘against’
          traana     taana    ‘protection’
          kramati    kamati   ‘walks’

In syllabic terms, the deletion exemplified in (5) simplifies an onset cluster. The reverse situation, where it is the first of two non-final consonants that drops, is typically described as coda deletion. Consider the example of Diola Fogny, where the only permitted type of coda–onset cluster is a partial geminate consisting either of a nasal plus homorganic obstruent or of a liquid plus coronal obstruent (Sapir 1965; Itô 1986). As illustrated in (6a), this sequence can arise through morphological derivation, in which case it survives in output (with appropriate adjustments for homorganicity).

(6)   Diola Fogny
          Input            Output
      a.  /ni-gam-gam/     nigaŋgam     ‘I judge’
          /ku-boɲ-boɲ/     kubomboɲ     ‘they sent’
          /na-tiiŋ-tiiŋ/   natiintiiŋ   ‘he cut through’
      b.  /let-ku-jaw/     lekujaw      ‘they won’t go’
          /ujuk-ja/        ujuja        ‘if you see’
          /kob-kob-en/     kokoben      ‘yearn’

If, however, the juxtaposition of two morphemes creates a consonant sequence other than a partial geminate, the first consonant is deleted. This is illustrated by the elision of the stops in the examples in (6b).

The deletion of one consonant before another is often accompanied by compensatory lengthening (see chapter 64: compensatory lengthening), where one segment lengthens to make up for the loss of a neighbor. The compensation can be undertaken by either the following consonant or the preceding vowel. The first scenario is illustrated by the development of earlier Romance to later Italian [nokte] > [notːe] ‘night’, the second by earlier to later English [nixt] > [niːt] (‘night’, subsequently diphthongized to [najt]).¹

¹ Singleton consonants are sometimes observed to elide intervocalically, as in Turkish inek ‘cow (nom)’ – ine-i ‘cow (poss)’. In this environment, deletion is almost always the final stage of historical lenition (see below and chapter 66: lenition).

2.2 Vowel deletion

Vowel sequences lacking an intervening consonant are cross-linguistically dispreferred. Whenever morpheme concatenation threatens to create a hiatus configuration of this sort, languages can take various measures to resolve it (chapter 61: hiatus resolution). One of the most favored of these is to delete one of the vowels, either the first, as in French (see (7a)), or the second, as in Karok (see (7b); Bright 1957).

(7)       Input         Output
      a.  French
          /lə ami/      lami      ‘the friend (masc)’
          /la ami/      lami      ‘the friend (fem)’
      b.  Karok
          /ni-axjar/    nixjar    ‘fill (1sg)’
          /ni-uksup/    nikʃup    ‘point (1sg)’
      c.  Ganda
          /ba-ezi/      beːzi     ‘sweepers’
          /ba-ogezi/    boːgezi   ‘speakers’

As with consonant cluster simplification, vowel deletion under hiatus can, depending on the language, be accompanied by compensatory lengthening. We see this in the Luganda examples in (7c), where a stem-initial vowel lengthens to make up for the loss of a prefix vowel (Clements 1986).

Vowel deletion can also occur between consonants. Syncope, as this process is called, is typically sensitive to stress or to the vowel’s position relative to a word’s edge. The examples in (8) illustrate two forms of stress-sensitive syncope in English (the syncope-prone vowels are identified in the discussion below).

(8)   English
      a.  potato, parade, career
      b.  opera, factory, chocolate, reference, family, camera

Syncope in English, which is both lexically and phonetically variable, targets unstressed syllables in two environments (Bybee 2001): (a) a word-initial unfooted syllable (as in (8a)) and (b) between a stressed and an unstressed syllable, where the consonant following the targeted vowel is a sonorant and more sonorous than the consonant preceding (as in (8b)). The effect of the second pattern is to contract a trisyllabic sequence into a bisyllabic trochaic foot.

One type of positionally conditioned syncope targets the middle syllable of a trisyllabic sequence located at either the left or the right edge of a word. The left-edge scenario is illustrated in (9a) by Tonkawa (Hoijer 1946), the right-edge one in (9b) by Tagalog (Kenstowicz and Kisseberth 1979).

(9)   a.  Tonkawa
          Input               Output
          /picena-n-oʔ/       picnanoʔ      ‘he is cutting it’
          /we-picena-n-oʔ/    wepcenanoʔ    ‘he is cutting them’
      b.  Tagalog
          Bare root   Patient
          bukas       buksin    ‘open’
          kapit       kaptin    ‘embrace’
          laman       lamnin    ‘fill’


In Tonkawa, it is the second vowel in a word that syncopates. As shown in (9a), this means that the first vowel of a root (here, the /i/ of input /picena/) shows up when it is also the first vowel in the word but is suppressed when the root is preceded by a prefix vowel. Meanwhile, the second root vowel (/e/ in /picena/) is suppressed when the root is unprefixed but shows up under prefixation. In Tagalog, the last vowel in a root shows up when it is also the last vowel in the word but is suppressed when a suffix vowel follows. A similar effect is seen in Turkish and Hindi. Deletion can also target vowels at the absolute edges of words, usually when the affected syllable is unstressed or in some other way non-prominent. We have already seen a word-final example (apocope) in Lardil (see (3)). Less common is the word-initial equivalent (aphaeresis), as in colloquial English ’bout, ’lectric.
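The edge-counting logic of the Tonkawa and Tagalog patterns can be made explicit with a small sketch (my own illustration in Python, not anything proposed in the literature under discussion; the function names are invented and the segment inventory is simplified):

    VOWELS = set("aeiou")

    def vowel_indices(form):
        return [i for i, seg in enumerate(form) if seg in VOWELS]

    def tonkawa_syncope(word):
        """Delete the second vowel from the left edge of the word."""
        vs = vowel_indices(word)
        return word[:vs[1]] + word[vs[1] + 1:] if len(vs) > 1 else word

    assert tonkawa_syncope("picenanoʔ") == "picnanoʔ"       # /picena-n-oʔ/
    assert tonkawa_syncope("wepicenanoʔ") == "wepcenanoʔ"   # /we-picena-n-oʔ/

    def tagalog_syncope(root, suffix):
        """Delete the last root vowel when a vowel-initial suffix follows."""
        vs = vowel_indices(root)
        return root[:vs[-1]] + root[vs[-1] + 1:] + suffix

    assert tagalog_syncope("bukas", "in") == "buksin"   # ‘open (patient)’
    assert tagalog_syncope("kapit", "in") == "kaptin"   # ‘embrace (patient)’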

3 Modeling segmental deletion

3.1 Linear-segmental analyses of deletion

The first basic question a model of segmental deletion needs to provide an answer to is this: how is an alternant that retains a segment related to one that lacks it? The best-established approach to this question can be broadly defined as derivational: the alternants are derived from a single underlying form that contains the segment, which is then removed under certain phonological conditions by some mechanism that typically recapitulates historical deletion. For several generations, this basic assumption has remained largely unchallenged in mainstream phonological theory.

The derivational approach raises a second basic question: what impact does the deletion mechanism have on the representations it operates on? Over the years, there have been quite radical changes in how phonologists answer this question. Interpreted derivationally, the term deletion might suggest a scenario where a segment disappears without trace. Certainly that is one of the readings implicit in the tradition of representing segments as phoneme-sized units strung together linear-fashion in phonological forms. The reading is reinforced by modeling the deletion mechanism as a rule that transforms an underlying or input form containing a given phoneme into a surface or output form that lacks it (for a textbook exposition of how deletion is treated in linear-derivational theory, see Kenstowicz and Kisseberth 1979). Combining these linear and derivational assumptions yields an analysis of Samoan along the lines of (10).

(10)  Input form           /milos/ ‘twist’    /milos-ia/ ‘twist (pass)’
      Rule C → ∅ / __ #
      Output form          [milo]             [milosia]
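As a minimal sketch of what the rule in (10) does (my own illustration in Python, not part of the original analysis), word-final consonant deletion can be stated as a string rewrite over linear forms; note how distinct inputs collapse onto a single output, which is the destructiveness discussed next:

    VOWELS = set("aeiou")

    def delete_final_consonant(form):
        """Apply C -> 0 / __ # to a linear string of segments."""
        if form and form[-1] not in VOWELS:
            return form[:-1]
        return form

    assert delete_final_consonant("milos") == "milo"        # ‘twist’
    assert delete_final_consonant("milosia") == "milosia"   # suffixed form: /s/ is prevocalic

    # Destructive: hypothetical /milo/ and actual /milos/ neutralize to the
    # same output, so the deleted segment is unrecoverable from [milo] alone.
    assert delete_final_consonant("milo") == delete_final_consonant("milos")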

This type of derivation can be described as destructive: the rule (“delete a word-final consonant”) destroys information that is present in the input. A question that immediately arises is whether this information is recoverable. That is, can a language learner or listener reconstruct an underlying segment on the evidence of an output form from which it has been removed? In the example in (10), could a learner/listener retrieve the underlying /s/ in Samoan /milos/ on the basis of encountering surface [milo]? If the output form [milo] were the only available evidence, the answer would almost certainly have to be no.² On the face of it, no information is present in [milo] that would alert the learner/listener to the presence of /s/ in the adult talker’s underlying form. In declarative theories of phonology, this lack of recoverability is enough to disqualify deletion as a derivational device (see e.g. Bird 1995; Scobbie et al. 1996).

For most phonologists, however, this definition of recoverability is too narrow. There is other evidence the learner/listener can potentially draw on to reconstruct an input segment that fails to appear in the output. In cases such as Samoan, the most obvious alternative source of evidence lies in the availability of alternants that preserve the deletion-prone segment in the output. According to a broader interpretation of recoverability, exposure to the [s] in a suffixed form such as [milosia] allows the learner to construct an underlying form /milos/, which can then be accessed whenever occurrences of the alternant without [s] are encountered. (For a summary of how this line of argumentation was deployed in earlier generative phonology, see again Kenstowicz and Kisseberth 1979.)

² The caveat “almost certainly” is necessary here in light of recent studies showing that a speaker’s production of a given form can be influenced in phonetically fine-grained ways by the existence of another, morphologically related form (see, for example, Ernestus and Baayen 2007). If this were true of the Samoan case, it would mean for example that the [o]s of the output forms [milo] and [ŋalo] would be phonetically distinct; each would bear some phonetic trace of the deleted root-final consonant that appears in the alternant found in the passive forms [milos-ia] and [ŋalom-ia], respectively. As far as I know, this effect has never been reported for Samoan deletion.

3.2 Non-linear analyses of deletion

Alternations are not the only source of evidence that can be exploited to recover deleted segments. A segment can sometimes leave its mark on output forms even in its absence. Historically, this situation can arise when a segment exerts some influence on a neighboring segment before being deleted. Crucially in such cases, deletion of the segment does not completely undo the effects of its influence. By way of illustration, consider again the examples of hiatus-breaking vowel deletion exemplified by the three languages in (7). The situation in French (7a) and Karok (7b) is just as we would expect if deletion is viewed as targeting phonemes: one vowel vacates the representation, and the other segments simply shuffle up, leaving no evidence that the vowel was ever present in the input. In Ganda, on the other hand, deletion is accompanied by compensatory lengthening of the remaining vowel (7c). This suggests that, although the vowel-quality properties of the affected segment have been removed, its position within the word has been preserved. The deletion is thus only partial. A similar effect is witnessed in tonal stability, where a tone that is cut loose by the deletion of the vowel to which it is initially associated survives by attaching itself to a neighboring vowel (see chapter 45: the representation of tone).

Stable weight and tone effects of this type are just part of a large body of evidence indicating that it is wrong to think of segments as indissoluble phonemic units. Rather, they are composites of non-linearly linked properties, each of which can be independently targeted by phonological processes, including deletion. Embracing this insight opens up different perspectives on deletion than that offered by linear-phonemic theory. In particular, the representational technology of non-linear theory has a significant impact on how we conceive of the notion of segment in deletion processes. In a non-linear representation, a sharp distinction is drawn between the feature content that specifies a segment’s phonetic quality and the position the segment occupies in syllable structure (chapter 14: autosegments; chapter 54: the skeleton). In principle, deletion could target either of these entities or both simultaneously. Simultaneous targeting, equivalent to the phonemic conception of deletion, is what is required for the type of situation illustrated by French in (7a) and Karok in (7b). Compensatory lengthening, such as we see in Ganda in (7c), requires an operation that removes the targeted segment’s feature content but leaves its syllabic position intact. The usual non-linear way of implementing this is along the lines of the derivation /ba-ezi/ → [beːzi] in (11) (from Clements 1986; see also Kavitskaya 2002; chapter 64: compensatory lengthening; and the papers in Wetzels and Sezer 1986). The x-slots represent positions in syllable structure (the syllabification details are not relevant at this point); the alphabetic symbols are shorthand for complexes of feature specifications; and the lines indicate associations between feature complexes on the one hand and syllabic positions on the other.

(11)  Input            Delinking          Spreading

      x x x x x        x x x x x          x x x x x
      | | | | |   →    | ≠ | | |    →     |  \| | |
      b a e z i        b a e z i          b a e z i

Deletion is implemented by an operation that delinks the feature complex specifying a from its syllabic position. Compensatory lengthening consists in the insertion of a new link between e and the position vacated by a. Representing deletion as in (11) relies on the principle that a segment must be linked to a syllabic position in order to be phonetically expressed (Goldsmith’s 1990 Linkage Condition). Based on this principle, regular alternations involving deletion can be modeled as the attachment vs. non-attachment of a segment to a position. There are two main ways in which this notion has been implemented. One is illustrated by the delinking step in (11): the alternating segment has an underlying association to a position but loses it under certain phonological conditions. The other approach is to assume that the segment lacks an underlying association but acquires one under the complementary set of conditions (for applications of this analysis to liaison consonants in French, see for example Encrevé 1988 and Charette 1991). Applied to our Samoan example, the latter analysis would posit an unassociated s at the end of /milos/. Before a vowel, this “floating” segment attaches to a position and thus surfaces (as in [milos-ia]); elsewhere it is left stranded and thus phonetically unrealized (as in [milo]).
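The mechanics of (11) can be made concrete with a small sketch (my own Python illustration; the data structures are invented for exposition and abstract away from syllable structure, exactly as (11) does). Melody units and x-slots are kept separate, so that deletion can strip away a vowel’s association while the slot stays behind for a neighbor to claim, in line with the Linkage Condition:

    VOWELS = set("aeiou")

    def delink_and_spread(melody, links):
        """Delink the first vowel of a hiatus pair from its x-slot(s), then
        spread the second vowel leftward onto the vacated slot(s)."""
        for i in range(len(melody) - 1):
            if melody[i] in VOWELS and melody[i + 1] in VOWELS:
                vacated = links.pop(i)                  # delinking
                links[i + 1] = vacated + links[i + 1]   # spreading
                break
        return links

    # Ganda /ba-ezi/: each melody unit starts out linked to one x-slot.
    melody = ["b", "a", "e", "z", "i"]
    links = {i: [i] for i in range(len(melody))}   # melody index -> x-slots

    links = delink_and_spread(melody, links)

    # Phonetic interpretation: a melody unit linked to two slots is long;
    # a unit linked to no slot at all (here /a/) goes unrealized.
    output = "".join(seg + ("ː" if len(links[i]) == 2 else "")
                     for i, seg in enumerate(melody) if i in links)
    assert output == "beːzi"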

3.3 Stray erasure

There is a consensus in the literature that unsyllabified or “stray” segments get erased and thus fail to be phonetically realized (McCarthy 1979; Steriade 1982; Itô 1986). However, there is less than general agreement about where this erasure occurs – whether within the phonological grammar or in phonetics. These two views can be summarized as follows:

(12)  a.  Phonological erasure
          Unsyllabified segments are absent from phonological output.
      b.  Phonetic erasure (Containment)
          All input segments are present in phonological output; unsyllabified segments are phonetically erased.

The notion in (12b) that all input segments are “contained” in output, including those that remain unsyllabified, was adopted by early OT (McCarthy and Prince 1993; Prince and Smolensky 1993). Although the notion has since been abandoned in favor of phonology-internal erasure (12a) in most OT work since Correspondence Theory (McCarthy and Prince 1995), it continues to figure in some current output-oriented approaches (see for example van Oostendorp 2004, 2007).

The two principles in (12) are empirically distinguishable, at least if we limit ourselves to a consideration of the effect each can have on phonological output. On its own, phonological erasure makes the prediction that a deleted segment will leave no trace of itself in an output form. Containment, on the other hand, predicts that, since an unlinked segment is still present in output, it should be capable of influencing or even triggering processes that affect linked segments. There is at least one type of evidence that, it is generally agreed, can be seen as conforming to the prediction made by Containment. It involves floating tones, which are tones that lack an association to a vowel, often as a result of their original host vowel being deleted (see chapter 45: the representation of tone). Where such a tone remains unassociated in output, it can usually be seen to influence the pitch of a following tone, either raising it (upstep, indicating a floating high) or lowering it (downstep, a floating low). This suggests an analysis under which tones can be delinked without being erased.

More controversial is the claim that similar evidence can be found to support the notion that non-tonal features or feature complexes can also be delinked without being phonologically erased. Most of the relevant evidence involves derivational opacity. A derivation is said to be opaque if it produces forms showing the effects of processes applying in specific ways not predicted by regular phonological conditions: either a process fails to occur where it would be expected to (underapplication), or it occurs where it would not be expected to (overapplication). It is worth considering this issue in this chapter, since deletion processes figure very prominently in the literature on opacity. This is because of their inherent potential to eliminate segments that trigger other processes.

As an illustration of opacity, consider the case of fortition in Cypriot Greek (Newton 1972; Coutsougera 2002). The input to the process is a glide resulting from a general Greek process that desyllabifies i before a vowel, as in /psumi-u/ → [psumj-u] ‘bread (gen)’ (cf. [psumi] ‘bread (nom)’). The basic fortition pattern in Cypriot Greek is illustrated in (13a) and (13b): the glide hardens to a velar stop after r and to a palatal stop after other oral consonants.

(13)  Cypriot Greek
          Input          Output
      a.  /çeri-a/       çerka       ‘hands’
          /teri-azo/     terkazo     ‘I match’
      b.  /mmati-a/      mmaθca      ‘eyes’
          /pulluði-a/    pulluθca    ‘little birds’
      c.  /ðonði-a/      ðoɲca       ‘teeth’
          /vasti-ete/    vascete     ‘he is held’
      d.  /xarti-a/      xarca       ‘papers’
          /karði-a/      karca       ‘heart’

As shown in (13b), the clusters produced by gliding and hardening are also subject to independent processes of spirantization and voice assimilation. The opacity arises in forms that are affected by another independent process, illustrated in (13c), which simplifies input three-consonant clusters by deleting the medial segment, as in /vasti-ete/ → [vascete] (*[vastcete]). The opaque examples appear in (13d). They contain clusters where r is followed by a palatal stop rather than the velar variant that the regular conditions on hardening would lead us to expect (cf. (13a)). Looking at the input forms, we can see where the source of the opacity lies: r is separated from the gliding/hardening site by a consonant. The intervening consonant fails to appear in output as a result of being deleted by the cluster simplification process.

Expressed in traditional serial-derivation terms, hardening must precede deletion (essentially following in the steps of historical sound changes). An ordered-rule derivation of [xarca], for example, runs something like this: /xarti-a/ → xartja → xartca → [xarca]. At the point in the derivation where hardening applies, the consonant adjacent to the target segment is not r but t, which determines that the hardened consonant is palatal in output.
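The ordering effect can be verified mechanically. Here is a minimal sketch (my own toy illustration in Python; the three rules are deliberately crude caricatures of the Cypriot Greek processes, and the function names are invented):

    def glide(form):
        """i -> j before a vowel."""
        return form.replace("ia", "ja")

    def harden(form):
        """j -> velar k after r, palatal c after other consonants."""
        out = []
        for i, seg in enumerate(form):
            if seg == "j":
                out.append("k" if form[i - 1] == "r" else "c")
            else:
                out.append(seg)
        return "".join(out)

    def simplify(form):
        """Delete the medial consonant of a three-consonant cluster."""
        vowels = set("aeiou")
        out = list(form)
        for i in range(1, len(out) - 1):
            if all(s not in vowels for s in out[i - 1:i + 2]):
                del out[i]
                break
        return "".join(out)

    # Hardening before simplification yields the attested opaque form:
    step1 = glide("xartia")             # xartja
    step2 = harden(step1)               # xartca: t, not r, is adjacent to the glide
    assert simplify(step2) == "xarca"   # attested

    # The opposite order would wrongly expose the glide to r:
    assert harden(simplify(glide("xartia"))) == "xarka"   # unattested *[xarka]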

There are two main output-oriented approaches to opacity involving deletion, distinguishable on the basis of whether they subscribe to the phonological or the phonetic version of stray erasure in (12). In the version of OT assuming phonological erasure, the basic approach to opacity is to allow the grammar to select a winning output candidate by cross-referring to a losing candidate that would figure as an intermediate form in an ordered-rule analysis. For example, in Cypriot Greek the attested output form [xarca] would be judged a more optimal output than *[xarka] on the grounds that it is the more similar of the two to non-surfacing *[xartca]. (For discussion of different approaches to how this cross-referencing can be achieved in OT, see McCarthy 2007.)

Containment approaches to deletion are equipped to treat opacity in a less obviously serial manner. Continuing with our Cypriot Greek example, consider the following analysis of [çerka] vs. [xarca].

(14)  a.  [çerka] ‘hands’            b.  [xarca] ‘papers’
          input:  /çeri-a/               input:  /xarti-a/
          output:                        output:
                σ       σ                      σ        σ
               /|\     /|                     /|\      /|
              x x x   x x                    x x x    x x
              | | |   | |                    | | |    | |
              ç e r   k a                    x a r  t c a


This analysis is of a type that was standard in early OT. It incorporates the widely held assumption that syllable structure is not present in lexical representation and is supplied by Gen, the mechanism that generates the set of candidate output forms. In (14a), all of the input segments are parsed into syllable structure in output. The k of [çerka] is the hardened output of i, the velarity reflecting its adjacency to r. In (14b), we see the effect of cluster simplification, driven by a high-ranked constraint banning complex codas (on which more later). The input t fails to be syllabified in output. In line with the Linkage Condition, a segment that is “underparsed” in this way is not phonetically realized. Nevertheless, in line with Containment, it remains in the output representation, where it can influence segments that are syllabified. In this example, t is in a position not only to trigger hardening but also to block r from causing the hardened output to be velar.

4 Conditions on consonant deletion

4.1 Consonant deletion and syllabification

What are the phonological conditions that favor whole-segment deletion? In answering this question, we need also to take into account processes of obstruent devoicing and consonant weakening, since these occur under similar conditions (see chapter 69: final devoicing and final laryngeal neutralization and chapter 66: lenition). In fact, deletion figures prominently in what is perhaps the most reliable historical definition of phonological strength: segment A is weaker than segment B if A passes through a B stage on its way to deletion (Hyman 1975; Harris 1994).

There is a well-established tradition of approaching the question from the viewpoint of syllable structure. This is based on two main assumptions: certain syllabic positions are particularly favorable to deletion, and deletion changes the syllabification of the phonological forms it targets. These assumptions are themselves founded on a widely accepted model of syllabification that can be summarized as follows:

(15)  Standard syllabification model
      a.  Sonority
          Syllable nuclei always correspond to sonority peaks (typically vowels).
      b.  Word edges
          i.  A word-initial consonant forms a syllable onset.
          ii. A word-final consonant forms a syllable coda.

These assumptions represent what can be considered the “standard” view of syllabification (see Vennemann 1972 and the literature summarized in Blevins 1995 and Zec 2007; see also chapter 33: syllable-internal structure, and chapter 55: onsets). However, it has increasingly been called into question, and this inevitably impacts on the validity of syllable-based analyses of deletion.

4.2 Consonant deletion and syllabic markedness

There are certain phonological contexts where consonants are especially vulnerable to deletion, particularly in clusters or at the end of a word. From the perspective of the standard model in (15), deleting a consonant in these contexts pushes syllable structure towards a less marked state. Removing a consonant from a cluster can open a previously closed syllable, simplify an onset, or reduce the size of a complex coda. Deleting a singleton consonant at the end of a word can be understood as having the same effect. On the other hand, there are phonological contexts that are resistant to deletion. This is especially true of consonants at the beginning of a domain such as the word, stem, or foot, and to a lesser extent of word-internal prevocalic consonants. Here too we can detect a syllabic preference, in this case for syllables to contain at least one onset consonant.

There is a long tradition of interpreting these patterns to mean that consonant deletion is actively driven by a preference for less marked syllable structure. The tradition has been updated in OT by formulating the preferences in terms of markedness constraints (see chapter 63: markedness and faithfulness constraints; also Prince and Smolensky 1993, Zec 2007, and the papers in Féry and van de Vijver 2003). In a given language, deletion can occur if any of these constraints outranks countervailing constraints that call for input consonants to be faithfully preserved in output. In what follows, this is the format I will employ to present the standard syllable-based approach to deletion, focusing on the following constraints:

(16)  a.  Syllabic markedness constraints
          Codas
          NoCoda: A syllable must be open.
          NoComplexCoda: A coda must contain no more than one consonant.
          Onsets
          Onset: A syllable must have an onset consonant.
          NoComplexOnset: An onset must contain no more than one consonant.
      b.  Segmental faithfulness constraint
          MaxC: An input consonant must be preserved in output.

4.3 Onset simplification

(17) compares the grammars of two language types, one with complex onsets, the other without. As illustrated in (5), Sanskrit represents type (17a), while Pali represents (17b). The table shows two different output analyses of a schematic input form /CCV/, where CC potentially syllabifies as a complex onset. In grammar (17a), NoComplexOnset is outranked by MaxC, which allows the two input consonants to show up as an onset cluster. The reverse ranking in grammar (17b) forces onsets to be simplex: one input consonant takes up the only slot available in the onset, and the other drops (symbolized by ∅).

(17)  Input /CCV/
      Language   Output   Structure       Constraint ranking
      a          .CCV     complex onset   MaxC >> NoComplexOns
      b          .C∅V     simplex onset   NoComplexOns >> MaxC
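The ranking logic of (17) can be emulated with a toy evaluator (my own sketch, not a full OT implementation; the constraint definitions are simplified, and the candidate set is stipulated by hand rather than generated by Gen):

    def max_c(inp, cand):
        """MaxC: one violation per input consonant absent from the output."""
        out = cand.replace(".", "")
        violations = 0
        for seg in inp:
            if seg in "aeiou":
                continue
            if seg in out:
                out = out.replace(seg, "", 1)   # consume one matching copy
            else:
                violations += 1
        return violations

    def no_complex_onset(inp, cand):
        """NoComplexOnset: one violation per syllable beginning with CC."""
        violations = 0
        for syll in cand.strip(".").split("."):
            onset = 0
            for seg in syll:
                if seg in "aeiou":
                    break
                onset += 1
            if onset > 1:
                violations += 1
        return violations

    def evaluate(inp, candidates, ranking):
        """Pick the candidate with the best violation profile, read
        lexicographically down the constraint ranking."""
        return min(candidates, key=lambda c: [con(inp, c) for con in ranking])

    candidates = [".pra", ".pa", ".ra"]
    # Sanskrit-type grammar (17a): MaxC >> NoComplexOnset keeps the cluster.
    assert evaluate("pra", candidates, [max_c, no_complex_onset]) == ".pra"
    # Pali-type grammar (17b): NoComplexOnset >> MaxC forces deletion.
    assert evaluate("pra", candidates, [no_complex_onset, max_c]) in (".pa", ".ra")

Nothing in this toy grammar decides which of the two consonants survives; that is the job of further constraints (in Pali, as (5) shows, it is the liquid that drops).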


A similar simplification effect is widely reported in first language acquisition. A child acquiring a language with complex onsets typically starts out by deleting the more sonorous of two onset consonants (as in bring and blue produced without the liquid). The deletion evidently reflects a developmental stage where the only type of onset in the child’s syllabic inventory is simplex. As is also typical of early phonology, the normally developing child nevertheless perceives the adult distinction between simple and complex onsets (as in boo vs. blue). This suggests that the liquid in consonant–liquid clusters is present in the child’s lexical representations, but is phonologically erased in output as a result of failing to find a berth in the child’s simplified onset (see Bernhardt and Stemberger 1998 and the literature reviewed there).

4.4 Coda deletion

The syllable coda is widely regarded as the consonant deletion site par excellence. Under the standard syllabification view in (15), the coda subsumes consonants in three phonological contexts: (a) a word-final singleton consonant, (b) any member of a word-final cluster, and (c) the first member of a word-internal heterosyllabic cluster. In an OT grammar, deletion of final singleton consonants (illustrated above by Samoan (1) and Lardil (2b)) results from MaxC being outranked by NoCoda. The reverse ranking defines languages that permit final consonants. This is shown in (18).

(18)  Input /VC]/
      Language   Output   Structure        Constraint ranking
      a          VC.]     final closed σ   MaxC >> NoCoda
      b          V.∅]     final open σ     NoCoda >> MaxC

This basic treatment extends to consonant deletion in non-final codas, such as occurs in Diola Fogny (6). As to the deletion of absolute word-final consonants in clusters (illustrated by Lardil in (3) and Catalan in (4)), the standard syllable-driven account is that it is motivated by a markedness imperative to reduce the complexity of codas. In OT terms, final cluster simplification occurs in languages of the type shown in (19b), where NoComplexCoda outranks MaxC (Prince and Smolensky 1993). The reverse ranking yields languages of type (19a), those with final clusters. The fact that words can be consonant final at all indicates that NoCoda is ranked relatively low in both of these grammars.

(19)  Input /VCC]/
      Language   Output   Structure      Constraint ranking
      a          VCC.]    complex coda   MaxC >> NoComplexCoda, NoCoda
      b          VC.∅]    simplex coda   NoComplexCoda >> NoCoda, MaxC

Word-internal cluster simplification in Diola Fogny is selective in the type of coda it targets, deleting only those that do not form part of a partial geminate (see (6)). On their own, the syllabic markedness constraints in (16) are not enough to derive this selective behavior. What is required is an additional constraint that bans a coda from bearing or “licensing” certain feature specifications unless they are assimilated from a following onset. (The formulation of constraints of this type is based to a large extent on proposals by Itô 1986 and Goldsmith 1989.) With this constraint ranked high, a faithful output candidate such as *[u.juk.ja] is non-optimal, since the coda [k] bears its own place and manner specifications. Under these circumstances, the lower-ranked constraint NoCoda asserts itself, and the optimal candidate is the coda-less form [u.ju.ja] (an example of what in OT is called The Emergence of the Unmarked; see chapter 58: the emergence of the unmarked).

4.5 Rethinking coda deletion

According to the standard view in (15b.ii), a word-final consonant has the same syllabic status as a word-internal coda. This makes the strong prediction that whenever consonant deletion targets codas it should strike both of these positions simultaneously (chapter 36: final consonants). There are certainly languages that bear this prediction out. In Lardil, for example, the set of non-apical singleton consonants that drop finally (as in (2b)) is largely the same as the set of consonants that are excluded from internal codas. In Samoan, the word-final deletion of root-final consonants shown in (1) coincides with an absence of internal closed syllables. Another example is provided by non-rhoticity in various languages, where historical r drops in exactly the combination of environments defined by the final-coda analysis. For example, in non-rhotic English, constricted r is suppressed both in an internal coda (as in carnal) and word finally (as in car). Examples such as these represent some of the core evidence in favor of the assumption that consonants in internal codas and word-final positions are syllabified the same (again see Blevins 1995 for a summary of the relevant literature).

However, alongside these examples we find deletion evidence that is difficult to square with the assumption. On the one hand are languages that preserve internal codas but either delete or lack word-final consonants; examples include Italian, colloquial Tamil, and Pali. On the other are languages that lack internal codas but have final consonants; examples include Kejaman (Strickland 1995), Yapese (Piggott 1999), and Central Sentani (Hartzler 1976). Where historical final consonants can be reconstructed for the first of these syllabically “hybrid” types of language, they have either been deleted or survive as onsets followed by an epenthesized vowel. In Pali, for example, deletion has wiped out historically word-final consonants, which can be reconstructed on the basis of a comparison with cognate forms in Sanskrit (see (20a)) (Zec 1995). However, as shown in (20b), the process has not deleted internal codas, which survive as the first position of a partial or full geminate.

(20)      Sanskrit    Pali
      a.  tatas       tato        ‘therefrom’
          punar       puno/puna   ‘again’
          praːpatat   papata      ‘hurled down’
      b.  danta       danta       ‘tamed’
          sapta       satta       ‘seven’
          karka       kakka       ‘precious stone’
          valka       vakka       ‘tree bark’
          karṇa       kaṇṇa       ‘ear’


It is true that internal codas in Pali have lost much of their historically contrastive feature content. However, the coda position itself has remained in place, picking up most or all of its feature interpretation from the following onset. The existence of syllabically hybrid languages of the Pali type is inconsistent with the prediction that deletion will target internal codas and final consonants simultaneously. It could be further interpreted as undermining the assumption that final consonant deletion is motivated by syllabic structure independently of word structure. Does a final consonant delete because of its position in the syllable or because of its position within the word? Of course we could just say that a final consonant is a sub-type of coda, one that is particularly susceptible to deletion. But this is at best unparsimonious: it would be simpler just to say that deletion targets a consonant that is final in the word rather than having to say that it is final in both the word and the syllable. In any event, the coda analysis faces additional problems, some of which become evident when we look more closely at consonant deletion in word-final clusters.

4.6 Final cluster simplification

According to the analysis outlined in §4.4, final cluster simplification results from the operation of the constraint NoComplexCoda in (16a). Two main objections have been leveled at this description. One has to do with the actual number of consonants permitted finally compared to internally, and the other with the phonotactics of final compared to internal clusters.

Cross-linguistically, there is a clear numerical asymmetry between final clusters and internal codas. Many languages, including English, which allow two or more consonants in final position only allow up to one consonant in an internal coda (Harris 1994; Duanmu 2008). It is hard to come up with clear examples of languages where the two positions exhibit equal complexity (still less languages where internal codas are actually more complex than final clusters). Moreover, in languages showing the numerical mismatch between the two positions, it is noteworthy that the phonotactics of final clusters typically mimic those of initial or internal clusters containing onsets. Two basic patterns are attested in languages of this type. In one pattern, found in English, French, and Irish, for example, final clusters with a falling or level sonority profile (mp, lt, pt, etc.) share their phonotactics with internal coda–onset clusters (chapter 53: syllable contact). In the other – found in French, Polish, and Icelandic, for example – rising-sonority clusters (pl, gl, dr, etc.) share their phonotactics with internal or initial complex onsets (Harris 1994; Dell 1995; Harris and Gussmann 2002). In some languages, the two patterns overlap in three-consonant clusters; in French, for example, we find [rkl] both medially (as in [serkle] ‘circle (vb)’) and finally (as in [serkl] ‘circle (n)’).

If final clusters are treated as complex codas, the problem posed by cluster evidence from languages of this type is that the same set of phonotactic restrictions has to be assigned to two or even three different syllabic conditions. There have been two different responses to this problem, both of which challenge the claim that consonant deletion in final clusters is driven by some need to simplify complex codas. One persists with the notion that phonotactics are syllabically conditioned but proposes an alternative syllabification of final clusters to the standard one. The other denies that cluster phonotactics are conditioned by syllable structure in the first place.

According to the first proposal, the simplest analysis of final clusters is to syllabify them in the same way as the internal clusters with which they show phonotactic parallels (Harris 1994; Dell 1995; Harris and Gussmann 2002). That is, final clusters of the non-rising sonority type (mp, lt, etc.) take the form coda plus onset, while those of the rising-sonority type (pl, gr, etc.) are complex onsets. While this captures the phonotactic parallels between final and internal clusters, it is questionable whether it provides a syllabic motivation for final cluster simplification. This is because there is a cross-linguistically strong preference for the second consonant to be targeted, regardless of whether the cluster is of the rising-sonority type or the non-rising (see the language surveys in Blevins 1995 and Côté 2004).

French provides us with an example of final deletion in rising-sonority clusters. In some varieties of the language the liquid is deleted in final clusters of this type, as in [pov] ‘poor’, [minist] ‘minister’ (Laks 1977; Côté 2004). Deletion in final clusters of non-rising sonority is illustrated by the Catalan examples in (4). Feminine forms such as [əskerpə] and [fortə] show internal coda–onset clusters of falling sonority. The same clusters occurred historically in final position in masculine forms (Badía Margarit 1951). Of these final clusters, only those that are heterorganic survive into present-day Catalan, as in [əskerp] (see (4a)). The rest – those consisting of partial geminates – have been subject to simplification through deletion of the final consonant, as in /fort/ → [for] (see (4b)).

One conclusion that has been drawn from facts such as these is that the context for final cluster simplification is best stated in terms of word-finality without reference to syllable structure at all (as argued by Côté 2004, for example). A further reason for reaching this conclusion is the fact that the deletion that targets final clusters does not also automatically target the same clusters when they occur elsewhere in the word. For example, the dropping of the liquid in French final rising-sonority clusters does not also target initial and internal onsets. Final liquid deletion in French is thus not symptomatic of some more general simplification of complex onsets, such as has occurred in Pali for example (see (5)). By the same token, dropping the obstruent in Catalan final homorganic clusters, as in (4b), does not also target the same cluster when it occurs word internally.

Acknowledging that final cluster simplification might be best described without reference to syllable structure naturally raises the question of whether the same might be said of deletion in other phonological contexts. This line of argumentation has led some phonologists to claim that all patterns of consonant distribution should be expressed in a strictly linear fashion by referring exclusively to immediately adjacent segments and word boundaries (see e.g. Steriade 1999; Blevins 2003). At issue here is the fundamental question of why certain phonological positions promote deletion in the first place.

4.7 What causes consonant deletion?

What is it about consonants in word-final and pre-consonantal positions that makes them especially vulnerable to deletion? The answer is likely to be tied to the fact that these positions are also favorable sites for neutralization. Deletion is just one of a collection of process types that target consonants in these positions, and all of these processes tend to have the effect of neutralizing segmental contrasts. For example, these are the preferred contexts where we find that obstruent devoicing neutralizes laryngeal contrasts, debuccalization neutralizes place contrasts (e.g. /p t k/ → [ʔ]), and vocalization neutralizes manner contrasts (e.g. /l r/ → [j]) (see chapter 69: final devoicing and final laryngeal neutralization). Deletion, too, can be seen as neutralizing, if it is understood as suspending the contrast between the presence and absence of a consonant, i.e. “merger with zero,” as historical linguists put it (see Campbell 2004).

According to one proposal, the neutralizing tendency of certain positions can be attributed to the fact that they provide weaker auditory-perceptual cues to consonants than other positions (see e.g. Steriade 1999; Wright 2004). This point can be illustrated by comparing the cues projected by oral stops in different positions. The offset or release phase of a plosive provides more robust cues to its identity than does its approach phase (Bladon 1986; Ohala 1990). Moreover, the offset is most robustly cued when it is released onto a following vowel. Before another consonant or at the end of words, offset cues can be attenuated or suppressed altogether. In the first instance, they may be masked by the closure phase of the following consonant. In the second, since the end of a word often coincides with the end of an utterance, a final stop often offsets into silence. Weakened cueing potential reduces the reliability with which listeners are able to detect consonant contrasts in particular positions, thereby increasing the likelihood that these contrasts will be eroded over time (see Ohala 1981, 1990). This would explain why preconsonantal and word-final positions are the most favorable sites for the neutralization of voice, place, and manner contrasts. Deletion, it can be argued, is an extreme manifestation of this overall effect (Côté 2004).

The question of whether consonant distribution and deletion are motivated by cueing potential or syllabic position continues to be debated. (For critiques of the “licensing-by-cue” approach, see for example Gerfen 2001 and Kochetov 2006.) It is not clear whether the two notions are in fact incompatible: the positions across which cueing potential is differentially distributed can in principle be stated in syllabic rather than linear terms.

On the face of it, neutralizing processes in general and deletion in particular seem to be communicatively dysfunctional, in that they suppress information that might otherwise be used to help keep words distinct from one another. However, the particular location of favored neutralizing positions within the word means that the impact of such processes on lexical distinctiveness is not as deleterious as it might otherwise have been. In lexical access, listeners rely much more heavily on phonological information at the beginning of words than towards the end (cf. Nooteboom 1981; Hawkins and Cutler 1988). It is no surprise, then, that of all phonological positions word-initial is the most resistant to deletion.

5 Conditions on vowel deletion

5.1 Syllabification and vowel deletion

If it is clear why there is a strong tradition of viewing consonant deletion as being driven by a pressure to simplify syllable structure, it is also easy to understand why no parallel tradition exists for vowel deletion. Deleting a vowel almost always increases syllabic markedness, at least according to the standard syllabification account. Apocope, illustrated by Lardil in (3), creates a final closed syllable (V.CV.] > VC.]). Syncope, illustrated by English in (8) and by Tonkawa and Tagalog in (9), creates a closed syllable and thus also a consonant cluster (V.CV.CV > VC.CV). There is probably only one pattern of vowel deletion that can be straightforwardly viewed as reducing syllabic markedness: the type of hiatus-resolving elision seen in (7). The second vowel in a hiatus configuration occupies a syllable without an onset, a marked situation (acknowledged in OT by the Onset constraint in (16a)).

Although apocope and syncope may not be syllabically motivated, they can generally be seen to be subject to other kinds of prosodic conditioning, specifically involving metrical or word structure or some combination of both. Moreover, the positions targeted by the two types of deletion can be broadly identified as prosodically weak or non-prominent. The processes are not always sensitive to stress, but when they are they typically target unstressed vowels. In stress-conditioned apocope, for example, the targeted vowel occurs either in the weak position of a foot or in an unfooted syllable. The emergence of word-final consonants in Catalan (see (4)) and certain other Romance languages is due to historical apocope of this type (Badía Margarit 1951; Lief 2006).

5.2 Resyllabification?

According to the sonority-driven model of syllabification in (15a), syncope and apocope necessarily have a much more profound impact on syllabification than consonant deletion: a vowel forms a local sonority peak and thus projects its own syllable nucleus, so removing it inevitably unleashes resyllabification. For example, by removing a final sonority peak, apocope forces a preceding consonant, originally an onset, into the coda of the preceding syllable. This account of apocope predicts that the resyllabified consonant should take on the kind of coda-like behavior it did not exhibit when it was an onset. For example, it should now be able to trigger closed-syllable shortening in the preceding vowel. The prediction is not generally borne out. This is not surprising, since for stress purposes a final consonant typically behaves extrametrically: unlike an internal coda, it does not contribute to the weight of the syllable occupied by the preceding vowel (see Hayes 1995; chapter 43: extrametricality and non-finality).

Let us briefly consider two examples where developments accompanying apocope have not followed the path predicted by the final-coda analysis. Modern English bears the marks of a limited form of historical closed syllable shortening, as a result of which certain consonants (very broadly speaking, non-coronals) can now appear in a word-internal coda only after a short vowel (see, for example, Myers 1987; Harris 1994). As the alternations in (21a) show, there is no parallel restriction when the corresponding consonants are word final.

(21)  English
      a.  perceptive   perceive
          reduction    reduce
      b.  hɔpə > hɔːp > hoːp > howp    hope
          bækə > bæːk > beːk > bejk    bake


In Middle English, apocope of unstressed schwa was accompanied by lengthening of the stressed vowel in a preceding open syllable (Minkova 1991). If consonants rendered word final by apocope had resyllabified as codas, the non-coronals amongst them would have been expected either to prevent or to undo lengthening of a preceding vowel. As the examples in (21b) illustrate, that did not happen. Apocope in English thus failed to disturb the general Germanic pattern whereby a singleton word-final consonant, unlike an internal coda, has no influence on the length of a preceding vowel.

In Sesotho, final i has undergone apocope under certain quite specific phonological conditions. The effect of the change is evident in the locative suffix [-eŋ], which derives historically from [-eni] (the form still attested in some of Sesotho’s Bantu sister languages) (Doke and Mofokeng 1957). There is clear evidence that apocope has not resulted in the nasal being resyllabified as a coda. One indication involves the widespread Bantu process that lengthens penultimate vowels in phrases of a certain type (with accompanying tonal effects not relevant here; see chapter 114: bantu tone). The examples in (22a) illustrate this process in Sesotho (where vowel length is not lexically contrastive).

(22)  Sesotho
      a.  hase mɔːthɔ        ‘It’s not a person’
          keaː tla           ‘I am coming’
      b.  re teŋ lapeːŋ      ‘We are at home’               *laːpeŋ
          ba ile sedibeːŋ    ‘They have gone to the well’   *sediːbeŋ

If the final [I] of the locative were a coda, penultimate lengthening would be predicted to target the vowel immediately preceding the suffix. However, as the forms in (22b) show, this is incorrect: the extra length falls instead on the suffix vowel itself. This suggests that the apocope that produced [-eI] has left the original bisyllabic structure of [-eni] undisturbed. The more general conclusion we might draw from the evidence represented in (21) and (22) is that apocope has little or no impact on syllable structure. In fact it is just the sort of evidence that has led some phonologists to conclude that a word-final consonant is not a coda but the onset of a syllable with a phonetically unexpressed nucleus (see e.g. Giegerich 1985; Kaye 1990; Burzio 1994; Harris 1994; Piggott 1999; Harris and Gussmann 2002; Scheer 2004). According to this account, apocope only targets the feature content of a vowel while leaving its nuclear position untouched. This is schematized in (23) (where we abstract away from the issue of how the suppression of feature content is best represented). (23)

a.  Pre-apocope             b.  Post-apocope

      σ      σ                    σ      σ
      |     / \                   |     / \
      x    x   x]                 x    x   x]
      |    |   |                  |    |
      V    C   V                  V    C

(A similar analysis has been applied word internally to syncope; see for example Charette’s 1991 treatment of schwa in French.)


In (23), a consonant exposed to the right edge of a word by apocope remains syllabified as an onset. Since at no point does the consonant become a coda under this analysis, it is predicted not to trigger closed-syllable shortening. This is consistent with the scenario exemplified by English in (21). Similar reasoning would explain why penultimate lengthening in Sesotho targets the vowel immediately preceding word-final [ŋ] rather than the vowel before that, as shown in (22). The vowel before [ŋ] counts as the penultimate nucleus in the phrase, because there is another to its right, namely the final empty nucleus heading the syllable containing the nasal (as in [lapeːŋØ]).

Bearing on the question of whether apocope triggers resyllabification is the fact that the process is often reported to be phonetically continuous along the dimensions of duration and periodicity (see e.g. Silva 1994; Gordon 1998; Myers 2005). Similarly continuous effects are found with syncope; examples include Japanese (Beckman 1996) and the two types of syncope in English shown in (9) (Bybee 2001). Phonetically continuous vowel deletion raises an awkward question for the standard model of syllabification: at what point do we decide that a fleeting vowel stops projecting a local sonority peak and causes a preceding consonant to resyllabify into a coda? This is particularly problematic where the gradience occurs within the speech of individual speakers. The sonority model suggests an implausible scenario in which a speaker's output flickers between one syllabification and the other. Under a stable-nucleus analysis, syllabification remains unaffected by vowel deletion, regardless of whether it is phonetically continuous or not. In the case of gradience, what varies is the manner in which the affected nucleus is phonetically expressed.

A further difference between the stable-nucleus and sonority-driven approaches to syllabification is that they offer empirically distinct perspectives on the relation between consonants flanking a syncope site. Under a sonority-driven analysis, the consonants start out as separate onsets but become syllabically adjacent after syncope. Newly formed clusters that happen to conform to existing phonotactic restrictions are wrongly predicted to be phonetically indistinguishable from clusters already existing outside the syncope context. In English, for example, the liquids in pairs such as pərade – prayed and pəlite – plight show differences in duration and voice onset time that listeners are able to utilize in word discrimination (Price 1980). One conclusion that might be drawn from this is that p and r are not phonotactically adjacent in pərade in the way that they are in prayed. This is consistent with the view that, rather than triggering resyllabification, syncope of a vowel leaves the flanking consonants in phonotactically independent onsets separated by a stable but variably expressed nucleus.
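The competing predictions just reviewed can be made concrete in a few lines of code. The following is a minimal sketch under our own assumptions: the toy sonority scale, the function names, and the simplified parsing logic are invented for exposition and belong to no analysis cited above. It contrasts a sonority-driven reparse after apocope with the stable-nucleus representation in (23), and reproduces the Sesotho penultimate-nucleus count.

```python
# Toy comparison of the two views of apocope. Assumptions (ours): a crude
# sonority scale, nucleus (N) = local sonority peak, onset (O) = consonant
# before a nucleus, coda (C) = anything else. "Ø" is an empty nucleus.

SONORITY = {"a": 5, "e": 5, "i": 5, "o": 5, "u": 5,
            "r": 4, "l": 4, "w": 4, "j": 4,
            "m": 3, "n": 3, "ŋ": 3, "s": 2,
            "p": 1, "t": 1, "k": 1, "b": 1, "d": 1}

def sonority_parse(segs):
    """Sonority-driven parse as in (15a): peaks project nuclei; a consonant
    is an onset if a nucleus follows it, otherwise a coda."""
    son = [SONORITY[s] for s in segs]
    nuc = [son[i] > (son[i - 1] if i > 0 else 0) and
           son[i] >= (son[i + 1] if i < len(segs) - 1 else 0)
           for i in range(len(segs))]
    return [(s, "N" if nuc[i]
             else "O" if i + 1 < len(segs) and nuc[i + 1]
             else "C")
            for i, s in enumerate(segs)]

def apocope_resyllabify(segs):
    """Sonority-driven view: delete the final vowel and reparse from scratch,
    so the stranded consonant inevitably ends up a coda."""
    return sonority_parse(segs[:-1])

def apocope_stable_nucleus(segs):
    """Stable-nucleus view as in (23): the final nucleus loses its melody (Ø)
    but keeps its skeletal position, so no consonant changes its role."""
    return sonority_parse(segs)[:-1] + [("Ø", "N")]

def penult_nucleus(parse):
    """Position of the penultimate nucleus -- the target of Sesotho lengthening."""
    return [i for i, (_, role) in enumerate(parse) if role == "N"][-2]

form = list("lapeŋi")                  # toy stand-in for the pre-apocope locative
print(apocope_resyllabify(form))       # final ŋ is reparsed as a coda
print(apocope_stable_nucleus(form))    # ŋ stays an onset, followed by ('Ø', 'N')

print(penult_nucleus(apocope_stable_nucleus(form)))    # 3: the suffix e, as attested
print(penult_nucleus(sonority_parse(list("lapeŋ"))))   # 1: wrongly the root a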

6 Conclusion

Over the years, there has been a significant shift in the way phonologists model phonological processes in general and whole-segment deletion in particular. Previously, deletion processes were conceived of as rules that remove segments from linear strings of input phonemes. That view gave way to one according to which deletion rules selectively target elements within non-linear representations. Later, input-oriented rules were largely abandoned in favor of the notion that deletion results from the operation of output-oriented constraints.


For all the differences amongst the various approaches reviewed here, there remains an important shared assumption: that morphologically related forms showing a regular alternation between a segment and zero should be derived from a single lexical source. However, this idea too has been increasingly called into question.

Under the standard derivational analysis of deletion in Samoan, recall, the historical consonant-final form of an alternating root such as [milo] ~ [milos-ia] is preserved in a single underlying representation and is deleted in final position by a synchronic analogue of sound change (/milos/ → [milo]). The main argument in favor of this analysis is economy: speakers need memorize only one lexical form of each alternating root instead of two. This reasoning was already being questioned as early as 1973 by Hale, citing evidence from Maori, a Polynesian relative of Samoan that shares the same historical deletion of word-final consonants. The evidence strongly suggests that speakers of present-day Maori have re-analyzed the original morphology of alternating forms by treating the formerly root-final consonant as now belonging to the suffix. This yields historical reparsings such as [awhit-ia] > [awhi-tia] 'embrace (pass)' and [hopuk-ia] > [hopu-kia] 'catch (pass)'. If this is correct – and it is corroborated by more recent evidence adduced by Eliasson (1990) – it indicates that the segment–zero alternation in this case is now a matter of allomorphy rather than regular phonology (chapter 99: phonologically conditioned allomorph selection): for each root, speakers simply memorize the appropriate consonant-initial suffix. (In fact, one of the re-analyzed suffix forms, -tia, has a much wider lexical distribution than the others and is the one now used by default in loanwords, neologisms, and code-switching; Eliasson 1990.) Similar historical restructurings involving consonant deletion have been reported for other languages (see e.g. the survey of the Sulawesi group of Western Malayo-Polynesian languages in Sneddon 1993).

Evidence from historical restructurings of this type presents an obvious challenge to any derivational treatment of segment deletion, be it formulated in terms of input-oriented rules or output-oriented constraints. In any case, the economy argument in favor of unique underlying forms is no longer as persuasive as it might once have seemed, since there are now known to be practically no limits on the storage capacity of lexical memory (Braine 1974; Landauer 1986).

Since Hale's (1973) paper, models of lexical storage and access have emerged that allow us to capture phonological connections between morphologically related forms without necessarily deriving them from a single lexical source. Part of the process of auditory word recognition involves a lexical search that leads to a specific neighborhood containing a number of forms with similar phonological characteristics (see the literature review in McQueen and Cutler 1997). Even if two alternants of the same morpheme are stored as separate lexical entries, they are thus likely to show up in the same search, depending on how phonologically similar they are to one another. In the Samoan deletion case, related forms such as milo and milosia can have separate addresses in the lexicon (which would account for their allomorphic behavior) and yet still be close neighbors and thus be accessed in unison. The fact that one form lacks a consonant that is present in the other is of course a source of dissimilarity.
However, this is mitigated by the location of the dissimilarity – away from the initial portion of the forms that is known to provide the most valuable information in word recognition.
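By way of illustration only – the decay weighting and function name below are our own invention, not a claim from the word-recognition literature just cited – here is a sketch of how a position-sensitive similarity metric keeps milo and milosia in the same neighborhood:

```python
# Toy positional similarity: mismatches cost less the later they occur,
# reflecting the greater informativeness of word-initial material
# (cf. Nooteboom 1981; McQueen and Cutler 1997).

def weighted_mismatch(w1, w2):
    n = max(len(w1), len(w2))
    cost = 0.0
    for i in range(n):
        a = w1[i] if i < len(w1) else None   # None = no segment at this position
        b = w2[i] if i < len(w2) else None
        if a != b:
            cost += 1.0 / (i + 1)            # position-discounted penalty
    return cost

print(weighted_mismatch("milo", "milosia"))  # ≈ 0.51: late, cheap mismatches
print(weighted_mismatch("milo", "kilo"))     # 1.0: one early, expensive mismatch
```

On this toy metric the alternants differ less than a pair differing word-initially would, which is the sense in which the milo ~ milosia dissimilarity is mitigated by its location.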


ACKNOWLEDGMENTS Thanks to the following for their valuable comments on earlier drafts of this chapter: Beth Hume, Marc van Oostendorp, Shanti Ulfsbjörninn, and two anonymous reviewers.

REFERENCES

Badía Margarit, Antonio. 1951. Gramática histórica catalana. Barcelona: Noguer.
Beckman, Mary E. 1996. When is a syllable not a syllable? In Takashi Otake & Anne Cutler (eds.) Phonological structure and language processing: Cross-linguistic studies, 95–123. Berlin & New York: Mouton de Gruyter.
Bernhardt, Barbara H. & Joseph P. Stemberger. 1998. Handbook of phonological development from the perspective of constraint-based nonlinear phonology. San Diego: Academic Press.
Bird, Steven. 1995. Computational phonology: A constraint-based approach. Cambridge: Cambridge University Press.
Bladon, Anthony. 1986. Phonetics for hearers. In Graham McGregor (ed.) Language for hearers, 1–14. Oxford: Pergamon Press.
Blevins, Juliette. 1995. The syllable in phonological theory. In John A. Goldsmith (ed.) The handbook of phonological theory, 206–244. Cambridge, MA & Oxford: Blackwell.
Blevins, Juliette. 2003. The independent nature of phonotactic constraints: An alternative to syllable-based approaches. In Féry & van de Vijver (2003), 375–403.
Bloomfield, Leonard. 1933. Language. New York: Holt.
Braine, Martin D. S. 1974. On what might constitute learnable phonology. Language 50. 270–299.
Bright, William. 1957. The Karok language. Berkeley & Los Angeles: University of California Press.
Burzio, Luigi. 1994. Principles of English stress. Cambridge: Cambridge University Press.
Bybee, Joan. 2001. Phonology and language use. Cambridge: Cambridge University Press.
Campbell, Lyle. 2004. Historical linguistics: An introduction. 2nd edn. Cambridge, MA: MIT Press.
Charette, Monik. 1991. Conditions on phonological government. Cambridge: Cambridge University Press.
Clements, G. N. 1986. Compensatory lengthening and consonant gemination in LuGanda. In Wetzels & Sezer (1986), 37–77.
Côté, Marie-Hélène. 2004. Consonant cluster simplification in Quebec French. Probus 16. 151–201.
Coutsougera, Photini. 2002. The semivowel and its reflexes in Cypriot Greek. Ph.D. dissertation, University of Reading.
Dell, François. 1995. Consonant clusters and phonological syllables in French. Lingua 95. 5–26.
Doke, Clement M. & S. Machabe Mofokeng. 1957. Textbook of Southern Sotho grammar. Cape Town: Longman.
Duanmu, San. 2008. Syllable structure: The limits of variation. Oxford: Oxford University Press.
Eliasson, Stig. 1990. English–Maori language contact: Code-switching and the free morpheme constraint. In Rudolf Filipović & Maja Bratanić (eds.) Languages in contact: Proceedings of the 12th International Conference of Anthropological and Ethnological Sciences, Zagreb 1988, 33–49. Zagreb: Institute of Linguistics.
Encrevé, Pierre. 1988. La liaison avec et sans enchaînement: Phonologie tridimensionnelle et usages du français. Paris: Éditions du Seuil.
Ernestus, Mirjam & R. Harald Baayen. 2007. Paradigmatic effects in auditory word recognition: The case of alternating voice in Dutch. Language and Cognitive Processes 22. 1–24.


Féry, Caroline & Ruben van de Vijver (eds.) 2003. The syllable in Optimality Theory. Cambridge: Cambridge University Press.
Gerfen, Chip. 2001. A critical view of licensing by cue: Codas and obstruents in Eastern Andalusian Spanish. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory, 183–205. Cambridge: Cambridge University Press.
Giegerich, Heinz J. 1985. Metrical phonology and phonological structure: German and English. Cambridge: Cambridge University Press.
Goldsmith, John A. 1989. Licensing, inalterability and harmonic rule application. Papers from the Annual Regional Meeting, Chicago Linguistic Society 25. 145–156.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Gordon, Matthew. 1998. The phonetics and phonology of non-modal vowels: A cross-linguistic perspective. Proceedings of the Annual Meeting, Berkeley Linguistics Society 24. 93–105.
Hale, Kenneth. 1973. Deep–surface canonical disparities in relation to analysis and change: An Australian example. In Thomas A. Sebeok (ed.) Current trends in linguistics, vol. 11: Diachronic, areal, and typological linguistics, 401–458. The Hague: Mouton.
Harris, John. 1994. English sound structure. Oxford: Blackwell.
Harris, John & Edmund Gussmann. 2002. Word-final onsets. UCL Working Papers in Linguistics 14. 1–42.
Hartzler, Margaret. 1976. Central Sentani phonology. Irian: Bulletin of Irian Jaya Development 5. 66–81.
Hawkins, John A. & Anne Cutler. 1988. Psycholinguistic factors in morphological asymmetry. In John A. Hawkins (ed.) Explaining language universals, 280–317. Oxford: Blackwell.
Hayes, Bruce. 1995. Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.
Hoijer, Harry. 1946. Tonkawa. In Harry Hoijer, L. Bloomfield, M. R. Haas, A. M. Halpern, F. K. Li, S. S. Newman, M. Swadesh, G. L. Trager, C. F. Voegelin & B. L. Whorf (eds.) Linguistic structures of native America, 289–311. New York: Viking Fund.
Hyman, Larry M. 1975. Phonology: Theory and analysis. New York: Holt, Rinehart & Winston.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts. Published 1988, New York: Garland.
Kavitskaya, Darya. 2002. Compensatory lengthening: Phonetics, phonology, diachrony. London & New York: Routledge.
Kaye, Jonathan. 1990. "Coda" licensing. Phonology 7. 301–330.
Kenstowicz, Michael & Charles W. Kisseberth. 1979. Generative phonology: Description and theory. New York: Academic Press.
Kochetov, Alexei. 2006. Testing licensing by cue: A case of Russian palatalized coronals. Phonetica 63. 113–148.
Laks, Bernard. 1977. Contribution empirique à l'analyse socio-différentielle de la chute de /r/ dans les groupes consonantiques finals. Langue française 34. 109–125.
Landauer, Thomas K. 1986. How much do people remember? Some estimates of the quantity of learned information in long-term memory. Cognitive Science 10. 477–493.
Lief, Eric A. 2006. Syncope in Spanish and Portuguese: The diachrony of Hispano-Romance phonotactics. Ph.D. dissertation, Cornell University.
Mascaró, Joan. 1983. La fonologia catalana i el cicle fonològic. Bellaterra: Universitat Autònoma de Barcelona.
McCarthy, John J. 1979. Formal problems in Semitic morphology and phonology. Ph.D. dissertation, MIT. Published 1985, New York: Garland.
McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox.


McCarthy, John J. & Alan Prince. 1993. Prosodic morphology I: Constraint interaction and satisfaction. Unpublished ms., University of Massachusetts, Amherst & Rutgers University.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
McQueen, James M. & Anne Cutler. 1997. Cognitive processes in spoken-word recognition. In W. J. Hardcastle & John Laver (eds.) The handbook of phonetic sciences, 566–585. Oxford: Blackwell.
Minkova, Donka. 1991. The history of final vowels in English: The sound of muting. Berlin & New York: Mouton de Gruyter.
Myers, Scott. 1987. Vowel shortening in English. Natural Language and Linguistic Theory 5. 485–518.
Myers, Scott. 2005. Vowel duration and neutralization of vowel length contrasts in Kinyarwanda. Journal of Phonetics 33. 427–446.
Newton, Brian. 1972. Cypriot Greek: Its phonology and inflections. The Hague: Mouton.
Nooteboom, Sieb G. 1981. Lexical retrieval from fragments of spoken words: Beginnings vs. endings. Journal of Phonetics 9. 407–424.
Ohala, John J. 1981. The listener as a source of sound change. Papers from the Annual Regional Meeting, Chicago Linguistic Society 17(2). 178–203.
Ohala, John J. 1990. The phonetics and phonology of aspects of assimilation. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 258–275. Cambridge: Cambridge University Press.
Oostendorp, Marc van. 2004. Phonological recoverability in dialects of Dutch. Unpublished ms., Meertens Institute, Amsterdam (ROA-657).
Oostendorp, Marc van. 2007. Derived environment effects and Consistency of Exponence. In Sylvia Blaho, Patrik Bye & Martin Krämer (eds.) Freedom of analysis?, 123–148. Berlin & New York: Mouton de Gruyter.
Piggott, Glyne L. 1999. At the right edge of words. The Linguistic Review 16. 143–185.
Pratt, George. 1911. Pratt's grammar and dictionary of the Samoan language. Reprinted 1977, Apia: Malua Printing Press.
Price, P. J. 1980. Sonority and syllabicity: Acoustic correlates of perception. Phonetica 37. 327–343.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Sapir, J. David. 1965. A grammar of Diola-Fogny. Cambridge: Cambridge University Press.
Scheer, Tobias. 2004. A lateral theory of phonology, vol. 1: What is CVCV, and why should it be? Berlin & New York: Mouton de Gruyter.
Scobbie, James M., John Coleman & Steven Bird. 1996. Key aspects of declarative phonology. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, vol. 2, 685–709. Salford: ESRI.
Silva, David J. 1994. The variable elision of unstressed vowels in European Portuguese: A case study. University of Texas at Arlington Working Papers in Linguistics 1. 79–94.
Sneddon, J. N. 1993. The drift towards final open syllables in Sulawesi languages. Oceanic Linguistics 32. 1–44.
Steriade, Donca. 1982. Greek prosodies and the nature of syllabification. Ph.D. dissertation, MIT.
Steriade, Donca. 1999. Alternatives to syllable-based accounts of consonantal phonotactics. In Osamu Fujimura, Brian D. Joseph & Bohumil Palek (eds.) Item order in language and speech, 205–245. Prague: Karolinum Press.
Strickland, Simon. 1995. Materials for the study of Kejaman-Sekapan oral tradition. Kuching: Sarawak Museum.


Vennemann, Theo. 1972. On the theory of syllabic phonology. Linguistische Berichte 18. 1–18.
Wetzels, W. Leo & Engin Sezer (eds.) 1986. Studies in compensatory lengthening. Dordrecht: Foris.
Wright, Richard. 2004. A review of perceptual cues and cue robustness. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 34–57. Cambridge: Cambridge University Press.
Zec, Draga. 1995. The role of moraic structure in the distribution of segments within syllables. In Jacques Durand & Francis Katamba (eds.) Frontiers of phonology: Atoms, structures, derivations, 149–179. London & New York: Longman.
Zec, Draga. 2007. The syllable. In Paul de Lacy (ed.) The Cambridge handbook of phonology, 161–194. Cambridge: Cambridge University Press.

69 Final Devoicing and Final Laryngeal Neutralization

Gregory K. Iverson
Joseph C. Salmons

1 Introduction

In this chapter, we survey a set of phenomena that have traditionally been given the simple rubric "final devoicing." This name, however, clearly conflates a number of different phonological phenomena – deletion of other laryngeal features, even feature addition – and the relevant general pattern is better characterized as "final laryngeal neutralization." Widely attested across the languages of the world, final laryngeal neutralization represents a prototypical positional merger of phonological contrasts. Nonetheless, the attested patterns vary along several dimensions; they provide challenges to current phonological frameworks, on the one hand, and allow good testing grounds for them, on the other. In particular, the topic is highly relevant to the ongoing debates over the relationship between universal grammar and language change in shaping sound systems, such as Blevins (2004, 2006) vs. Kiparsky (2006, 2008) (see also chapter 93: sound change).

In §2, after briefly reviewing some basic data, we provide a cross-linguistic survey of attested patterns in which (phonological) laryngeal features neutralize at right edges of prosodic constituents, while §3 introduces some aspects of the phonetics of final laryngeal neutralization with an eye toward what those aspects mean for the phonology of such processes. In §4, we then give an overview of the role of prosodic domains along which such patterns arise and generalize in sound change. In §5, we explore the major question in current phonology connected with this issue: the relationship between historical development and Universal Grammar. §6 summarizes and concludes.

We focus on laryngeal features, leaving aside broader neutralizations, such as loss of length distinctions in final position (e.g. Kümmel 2007: 133–136), although it is important to note that these are often connected (see Trubetzkoy 1977: 74). We also restrict discussion to processes affecting obstruents, although final vowel devoicing and other such phenomena are attested, as surveyed by Barnes (2002: ch. 3). Finally, we focus on "dynamic," alternation-supported neutralizations, leaving aside the "static" absence of contrasts, such as those in Thai and varieties of Quechua, in which laryngeal contrasts found in onsets (or word-initial position, etc.) are merely missing in codas (or word-final position, etc.), without morphophonemic alternation.

2 The phonological typology of final laryngeal neutralization

2.1 Some basic data

In many languages of the world, underlyingly voiced speech sounds do not show glottal pulsing at the ends of words, with the result that they are largely or often entirely indistinguishable from voiceless sounds. This occurs frequently with all obstruents, as in Catalan (Hualde 1992) or as illustrated here by nominal alternations from Polish (Rubach 1997: 553): (1)

              nom sg              nom pl
klub          [p]     klub-y          [b]     'club'
majonez       [s]     majonez-y       [z]     'mayonnaise'
staw          [f]     staw-y          [v]     'pond'
kandelabr     [pr]    kandelabr-y     [br]    'lamp'

As the last example demonstrates, Polish shows a variety of wrinkles in devoicing. Phonologically, extrasyllabic consonants, /r/ in this case, do not block devoicing (see also chapter 36: final consonants). As Tieszen (1997) shows, Polish manifests complex regional and acoustic patterns, which include evidence for incomplete neutralization for at least some speakers. Targets of this neutralization vary considerably across languages. Sometimes, not all obstruents alternate. In Turkish (Kopkallı 1993; Becker et al. 2008) most stops surface as voiceless finally (and with a degree of aspiration as well, per Vaux and Samuels 2005: 418), but may alternate with a voiced counterpart in suffixed word forms (data from Kopkallı 1993: 29; cf. also Nicolae and Nevins 2009; Feizollahi 2010): (2)

Turkish final stop neutralization

              nom sg      acc sg
/kab/         [kap]       [kabɯ]       'container'
/kanad/       [kanat]     [kanadɯ]     'wing'

This pattern, however, does not extend to fricatives: (3)

Turkish final fricative voicing alternation

[af]     'pardon'    ≠    [av]     'hunting'
[kas]    'muscle'    ≠    [kaz]    'goose'

While our focus is on obstruent neutralizations, we note that, in other languages, final sonorants (chapter 8: sonorants) may be subject to devoicing, as in Kaqchikel (Campbell 1998: 41):

(4)  Kaqchikel sonorant devoicing

     /kar/     [kar̥]     'fish'
     /kow/     [kow̥]     'hard'

We will not review the various technical approaches to laryngeal alternations among obstruents that have been offered in the past, such as Trubetzkoy's (1977: 71) analysis of German final neutralization in terms of archiphonemes, which treats the neutralized obstruents as identical neither with the underlying Media (lenis/voiced) nor Tenuis (fortis/voiceless). In modern work, these processes have been characterized as rules deleting the feature [voice] (or sometimes imposing [−voice]) or as constraints prohibiting that feature in final position, and that is our point of departure here as well.

But final laryngeal neutralization can also consist in the addition of a contrastive property, not just the loss of one, as exemplified in the cases involving removal of [voice] adduced above. In particular, the work of Vaux and Samuels (2005: 418–422) has shown that the addition of aspiration to stops in final position is cross-linguistically rather more common than has been appreciated. As they review for Kashmiri, contrasts between plain and aspirated final voiceless stops are merged in favor of the aspirated series, presumably via a rule that accrues the privative feature [spread glottis] (or [+spread glottis], in a binary system) to voiceless stops at the end of the word.1 (5)

Final aspiration in Kashmiri (Vaux and Samuels 2005: 420, citing Syeed 1978)

              nom sg      dat pl       agent pl
/wat/         [watʰ]      [watan]      [watau]      'way'
/katʰ/        [katʰ]      [katʰan]     [katʰau]     'story'

Similarly, in Klamath a three-way contrast among voiceless aspirated, ejective, and plain stops neutralizes to the aspirated series in word-final position. Such patterns clearly suggest feature addition. (6)

Final aspiration in Klamath (Vaux and Samuels 2005: 421, citing Blevins 1993)

/n’epʰ/      [n’epʰ]      'hand'             cf. [n’epʰeːʔa]     'puts on a glove'
/nč’ek’/     [nč’ekʰ]     'in little bits'       [nč’ek’aːni]    'small, little'
/nkak/       [nkakʰ]      'turtle (sp.)'         [nkakam]        'turtle (poss)'

Vaux and Samuels take the existence of patterns such as these to justify the interpretation of aspiration as "unmarked" in languages that add this feature finally, attempting to salvage the widely subscribed view that neutralization in final position regularly entails merger to the unmarked member of a contrast (chapter 2: contrast; chapter 4: markedness), albeit now defined on a language-specific rather than universal basis. More generally, however, aspiration patterns such as Vaux and Samuels reveal have been simply disregarded, with final laryngeal neutralizations often all being treated without further differentiation as "final devoicing." The idea that such non-assimilatory neutralization involves feature loss rather than addition has been particularly strong: Lombardi (2001: 13, passim) starts from the position that "The laryngeal distinctions of voicing, aspiration, and glottalization are often neutralized to plain voiceless in coda position," exemplified by what she analyzes as the removal of [voice] in German codas. More recently, Kiparsky (2008: 46) uses the term "devoicing" to characterize the Korean process of final neutralization across three series – usually treated as lenis (laryngeally unmarked), aspirated, and tense. But the phonetic feature [voice] plays no phonological role in Korean on most views (cf. Avery and Idsardi 2001; Ahn and Iverson 2004), appearing only allophonically in the otherwise voiceless lenis series in intersonorant contexts, because this is a position favorable to passive voicing.

This aside, however, the characterization of final laryngeal neutralization generally as final devoicing has far-reaching implications for the nature of the phonological component in human grammar. Thus, Kiparsky (2006: 222) argues forcefully that "marked feature values are suppressed in 'weak' prosodic positions." On the question of how to formalize this, he writes:

    The right way to do it in my opinion is that constraints can single out marked feature values (but not unmarked feature values). From these, with certain additional assumptions, we can build a system of constraints that asymmetrically prohibit marked feature values in weak positions. In processual terms, it predicts the existence of coda devoicing (coda depalatalization, debuccalization, deaspiration, etc.) and excludes coda voicing (coda palatalization, buccalization, aspiration, etc.).

¹ Sadaf Munshi (personal communication) indicates that there are systematic exceptions to final aspiration in Kashmiri, apparently connected with patterns of historical apocope, e.g. [op] 'person who can't keep a secret'; [mot] 'madman'. She further points out (Munshi 2006: 58ff., personal communication) that, in Burushaski, the three-way voiced/aspirated/voiceless unaspirated contrast is regularly neutralized to voiceless unaspirated in final position, affecting stops, affricates, and fricatives, even in loanwords, whereas in Kashmiri final (aspirating) neutralization affects only stops.

On this view, ex nihilo feature insertion or addition is impossible, prohibited by the "design of language." In the next section, however, we develop a typology of final neutralizations that includes a broad set of counterexamples to the claim that final neutralization invariably entails feature loss, or merger to the unmarked: final neutralization may instead involve motivated neutralization to feature values that are marked as conventionally (i.e. universally) construed.

2.2 Overview

We begin this section with an overview of laryngeal features, so as to provide a framework for discussing which ones participate in final neutralization and how. But it is clear at the outset that right edges of prosodic constituents are frequent loci of neutralization and reduction of many kinds. In this spirit, final laryngeal neutralization has often been regarded as a “subtype of final weakening” (Hock 1999: 19; Harris 2009; Honeybone, forthcoming), and thus related to lenition (chapter 66: lenition) and final consonant loss (chapter 68: deletion). Indeed, the view expressed just above (Kiparsky 2006) is that “weak positions” are actually governed by this directionality, so that final weakening in various forms is to be expected, but not final strengthening. We shall see, however, that a full typology of final laryngeal neutralization must also recognize the occurrence of final strengthening, or, as we shall refer to it, final fortition.


2.3 Laryngeal features

Now common in discussions of neutralization of "voicing" and other distinctions involving glottal states is the perspective known as "laryngeal realism" (cf. Iverson and Salmons 1995, 2003, 2006, 2007, 2009, the name taken from Honeybone 2005), which we will adopt here. On this view of laryngeal phonology, three privative features are considered to be sufficient to represent the known relevant contrasts in languages: [voice], [spread glottis] (henceforth [spread]), and [constricted glottis] ([constricted]) (see also chapter 17: distinctive features). In two-way systems, a "voice" language such as Spanish distinguishes marked voiced stops ([voice]) from unmarked voiceless unaspirated stops ([ ], using a blank space to indicate the absence of a phonological specification), whereas an "aspiration" language like English distinguishes marked aspirated or fortis stops ([spread]) from unmarked lenis (albeit often passively voiced) stops ([ ]). And a "glottalic" language like K'ekchi distinguishes marked ejectives ([constricted]) from unmarked, typically voiceless unaspirated stops ([ ]).

Combinations of these possibilities also exist to make up three-way contrasts, as in the aspirated–voiced–plain system of Thai ([spread], [voice], [ ]), the aspirated–ejective–plain system of Klamath ([spread], [constricted], [ ]) or the aspirated–implosive–plain system of Vietnamese ([spread], [constricted & voice], [ ]). Four-, five-, and even six-way systems are also attested, as laid out first by Ladefoged (1973) and charted under the present feature system by Iverson and Salmons (1995). For example, the four-way system of Hindi adds murmured (breathy or voiced aspirated) stops to the three types of distinctions found in Thai, via paradigmatic as well as syntagmatic combination of [voice] with [spread] ([voice], [spread], [voice & spread], [ ]). The thrust of this minimalist representation is thus to reconcile not only the phonetics but also the phonological, historical, and acquisitional behavior of speech sounds with their featural characterization.

Perhaps the most familiar neutralization of laryngeal contrasts among obstruents at the right edge of a prosodic constituent is the final neutralization process in German, documented comprehensively by Brockhaus (1995) and interpreted in the light of laryngeal realism by Iverson and Salmons (2007). Taking the realist perspective, German is an "aspiration language" in the sense described above, meaning that the laryngeal merger that takes place between fortis (aspirated) and lenis (passively voiced) obstruents syllable-finally is in fact final fortition (as implied by the descriptive German grammatical term, Auslautverhärtung), not final devoicing. (The domain varies regionally and stylistically between syllable-final and word-final.) In contrast to its sister Dutch, then, where final neutralization is devoicing (given that Dutch is a "voice language" in the sense described above), German neutralizes final laryngeal distinctions through feature addition rather than loss. The two types are illustrated below: (7)

a.  Final devoicing: /d/ → [t] (Dutch)

          d]σ
           =              (delinking of [voice])
        [voice]

    Phonemic contrast:    /d/        /t/
                          [voice]    [ ]

b.  Final fortition: /d̥/ → [tʰ] (German)

          d̥]σ
           :              (insertion of [spread])
        [spread]

    Phonemic contrast:    /d̥/      /tʰ/
                          [ ]       [spread]
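The privative system just outlined, and the two operations in (7), can be stated compactly as data and functions. The following is a minimal sketch under our own encoding – feature sets as Python frozensets, with dictionary keys and function names invented by us – not a formalism proposed in the text:

```python
# Privative laryngeal representations: an obstruent's Laryngeal content is a
# (possibly empty) set drawn from {voice, spread, constricted}.

PLAIN = frozenset()        # laryngeally unmarked: [ ]

CONTRAST_SYSTEMS = {
    "Spanish":    [frozenset({"voice"}), PLAIN],
    "English":    [frozenset({"spread"}), PLAIN],
    "K'ekchi":    [frozenset({"constricted"}), PLAIN],
    "Thai":       [frozenset({"spread"}), frozenset({"voice"}), PLAIN],
    "Klamath":    [frozenset({"spread"}), frozenset({"constricted"}), PLAIN],
    "Vietnamese": [frozenset({"spread"}), frozenset({"constricted", "voice"}), PLAIN],
    "Hindi":      [frozenset({"voice"}), frozenset({"spread"}),
                   frozenset({"voice", "spread"}), PLAIN],
}

def final_devoicing(lar):
    """(7a): delink [voice] in the coda; other content is untouched."""
    return lar - {"voice"}

def final_fortition(lar):
    """(7b): attach [spread] finally -- but only to an empty Laryngeal node,
    per the restriction to unmarked obstruents discussed below."""
    return frozenset({"spread"}) if not lar else lar

# Dutch /d/ merges with plain /t/; German lenis merges with fortis [spread]:
assert final_devoicing(frozenset({"voice"})) == PLAIN
assert final_fortition(PLAIN) == frozenset({"spread"})
```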

In some languages, lack of required final release or other factors may make it difficult to distinguish between feature addition and feature removal, but the former pattern is securely and robustly attested.2 For instance, in his treatment of Kashaya (Pomoan), Buckley (1994: 87–88) illustrates the rule of “Coda Aspiration” with respect to the palatal stop /c/: (8)

/s’uwac-i/       →    s’uwaci        'dry it! (sg)'
/s’uwac-me-ʔ/    →    s’uwacʰmeʔ     'dry it! (formal)'

In this language, an underlyingly "plain" or laryngeally unmarked stop at the end of a word-internal syllable becomes "aspirated phonologically, and not simply subject to some rule of obligatory final release at the phonetic level" (Buckley 1994: 88). The aspiration rule also applies to word-final stops in loanwords, such as /ˈcajnikʰ/ 'teapot' (cf. Russian /čajnik/) and /šakiˈtaqʰ/ 'puffin' (cf. Alutiiq /šakiˈta-q/) (1994: 100). Kashaya, in fact, shows a three-way contrast among plain, aspirated, and glottalic stops, but other processes besides coda aspiration contribute to the non-occurrence of plain finals. Thus, a suffix in a morphological category like the assertive (/=ʔ/) combines with a stem-final plain stop to yield a word-final glottalic (see Fallon 2002 on the notion of "fusion"): (9)

/qahmat=ʔ/    [qahmát’]    'he's angry'        cf. /qahmat/    [qahmáʔ]    'angry'

At the same time, non-verbs ending in a glottalic retain that ejective articulation (as in [hosiq’] 'screech owl'), whereas word-final plain stops in the native vocabulary, analyzed as laryngeally empty, debuccalize to glottal stop ([qahmáʔ] ← /qahmat/). In other words, the language aspirates plain stops in native word-internal codas (and word-finally in loanwords), retains word-final phonemically aspirated and glottalic stops, but debuccalizes remaining word-final stops, with the result that final plain stops are phonetically absent.

As already noted above, a collection of similar cases, including Kashmiri and Klamath, has been adduced by Vaux and Samuels (2005). Ejectives and plain stops in Klamath neutralize to aspirates word-finally (Blevins 1993), and Yu (2008, personal communication) reports the same pattern for Washo. In Kashaya, however, ejectives do not undergo the neutralization to aspirates that affects the language's plain stops, just as in Kashmiri the phonemically voiced stops escape the final aspiration to which plain stops are subjected (Vaux and Samuels 2005: 419). The range of these phenomena suggests that final fortition, in addition to the expression in (7b), may be accompanied by the loss of all marked laryngeal content, not just the removal of [voice], as expressed in (7a):

² Throughout, we will see complex interaction between release features and laryngeal neutralization. Like Rice (2009: 316), we understand even release features to be phonologically relevant.

(10)  Final laryngeal delinking (/d tʰ t’ dʰ . . ./ → [t])

          [obst]σ
             =              (delinking of the Laryngeal node)
          Laryngeal

      Contrasts:   /t/     /d/        /tʰ/        /t’/        /dʰ/ . . .
                    |       |           |           |            |
                   Lar     Lar         Lar         Lar          Lar
                    |       |           |           |            |
                   [ ]     [voice]   [spread]   [constr]    [voice] [spread]

The operation in (10) thus accounts for final devoicing, as in Dutch or Polish, but also for final deaspiration, as in Korean; and the effect of this generalized delinking in languages that combine voicing with aspiration (e.g. Sanskrit) or other properties is to neutralize all contrasting laryngeal manner types to the plain, voiceless unaspirated type. On this interpretation, then, final neutralization of the most common kind is effected by delinking of the Laryngeal node, with the consequence that the loss of any one of the contrasting laryngeal features in the system implies loss of all the others, too, if any (chapter 27: the organization of features). This appears to be accurate, as final devoicing in more complex systems goes hand in hand with the removal of other contrastive laryngeal gestures as well.

Final fortition, on the other hand, consists in imposition of the feature [spread], as per (7b), but appears to affect only laryngeally unmarked obstruents. Thus, aspiration in Kashaya accrues to final plain (unmarked) stops, but not to final ejectives (marked as [constricted]), and aspiration in Kashmiri similarly accrues to final plain stops, but not to phonemically voiced ones. Final fortition, in sum, affects the class of laryngeally empty obstruent configurations as per the refinement of (7b) given in (11), attracting the feature [spread] to a final Laryngeal node that is otherwise empty, or unmarked. The situation in Klamath or Washo then falls into place as a combination of final laryngeal delinking (10) feeding into final fortition (11). (11)

Final fortition (/t, d̥/ → [tʰ]) (Kashmiri, Klamath, German)

          [obst]σ
             |
          Laryngeal
             :              (insertion of [spread])
          [spread]
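Continuing the same toy encoding as above (our assumption, not the authors' notation), (10) and (11) can be run in sequence to check the feeding interaction just claimed for Klamath and Washo, and run singly for the Kashaya/Kashmiri pattern:

```python
# (10): delink the whole Laryngeal node -- every marked feature goes at once.
def final_laryngeal_delinking(lar):
    return frozenset()

# (11): [spread] docks to a final Laryngeal node only if that node is empty.
def final_fortition(lar):
    return frozenset({"spread"}) if not lar else lar

klamath = {"plain": frozenset(),
           "ejective": frozenset({"constricted"}),
           "aspirated": frozenset({"spread"})}

# Klamath/Washo: (10) feeds (11), so all three series surface as [spread] finally.
for series, lar in klamath.items():
    print(series, "->", sorted(final_fortition(final_laryngeal_delinking(lar))))

# Kashaya/Kashmiri: (11) alone, so marked series survive unchanged finally.
print(sorted(final_fortition(frozenset({"constricted"}))))  # ejective keeps [constricted]
print(sorted(final_fortition(frozenset({"voice"}))))        # voiced stop keeps [voice]
```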

As noted for Kashmiri above, neutralization to [spread] often leaves other, already laryngeally specified series untouched. For instance, Koyukon, Hupa, and Tlingit appear to aspirate unmarked stops finally, neutralizing with the [spread] series, but in each language the ejective series does not participate (Vaux and Samuels 2005: 418–421). Final neutralization via the addition of other features appears, in the literature of which we are aware, to be far less common. Neutralization to a phonologically voiced member of an opposition may occur, but it appears to be very rare and controversial, as reviewed in recent discussion about the analysis of Lezgian (Yu 2004; Blevins 2006, drawing data from Haspelmath 1993). This Nakh-Daghestanian language possesses a four-way laryngeal distinction in onsets (voiceless, aspirated, voiced, ejective). In some monosyllabic noun classes, ejective and voiceless obstruents neutralize to voiced word-finally or after consonants, with suffixed forms showing the underlying obstruent and unsuffixed stems the neutralized realization (data from Yu 2004): (12)

pab      pap-a         'wife'
rug      rukw-adi      'dust'
q’eb     q’ep’-ini     'cradle'
t’ib     tʰüp’-er      'owl'

As Blevins (2004, 2006) emphasizes, there is no natural phonetic or aerodynamic reason to make stops voiced in final position, where modal voicing in obstruents is difficult to maintain as well as to perceive; accordingly, the regular occurrence of final voicing, although in principle learnable, is expectedly uncommon. In this instance, the phenomenon arose coincidentally as the product of separate but converging historical events. Specifically, the synchronic pattern in Lezgian is the result of "natural" sound changes involving old voiced obstruents in both medial and final position: the medials underwent gemination (and became voiceless) and subsequently degeminated, leaving the historically voiced word-finals to stand in alternation with now voiceless word-medial stem-finals. Blevins (2004, 2006) suggests other candidates for synchronic final voicing, perhaps most clearly in Somali, where historically a neutralizing medial voicing process was followed by final vowel loss, thus creating a pattern of apparent final voicing. Kiparsky (2006: 225) reanalyzes Somali (and some other cases), however, as an aspiration language in our sense, rather than a voice language. That is, Somali final "voiced" stops are phonetically lenis and unaspirated, contrasting with aspirated stops marked by [spread]. This view appears to be consistent with available descriptions of Somali obstruent phonetics, most explicitly that of Orwin (1993), although the possibility remains open that some dialects of the language may employ [voice] rather than [spread].

Utterance-final position provides a potential universal starting point or trigger for anticipatory devoicing (to the following silence), whereas no such starting point exists to trigger utterance-final voicing. Phonologically, as a non-assimilatory addition of the feature [voice], final voicing is poorly motivated to begin with, inasmuch as feature additions are almost always sourced in assimilation. On the other hand, in the structurally parallel case of final fortition as described above, non-assimilatory addition of [spread glottis] appears to serve a prosodic edge-marking function, in association with the greater acoustic salience of fortis/aspirated over lenis/voiced consonants. This function is not compatible with final voicing, and would appear to be irrelevant, too, to the other laryngeal feature which in principle could be added finally, [constricted glottis]. Thus, neutralization per se to glottalic consonants is not securely attested in the literature, although superimposition of a glottal stop on final voiceless oral stops is seemingly common. In the terminology of Michaud (2004: 120), three different kinds of glottal gestures (glottal stop, glottal constriction, and creaky voice/laryngealization):

    can be characterized phonetically as follows: (1) Glottal stop is a gesture of closure that has limited coarticulatory effects on the voice quality of the surrounding segments. (2) Glottal constriction (also referred to here as glottal interrupt) is a tense gesture of adduction of the vocal folds that extends over the whole of a syllable rhyme. (3) Laryngealization (i.e. lapse into creaky voice), resulting in irregular vocal fold vibration, is not tense in itself. Glottalization is used as a cover term for laryngealization and glottal constriction . . . .

Final neutralization directly via any of these three gestures is not known to us, nor are we familiar with final glottalization in the sense of Michaud's (2) and (3). At the same time, a familiar allophonic and optional instance of (1) is found in many varieties of English, e.g. in bat [bætʔ], perhaps as a prosodic right-edge marker appearing on fortis stops. This pattern of "glottal reinforcement" then sets the scene for the loss of supralaryngeal stops in some dialects, especially with /t/, e.g. bat [bæʔ], bottle [baʔl̩]. Under the Dimensional Theory of feature representation advanced by Avery and Idsardi (2001, forthcoming), in fact, the gesture [constricted glottis] (which characterizes glottal closure, among other phenomena) is implemented via the dimension of Glottal Width, as a complement to the contrary gesture [spread glottis]. On this approach to feature organization, a relationship between these two contradictory gestures is thus predicted, so that an aspiration language such as English or German (with contrastive [spread glottis]) naturally gravitates toward implementation of its Glottal Width dimension in final position as either aspirated, when released (as in bat [bætʰ]), or as glottalically closed, when unreleased (as in bat [bætʔ]). As noted, many American speakers realize such codas simply as a glottal stop, with no supralaryngeal occlusion.

A striking illustration of the complementary relationship between aspiration and glottalization is found in McFarland's (2007) description of Filomeno Mata Totonac: "Glottal consonants [h] or [ʔ], or spread/constricted glottis features are required at certain domain edges in Filomeno Mata Totonac, and are disallowed domain-internally." The process is thus not neutralizing, as the language makes no laryngeal contrasts, but the differing manifestation of Glottal Width in domain-edge consonants is entirely predictable: glottalization in sonorants, aspiration in obstruents.

Thai – with its three-way laryngeal contrast of voiceless, voiced, and aspirated – shows a pattern of glottalization similar to English, albeit with a difference in frequency. As Esling et al. (2005: 388) describe it: "In English, unreleased final glottal reinforced oral stops [ʔp], [ʔt], and [ʔk] are optional allophonic variants, but in Thai they are the norm." Michaud (2004) reports that dialects of Chinese that retain Sino-Tibetan historical final stops (Fujian and Cantonese) typically accompany these with a glottal stop, too, and this co-articulation is taken as a step on the historical path toward loss of final oral stops altogether, first via debuccalization leaving only the glottal stop, as in modern Burmese and Min and Hakka Chinese, then loss of all trace (save tone) of the original stops, as in modern Mandarin.

Central to accounting for the attested patterns of featural addition in neutralization is doubtless the often-noted (e.g. Blevins 2004: 98–99, 2006: 138) tendency of languages not to require or allow final release of stops, thus partially or entirely obscuring distinctions carried by release features. One superficially exceptional-looking pattern underscores the role of release in such neutralizations: in Chong (Mon-Khmer; Silverman 2006: 79–80), final stops are unreleased, but the language nonetheless maintains a distinction between root-final (= word-final, since the language has no suffixes) glottalized and non-glottalized stops. This is suggested by Silverman to correlate with the timing of glottalization: Chong has pre- rather than post-glottalization. That is, because of the timing of the gesture, the distinction between phonologically glottalized and non-glottalized stops is not dependent on final release.3

The literature contains a number of other similar instances from the languages of the world, where some set of final obstruents adds glottalization without leading to neutralization; see Gurevich (2004: 137ff., 151ff.) on optional glottalization of final voiced stops in Lahaul (Pattani) and glottalization of voiceless stops before consonants or juncture in Maidu. In all these cases, glottal stop is present in the language, but the languages lack contrastive glottalization in the obstruent system. Similarly, the phoneme inventory of Dumi (Tibeto-Burman; van Driem 1993: 52–59) includes both /h/ and /ʔ/, the former restricted to onsets and the latter to codas. The language has three stop series: voiceless unaspirated, voiced, and breathy or murmured. The voiceless series appears finally unreleased with "simultaneous glottal stop," e.g. /leŋghok/ [leŋgɔʔk̚] 'throat'. The other two series do not appear in codas in native words, so that the effect of this glottalization is not neutralizing either. The related language Limbu (van Driem 1985: 7–16) shows similar patterns of glottalization of voiceless stops in codas without neutralization.

Debated as a possible case of final voicing (Blevins 2006; Kiparsky 2006), Tundra Nenets reinforces all consonants prepausally with a glottal stop (Salminen 1997: 31–32; see also Janhunen 1986: 81–83), which appears variably in word-internal codas as well. Other less secure cases are closely parallel, like the glottalization reported for all three voiced stops (/b d g/) in syllable codas in Kamassian (an extinct South Samoyedic language), from Kümmel (2007: 187–188): (13)

b → ʔb ~ ʔ / __ #

As in all other languages we know to have final glottalization, this reinforcement is not contrastive in Kamassian. Like Chong, this involves pre- rather than post-glottalization and, like English, it appears to be connected with facultative loss of final stops for some speakers, as noted above for forms like [bæʔ] bat. Kümmel in fact suggests that loss of the original coda consonant may have triggered the glottalization. On the other hand, Barnes (2002: 210ff.) follows Hyman (1988) in arguing that "at least in many cases the epenthetic final glottal stop so common in the languages of the world is ultimately the phonologization of allophonic phrase-final creak." In any case, the occurrence of word- or phrase-final glottalic gestures does not itself lead directly to obstruent neutralization, as far as we can tell, but rather does so only indirectly, through concomitant loss of oral gestures. The full possible typology of final laryngeal neutralization, then, includes the patterns shown in (14).

³ If correct, note that this suggests how phonetics can shape phonology: the gestural and timing patterns used to realize a feature (see Henton et al. 1992) appear to correlate with what does or does not happen phonologically, but see also Howe and Pulleyblank (2001).

(14)  Typology of possible final laryngeal neutralizations

a.  deletion of [voice] (Polish, Dutch, Catalan)
b.  deletion of [spread] (Korean)
c.  deletion of [constricted] (Hup, as discussed below)
d.  deletion of [voice] and [spread] (Sanskrit, Burushaski)
e.  deletion of [voice] and [constricted] (no clear cases)
f.  insertion of [voice] (probably Lezgian, possibly Somali dialects)
g.  insertion of [spread] (Kashmiri, Eastern Armenian, Kashaya, German)
h.  insertion of [constricted] (final glottalization; no clear neutralizing cases)
i.  insertion of [voice] and [spread] (final murmuring; no clear cases)
j.  insertion of [voice] and [constricted] (final laryngealization; no clear neutralizing cases)

Of these types, deletion of [voice] (a) and insertion of [spread] (g) are widely and securely attested.4 While Kiparsky (2008: 46) asserts that coda neutralizations in general "typically" go to unmarked values, he concludes that "the direction of voicing neutralization [is] universal" (2008: 53). As we have seen, however, at least in the particular instance of laryngeal neutralizations, the strong interpretation of this claim is simply false. Indeed, it is unclear to us whether (a) is even significantly more common among voice languages than (g) is among aspiration languages. Neutralization via the deletion of other contrasting laryngeal features is exemplified in the deaspiration of (b) and the combination of deaspiration with devoicing in (d). Deglottalization via the deletion of [constricted], on the other hand, appears to be rare, but it is found in Hup (see below; on the view that laryngeal features are privative, thus ruling out the insertion of [−voice] or [−spread], see also chapter 7: feature specification and underspecification). And the devoicing in (a) need not necessarily lead to the loss of [constricted]: in Dhaasanac (Tosco 2001: 19–20; Blevins 2006: 143), for instance, word-final obstruents devoice but do not lose contrastive [constricted], so that the implosive series surfaces in final position as voiceless glottalized stops, [ʔp ʔt ʔk]. Fallon (2002) further shows that [constricted] often functions independently of other laryngeal features in other settings.

While the insertion specifically of [voice] (f) is perhaps only marginally instantiated, the deletion of [spread] (b) appears to be tied to the more general removal of all marked laryngeal features. Rather commonplace, however, are the opposites of these: the deletion of [voice] and the insertion of [spread]. Like deletion of [constricted] (c), insertion of contrastive [constricted] (h), (i) is not securely attested, although the literature on this question is more limited to date. Notable is that non-contrastive glottalization via insertion of [constricted], or laryngealization via insertion of simultaneous [constricted & voice], is a frequent optional process across a set of languages, whereas final murmuring via insertion of [voice & spread] is not known to us either as a neutralization or as an allophonic enhancement.

In complex laryngeal systems, with three-way or more contrasts, it often appears that not all series neutralize in a merger context. In the Amazonian language Hup, the glottalized series of stops (variably but non-contrastively voiced elsewhere) merges in final position with the plain voiceless series, although the neutralization is perhaps not always complete in nasal contexts (Epps 2008, personal communication). In the same environment, however, the language's phonemically voiced plain stops do not devoice (and so do not merge with voiceless and deglottalized finals), but rather post-nasalize, making them phonetically even more distinct from the other series. (Obstruent post-nasalization is an enhancement suggestive of "hypervoicing" in the sense of Henton et al. 1992.)

In summary, then, (a), (b), (d), and (g) are relatively well-attested neutralizations; (c), (e), and (f) seem reasonable, but are not widely attested; (h) is marginal (but learnable); and (i) and (j) appear to be unattested and are perhaps impossible, as we know of no solid cases of neutralization to compound features. Below, in §4, we consider the diachronic paths that may have given rise to some of these asymmetries.

⁴ Indeed, the association between [voice] in our sense and devoicing is strong enough that van Rooy and Wissing (2001: 326) raise the possibility that phonemic [voice] might imply final devoicing. For [voice] languages that do not appear to have devoicing, like French and Ukrainian, they write that "the phonetics and phonology of these languages need careful investigation, as it appears from this paper that much of what appears to be variable from a phonological perspective can be explained from a phonetic perspective." While final devoicing in [voice] languages may be underreported, the widespread tendency for devoicing could also be the result of a strong historical bias toward deletion of [voice], which would find support in a variety of phonetic factors, many discussed below.
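For reference, the typology in (14) can be recast as a small table of delete/insert operations together with the attestation status the text assigns each pattern; the encoding below is ours and purely expository. Insertion patterns, recall, target laryngeally empty obstruents, as per (11).

```python
# (14) as data: (features deleted, features inserted, status per the text).
TYPOLOGY = {
    "a": ({"voice"}, set(), "widely and securely attested"),
    "b": ({"spread"}, set(), "attested (Korean)"),
    "c": ({"constricted"}, set(), "rare (Hup)"),
    "d": ({"voice", "spread"}, set(), "attested (Sanskrit, Burushaski)"),
    "e": ({"voice", "constricted"}, set(), "no clear cases"),
    "f": (set(), {"voice"}, "marginal (probably Lezgian)"),
    "g": (set(), {"spread"}, "widely and securely attested"),
    "h": (set(), {"constricted"}, "no clear neutralizing cases"),
    "i": (set(), {"voice", "spread"}, "no clear cases"),
    "j": (set(), {"voice", "constricted"}, "no clear neutralizing cases"),
}

def apply_pattern(lar, pattern):
    """Apply a (14)-type pattern to one Laryngeal specification."""
    deleted, inserted, _ = TYPOLOGY[pattern]
    lar = set(lar) - deleted
    return lar | inserted if not lar else lar   # insertion only into empty nodes

# Pattern (d) on a Sanskrit murmured stop, {voice, spread} -> plain:
print(apply_pattern({"voice", "spread"}, "d"))   # set()
```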

3 How the phonetics of final neutralization informs the phonology

Phonetically, final devoicing in particular has invited appeals to the aerodynamics of speech, according to which utterance- or breath-group-final edges are produced with reduced pulmonary pressure; others have attributed final devoicing to a phonological assimilation to the following silence (cf. Hock 1999 on both points). And for stops in particular, it has been argued that modal voicing is difficult to maintain in general; see Gamkrelidze (1975), although Westbury and Keating (1986) advise that solid data on this point are limited.

While neutralization is typically categorical phonologically, the phonetic cues to laryngeal distinctions, including those in final position, have proven to be remarkably complex. With respect to the ostensibly simple issue of English "voicing," Lisker (1986) alone catalogues 16 distinct cues for stops in medial position, including glottal pulsing, consonant and vowel duration, and changes in fundamental frequency and in the first formant. Lisker notes, however, that the list is hardly exhaustive, and more recent work shows that still other factors, including amplitude, also play a role. The function of these cues varies of course by prosodic context, and while stressed word-initial position may be captured relatively straightforwardly by measurement of Voice Onset Time delay, final position proves particularly elusive. In fact, Rodgers et al. (2010) show that, for a set of speakers from the American Upper Midwest, a different range of acoustic characteristics provides the best correlation for distinguishing word-final /t/ from /d/ in both frame sentences and running speech. These include RMS amplitude, rate of change of RMS amplitude, and, even more specifically, amplitude of individual harmonics and formants. In short, the full acoustic picture of final laryngeal distinctions is far from clear, even for a well-studied language like English.


We lay out in the next paragraphs some implications that this phonetic complexity carries for the phonology of final laryngeal neutralization. First, we argue that a successful account must pay attention to the role of multiple cues and trading relations in producing distinctions (§3.1). Second, we review briefly the possibility that final laryngeal distinctions undergo at most “incomplete neutralization,” concluding that complete phonological neutralization is attested (§3.2). This leads to a third point reaching beyond phonetics, namely the range of other effects that come into play in neutralization (§3.3).

3.1 Multiple cues

While an extensive body of research shows that final neutralization can be complete (see below for references), the presence of multiple possible and actual phonetic cues to laryngeal distinctions raises questions about the mapping between the phonetics of such cues and phonological contrasts. (For one view on the topic, see Kingston et al. 2008.) Evidence from variation in American English points to the importance of “trading relations” among cues. Purnell et al. (2005a, 2005b) show that, over several generations of real- and apparent-time speech from eastern Wisconsin, speakers have systematically changed how they realize these contrasts in final position, from exploiting actual glottal pulsing early on to later relying on the duration of the preceding vowel.5 A perception test showed that listeners who were speakers of other varieties of American English did not, in general, have difficulty interpreting either set of cues to laryngeal distinctions. That is, the phonetic realization of the distinction has changed over time in this region, but the phonological distinction, even for outsiders, has remained stable.
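The notion of a trading relation can be given a schematic rendering: in the toy cue-weighting below, a token with little glottal pulsing is still categorized as “voiced” when a long preceding vowel compensates for it. All of the numbers (weights and cue values) are invented for illustration and have no empirical status.

```python
import numpy as np

# Hypothetical perceptual weights for two cues to final /d/ vs. /t/:
# proportion of the closure with glottal pulsing, and duration of the
# preceding vowel (ms). Weights and bias are invented for illustration.
w_pulsing, w_duration, bias = 8.0, 0.05, -9.0

def p_voiced(pulsing, vowel_ms):
    """Probability of a 'voiced' percept under the toy cue-weighting."""
    score = w_pulsing * pulsing + w_duration * vowel_ms + bias
    return 1 / (1 + np.exp(-score))

# An older-style token: strong pulsing, shortish vowel.
print(p_voiced(pulsing=0.9, vowel_ms=120))   # ~0.98 -> "voiced"
# A newer-style token: little pulsing, but a long vowel trades for it.
print(p_voiced(pulsing=0.2, vowel_ms=230))   # ~0.98 -> also "voiced"
```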

3.2 Incomplete neutralization in production

Although many cases of final laryngeal neutralization have been described as effecting complete merger, a long thread of work has argued that neutralization is sometimes not truly complete, often in the service of challenging the notion of phonological contrasts (e.g. Dinnsen and Charles-Luce 1984; Port and O’Dell 1985; Charles-Luce and Dinnsen 1987). Fourakis and Iverson (1984) attribute the incompleteness effect reported for German to a laboratory artifact introduced by awareness on the part of participants (who presumably also spoke English, a language with a final distinction) as to the purpose of the experiment, resulting in partially hypercorrect pronunciations of finals with traces of their morphophonemically lenis properties rather than as fully neutralized fortis obstruents. But under conditions in which the real purpose of the experiment was concealed in the guise of a morphological exercise (strong verb conjugation) rather than presented as an (apparently intimidating) evaluation of pronunciation, participants neutralized German final obstruents completely. Jassem and Richter (1989) found that final neutralization under conditions such as these is also complete in Polish – contra Slowiaczek and Dinnsen (1985) – and Kim and Jongman (1996) report complete neutralization of final manner contrasts in Korean.

5 In contrast to the productions of earlier generations, the youngest group of speakers appears to be developing final laryngeal neutralization. Evidence of devoicing in northwestern Indiana can be found in José (2009), with reference to a range of other studies. The pattern appears to be stable over time in that community.

3.3 Incomplete neutralization in perception

Studies seeking to establish incomplete neutralization in perception face a further challenge: the acoustic signal turns out not to be the only clue listeners have as to whether a form, even a nonce form, belongs to one or the other class (chapter 98: speech perception and phonology). Ernestus and Baayen (2003: 6), for instance, hypothesize that “Speakers recognize that there is neutralization and base their choice for the underlying representation on the distribution of the underlying representations among existing morphemes, serving as exemplars.” They show that the Dutch lexicon has clear asymmetries with regard to the phonotactics of final underlying voice (cf. also Ernestus and Baayen 2007 and other papers in van de Weijer and van der Torre 2007). The distribution of word-final labial stops in Dutch, for example, is heavily biased toward underlyingly voiceless, but labial fricatives are even more heavily biased toward voiced.6 A production experiment showed that speakers treated novel forms in line with those patterns, leading Ernestus and Baayen (2003: 31) to conclude:

First, our data show that the underlying [voice] specification of final obstruents in Dutch is predictable to a far greater extent than has generally been assumed. It is predictable not only for linguists having computerized statistical techniques at their disposal, but also for naive speakers, since they use this predictability in language production. Second, we see that the predictability is based on the similarity structure in the lexicon.
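The distributional inference Ernestus and Baayen attribute to speakers can be mimicked in a few lines. The sketch below guesses the underlying value of a novel final obstruent from majority counts in a toy lexicon; the counts are invented to run in the direction of the Dutch biases just described and are not the authors’ corpus figures.

```python
from collections import Counter

# Invented lexicon counts: how often a final obstruent of a given class
# is underlyingly voiced vs. voiceless. The numbers are toy values that
# mimic the reported Dutch biases (final labial stops mostly voiceless,
# labial fricatives mostly voiced); they are not corpus figures.
lexicon_counts = {
    ("labial", "stop"): Counter(voiceless=180, voiced=40),
    ("labial", "fricative"): Counter(voiceless=25, voiced=110),
}

def guess_underlying(place, manner):
    """Pick the majority underlying value for a novel final obstruent."""
    return lexicon_counts[(place, manner)].most_common(1)[0][0]

print(guess_underlying("labial", "stop"))       # -> voiceless
print(guess_underlying("labial", "fricative"))  # -> voiced
```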

Moreover, it has proven challenging for even well-designed experiments to control for issues like orthography. Warner et al. (2004, 2006) report a set of very low-level acoustic differences between underlying voiced and voiceless obstruents in Dutch, but then later follow up with new experiments showing that “incomplete neutralization may be entirely caused by orthographic differences” (Warner et al. 2006: 292).7 Indeed, Fourakis and Iverson (1984) noted that previous laboratory investigations of German final neutralization had found acoustic traces of lenis articulation in the orthographic d of und ‘and’ and the g of weg ‘away’. Both of these forms represent non-alternating and therefore uncontroversially phonemically fortis stops in modern German that happen only for historical reasons to be spelled with the lenis graphemes d and g rather than fortis t and k, respectively. Aside from extra-phonological influences such as these, studies of ordinary speech, including present-day Wisconsin English and many cases cited by Blevins and others, show that early stages in the historical development of final laryngeal neutralization are prone to being both variable and partial, parallel in some ways to patterns familiar from vocalic “near-mergers” (Di Paolo 1988; Labov 1994). Further, even complete neutralizations are often recoverable perceptually from the pragmatic context (morphological differences aside, German Rat ‘advice’ and Rad ‘wheel’ are unlikely to be confusable even if they occur in the same discussion) or on the basis of phonological gaps in lexical distribution. Thus, German morphophonemically fortis stops freely occur after lax vowels (Ecke ‘corner’, Beck ‘brook’), but lenis stops rarely do (Ebbe ‘ebb tide’, gib ‘give! (sg imp)’),8 and there are numerous underlying sequences of stem-final labial or velar lax stop preceded by a tense vowel (Lieb- ‘love’), but hardly any with an underlying fortis stop in this context. These kinds of patterns allow Piroth and Janker (2004: 99–100) to observe that, “Due to lexical and morphological structure there are only very few minimal pairs of alternating paradigms with underlying voiceless vs. underlying voiced final obstruents.” That is, the phonological contrast differentiates few homophones, so that there is little lexical competition in the sense of Blevins and Wedel (2009: esp. 169).

6 This is perhaps surprising, given a heavy cross-linguistic bias for languages to have voiceless fricatives and no voiced ones (Maddieson 1984: 52ff.). That is, frequency in cross-linguistic inventories patterns quite differently from the lexical patterns found within Dutch.
7 For a fuller range of evidence and discussion of the complex devoicing and other laryngeal phonology of Dutch, see van de Weijer and van der Torre (2007).

3.4 Summary

This section has surveyed some issues in the phonetics of final laryngeal distinctions, drawing especially on English, which maintains a distinction in almost all varieties and most contexts, and German, most varieties of which do not. First, we have argued here that phonetic cues to final laryngeal distinctions show remarkable complexity, and can change even as phonological contrasts remain stable in perception and production. Second, this informs the question of whether final neutralization is always complete or can be incomplete. At least when realized by feature deletion, complete phonological neutralization in ordinary speech appears to be observable both acoustically and perceptually, though recovery of the neutralized contrast is aided by considerations of pragmatics, skewed phonotactic distributions, spelling conventions, and lexical limitations on homophony.

4 The domains of final laryngeal neutralization and paths of change

Whether accomplished via feature removal or addition, final laryngeal neutralization is widely attested at all levels of the prosodic hierarchy, from utterance-final to phrase-final to word-final to syllable-coda position. In fact, the major route by which languages develop final laryngeal neutralization has been seen as running along that hierarchy, beginning with large units and moving downwards (e.g. Hock 1999; Blevins 2004, 2006). For feature-removing neutralization, universal physical and acoustic motivations make voicing particularly challenging at the end of breath groups. By definition, pulmonary pressure reaches its minimum for a given stretch of speech at the end of the breath group, though speakers can also control and even enhance this drop as part of speech. Lieberman (1967: 104) found that American English speakers lower subglottal air pressure during the last 150–200 msec of sentences, with acoustic effects including falling fundamental frequency. Decreased pressure exacerbates the inherent difficulties of voicing stops in particular, with further biases by place of articulation, as noted above.

8 In fact, speakers vary between a tense and a lax vowel in gib, and the standard reference work on standard pronunciation gives [giːp] for this form (Mangold 2005).


While neutralization at the end of longer stretches of speech is directly rooted in the physiology and physics of speech (pulmonary pressure and the aerodynamics of vocal fold vibration), movement down the prosodic hierarchy involves steadily broadening generalizations made by learners and speakers over generations, as argued by Iverson and Salmons (2009), Salmons (2010), and many others. Blevins (2006: 140–143) presents broad evidence from a wide range of families on historical paths of development of final laryngeal neutralization. “Early stages” of devoicing co-occur with prepausal or phrase-final position, and they may be variable, gradient, and sensitive to aerodynamic properties (like a preference for devoicing /g/), in addition to occurring only at the right edges of phrases (or presumably other longer stretches of speech). She posits an implicational hierarchy of such patterns, according to which languages may neutralize at the right edges of larger prosodic units and not at smaller ones, but never vice versa. For instance, numerous languages (Dhaasanac, Maltese, and some varieties of German) neutralize at the ends of words (and larger units), but not of syllables.9 The reverse pattern would of course be structurally incoherent: word-final position is coda position, and phrase-final position is word-final, for instance, so that coda neutralization necessarily carries over to the higher levels. As observed above, the historical development of glottalization is reported to parallel this path over prosodic domains closely, but it remains laryngeally non-neutralizing and often leads to the loss of final oral stops. Overall, these patterns reflect the historical paths of development laid out in the theory of Evolutionary Phonology (Blevins 2004, 2006): laryngeal neutralization by means of feature addition, especially if accompanied by mandatory release, appears to function as an edge marker, arising via release of final stops in salient prosodic positions, whereas neutralization via feature loss appears driven by mandatory or facultative absence of release. Even if the motivation is different, edge marking shows distributions similar to those of neutralization by feature removal. For example, in the Tundra Nenets case discussed above, glottal reinforcement occurs with all consonants prepausally. Some varieties of American English may be starting down this path today. As already noted, young speakers of Upper Midwestern English show nascent word-final devoicing. Purnell et al. (2009) present evidence that may reflect an earlier stage of this process: 2008 Republican vice-presidential candidate Sarah Palin showed variable final neutralization, with a preference for phrase-final position, i.e. what looks like a slightly earlier stage of development than that found now in Wisconsin. Palin was raised in an Alaskan community settled overwhelmingly by 1930s migrants from northern Wisconsin, Michigan, and Minnesota, and this colonial variety of Upper Midwestern English may preserve the earlier patterns.
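Blevins’s implicational generalization can be restated as a simple well-formedness condition on the set of prosodic domains at whose right edges a language neutralizes: the set must be “upward closed” along the hierarchy. The sketch below encodes that check; the domain labels and the example sets are ours, for illustration only.

```python
# Large -> small; the labels are our illustrative stand-ins for the
# prosodic hierarchy discussed in the text.
HIERARCHY = ["utterance", "phrase", "word", "syllable"]

def respects_hierarchy(neutralizing):
    """True if no smaller domain neutralizes unless every larger one does."""
    flags = [d in neutralizing for d in HIERARCHY]
    seen_gap = False
    for f in flags:
        if seen_gap and f:        # a smaller domain neutralizes past a gap
            return False
        seen_gap = seen_gap or not f
    return True

print(respects_hierarchy({"utterance", "phrase", "word"}))  # True: word-final but not syllable-final
print(respects_hierarchy({"syllable"}))                     # False: predicted unattested
```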

9 Blevins’s treatment does not involve detailed language histories, but see Mihm (2004) and Iverson and Salmons (2007) for the beginnings of a case study of German.

5 History and Universal Grammar in final laryngeal neutralization

In the preceding sections, we laid out two contrasting positions with regard to how to explain final laryngeal neutralization. Following the discussion in Kiparsky (2008), one position favors the view that grammatical structure constrains language change, and the other favors the view that language change is the primary shaper of grammar. The former is the classic generative position, illustrated on this issue by the work of Kiparsky, and the latter is associated with various approaches, most recently Evolutionary Phonology, illustrated by the work of Blevins, both discussed above. Kiparsky (2008: 52) concludes that:

The two programs can coexist without contradiction or circularity as long as we can make a principled separation between true universals, which constrain both synchronic grammars and language change, and typological generalizations, which are simply the results of typical paths of change.

We share that ecumenical spirit, and note that it can be difficult to find the seam between such true universals and typological generalizations. We have argued above that Kiparsky has proposed a “true universal” which does not ultimately hold up empirically. This suggests that Universal Grammar is leaner than has often been claimed, in line with many views emerging in the field today. The preceding sections aim to develop generalizations, some of which may prove to be “true universals,” while others will clearly be “typological generalizations.” In contrast to Kiparsky’s approach, Blevins (2008: 107) concludes that “Within the phonological realm, there appear to be few, if any, substantive universals,” with specific rejection of distinctive features as “substantive phonological universals,” treating them instead as “emergent properties” (see also chapter 17: distinctive features). Full justification would go well beyond our assigned task here, but we are not prepared to abandon the core substance of phonology. Above, we have relied on abstract featural characterizations, but argued that they must be considered in the context of phonetic variability and an array of psycholinguistic factors. History, internal and external, shapes contrasts and features through that context.

6 Summary and conclusions

In the foregoing, we started from a synthetic survey of what is known at present about final devoicing and laryngeal neutralization generally. Along the way, we have identified a number of phonological patterns based on the currently available evidence. Assuming privative laryngeal contrasts, neutralization can occur toward either a marked or an unmarked feature configuration, that is, by feature insertion or removal, respectively. Another form of right-edge marking, glottalization, appears to occur in languages for which [constricted] is not contrastive, thus without triggering laryngeal neutralization. These points all bear on current phonological discussions: about featural representations, the role of prosody and history in synchronic phonology, and the nature of neutralization in “weak” positions. All of these discussions represent areas of potential progress.


While we have kept a focus on phonological theory, we have done so in the context of phonetics (especially perception), sound change, and prosody, all of which, we argue, are critical to a full understanding of final neutralization. Several conclusions follow, including these:

(i) Featural characterization matters: The deletion of [voice], or final devoicing sensu stricto, is pervasive although not ubiquitous cross-linguistically. Addition of [voice] is rare to the point that some doubt its existence. The addition of [spread], or final fortition, is well attested, if potentially less common than devoicing. Deletion of [spread] (often with other features) is relatively common. Laryngeal realism, we have suggested, provides a robust typological generalization that would be impossible on the traditional interpretation of phonemically lenis stops as [+voice] and of phonemically aspirated stops as [−voice].

(ii) Phonetics, phonotactics, and other patterns matter: Laryngeal distinctions are carried by a wide range of phonetic cues, even within a single variety, or indeed a single speaker. Still, long-standing efforts to argue for “incomplete neutralization” in German or Dutch may reflect nothing about the acoustic signal, but much about the generalizations speakers/listeners are able to make based on their knowledge of phonotactics and lexical frequency. This is especially important because the phonetics of final laryngeal distinctions is particularly complex, a fact that has left open the possibility that some cue to the distinction may survive neutralization. Phonologically, however, the evidence indicates that final neutralization in languages like Dutch and German is typically complete.

(iii) Both structure and history matter: Both the “design” of language and historical forces play significant roles in synchronic sound patterns. In the particular example at hand, some claims about the role of language design have proven to be overstated, but such refinements are the work of healthy science.

As noted in §2, numerous scholars have treated “final devoicing” and its relatives as particular forms of final weakening, synchronic or diachronic. Research to date on final laryngeal neutralization, however, reveals widespread patterns of feature addition and fortition as well. As with laryngeal realism itself, we would suggest that final neutralizations can come about either by weakening, through feature loss, or by strengthening, through feature addition, including glottal reinforcement in languages where glottalization is not contrastive. This presentation raises a number of new questions, especially on the typological front. The data we are aware of suggest some striking and as yet unexplored patterns, like the tendency of [spread] to delete finally together with other laryngeal features (Sanskrit), while [voice] appears able to delete independently (Dhaasanac). The answers to questions about such patterns will doubtless sharpen our understanding of the subtle interactions between the human linguistic endowment and the historical patterns we are presented with as learners.

ACKNOWLEDGMENTS

We thank the editors for the invitation to contribute to this project and an anonymous reviewer for helpful comments. Pattie Epps, Beth Hume, José Hualde, Sadaf Munshi, Marc van Oostendorp, Tom Purnell, Eric Raimy, Blake Rodgers, and Alan Yu all provided helpful discussions on this topic and in some cases comments on earlier versions of this chapter. The usual disclaimers apply.

REFERENCES

Ahn, Sang-Cheol & Gregory K. Iverson. 2004. Dimensions in Korean laryngeal phonology. Journal of East Asian Linguistics 13. 345–379.
Avery, Peter & William J. Idsardi. 2001. Laryngeal dimensions, completion and enhancement. In Hall (2001), 41–70.
Avery, Peter & William J. Idsardi. Forthcoming. Laryngeal phonology. Cambridge: Cambridge University Press.
Barnes, Jonathan. 2002. Positional neutralization: A phonologization approach to typological patterns. Ph.D. dissertation, University of California, Berkeley.
Becker, Michael, Nihan Ketrez & Andrew Nevins. 2008. The surfeit of the stimulus: Analytic biases filter lexical statistics in Turkish devoicing neutralization. Unpublished ms., Reed College, Yale University & Harvard University (ROA-1001).
Blevins, Juliette. 1993. Klamath laryngeal phonology. International Journal of American Linguistics 59. 237–279.
Blevins, Juliette. 2004. Evolutionary Phonology: The emergence of sound patterns. Cambridge: Cambridge University Press.
Blevins, Juliette. 2006. A theoretical synopsis of Evolutionary Phonology. Theoretical Linguistics 32. 117–166.
Blevins, Juliette. 2008. Consonant epenthesis: Natural and unnatural histories. In Good (2008), 79–107.
Blevins, Juliette & Andrew Wedel. 2009. Inhibited sound change: An evolutionary approach to lexical competition. Diachronica 26. 143–183.
Brockhaus, Wiebke. 1995. Final devoicing in the phonology of German. Tübingen: Niemeyer.
Buckley, Eugene. 1994. Theoretical aspects of Kashaya phonology and morphology. Stanford: CSLI.
Campbell, Lyle. 1998. Historical linguistics: An introduction. Cambridge, MA: MIT Press.
Charles-Luce, Jan & Daniel A. Dinnsen. 1987. A reanalysis of Catalan devoicing. Journal of Phonetics 15. 187–190.
Di Paolo, Marianne. 1988. Pronunciation and categorization in sound change. In Kathleen Ferrara, Becky Brown, Keith Walters & John Baugh (eds.) Linguistic change and contact: Proceedings of the 16th Annual Conference on New Ways of Analyzing Variation in Language, 84–92. Austin: University of Texas.
Dinnsen, Daniel A. & Jan Charles-Luce. 1984. Phonological neutralization, phonetic implementation and individual differences. Journal of Phonetics 12. 49–60.
Driem, George van. 1985. A grammar of Limbu. Berlin: Mouton de Gruyter.
Driem, George van. 1993. A grammar of Dumi. Berlin: Mouton de Gruyter.
Epps, Patience. 2008. A grammar of Hup. Berlin & New York: Mouton de Gruyter.
Ernestus, Mirjam & R. Harald Baayen. 2003. Predicting the unpredictable: Interpreting neutralized segments in Dutch. Language 79. 5–38.
Ernestus, Mirjam & R. Harald Baayen. 2007. Intraparadigmatic effects on the perception of voice. In van de Weijer & van der Torre (2007), 153–172.
Esling, John H., Katherine E. Fraser & Jimmy G. Harris. 2005. Glottal stop, glottalized resonants, and pharyngeals: A reinterpretation with evidence from a laryngoscopic study of Nuuchahnulth (Nootka). Journal of Phonetics 33. 383–410.
Fallon, Paul D. 2002. The synchronic and diachronic phonology of ejectives. New York: Routledge.
Feizollahi, Zhaleh. 2010. Does Turkish implement a two-way voicing contrast as prevoiced vs. voiceless aspirated? Paper presented at the 84th Annual Meeting of the Linguistic Society of America, Baltimore.
Fourakis, Marios & Gregory K. Iverson. 1984. On the “incomplete neutralization” of German final obstruents. Phonetica 41. 140–149.
Gamkrelidze, Thomas V. 1975. On the correlation of stops and fricatives in a phonological system. Lingua 35. 231–361.
Good, Jeff (ed.) 2008. Linguistic universals and language change. Oxford: Oxford University Press.
Gurevich, Naomi. 2004. Lenition and contrast: The functional consequences of certain phonetically conditioned sound changes. New York & London: Routledge.
Hall, T. A. (ed.) 2001. Distinctive feature theory. Berlin & New York: Mouton de Gruyter.
Harris, John. 2009. Why final obstruent devoicing is weakening. In Kuniya Nasukawa & Phillip Backley (eds.) Strength relations in phonology, 9–46. Berlin & New York: Mouton de Gruyter.
Haspelmath, Martin. 1993. A grammar of Lezgian. Berlin: Mouton de Gruyter.
Henton, Caroline, Peter Ladefoged & Ian Maddieson. 1992. Stops in the world’s languages. Phonetica 49. 65–101.
Hock, Hans Henrich. 1999. Finality, prosody, and change. In Osamu Fujimura, Brian D. Joseph & Bohumil Palek (eds.) Proceedings of LP ’98: Item order in language and speech, 15–30. Prague: Karolinum Press.
Honeybone, Patrick. 2005. Sharing makes us stronger: Process inhibition and segmental structure. In Philip Carr, Jacques Durand & Colin J. Ewen (eds.) Headhood, elements, specification and contrastivity, 167–192. Amsterdam & Philadelphia: John Benjamins.
Honeybone, Patrick. Forthcoming. Theoretical historical phonology: Lenition, laryngeal realism, and Germanic obstruent shifts. Oxford: Oxford University Press.
Howe, Darin & Edward Pulleyblank. 2001. Patterns and timing of glottalization. Phonology 18. 45–80.
Hualde, José Ignacio. 1992. Catalan. London & New York: Routledge.
Hyman, Larry M. 1988. The phonology of final glottal stops. Proceedings of the Western Conference on Linguistics 1. 113–130.
Iverson, Gregory K. & Joseph C. Salmons. 1995. Aspiration and laryngeal representation in Germanic. Phonology 12. 369–396.
Iverson, Gregory K. & Joseph C. Salmons. 2003. Legacy specification in the laryngeal phonology of Dutch. Journal of Germanic Linguistics 15. 1–26.
Iverson, Gregory K. & Joseph C. Salmons. 2006. On the typology of final laryngeal neutralization: Evolutionary Phonology and laryngeal realism. Theoretical Linguistics 32. 205–216.
Iverson, Gregory K. & Joseph C. Salmons. 2007. Domains and directionality in the evolution of German final fortition. Phonology 24. 121–145.
Iverson, Gregory K. & Joseph C. Salmons. 2009. Naturalness and the lifecycle of sound change. In Patrick Steinkrüger & Manfred Krifka (eds.) On inflection: In memory of Wolfgang U. Wurzel, 89–105. Berlin: Mouton de Gruyter.
Janhunen, Juha. 1986. Glottal stop in Nenets. Helsinki: Suomalais-Ugrilainen Seura.
Jassem, Wiktor & Lutoslawa Richter. 1989. Neutralization of voicing in Polish obstruents. Journal of Phonetics 17. 317–325.
José, Brian D. 2009. Testing the apparent time construct in a young community: Steel City speech in and around Gary, Indiana on its 100th birthday. Ph.D. dissertation, Indiana University.
Kim, Hyunsoon & Allard Jongman. 1996. Acoustic and perceptual evidence for complete neutralization of manner of articulation in Korean. Journal of Phonetics 24. 295–312.
Kingston, John, Randy L. Diehl, Cecilia J. Kirk & Wendy A. Castleman. 2008. On the internal perceptual structure of distinctive features: The [voice] contrast. Journal of Phonetics 36. 28–54.
Kiparsky, Paul. 2006. The amphichronic program vs. evolutionary phonology. Theoretical Linguistics 32. 217–236.

Kiparsky, Paul. 2008. Universals constrain change; change results in typological generalizations. In Good (2008), 23–53.
Kopkallı, Handan. 1993. A phonetic and phonological analysis of final devoicing in Turkish. Ph.D. dissertation, University of Michigan.
Kümmel, Martin Joachim. 2007. Konsonantenwandel: Bausteine zu einer Typologie des Lautwandels und ihre Konsequenzen für die vergleichende Rekonstruktion. Wiesbaden: Reichert.
Labov, William. 1994. Principles of linguistic change, vol. 1: Internal factors. Oxford: Blackwell.
Ladefoged, Peter. 1973. The features of the larynx. Journal of Phonetics 1. 73–84.
Lieberman, Philip. 1967. Intonation, perception and language. Cambridge, MA: MIT Press.
Lisker, Leigh. 1986. “Voicing” in English: A catalogue of acoustic features signaling /b/ versus /p/ in trochees. Language and Speech 29. 3–11.
Lombardi, Linda. 2001. Why Place and Voice are different: Constraint-specific alternations in Optimality Theory. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 13–45. Cambridge: Cambridge University Press.
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University Press.
Mangold, Max. 2005. Duden Aussprachewörterbuch. 6th edn. Mannheim: Duden.
McFarland, Teresa. 2007. Glottal epenthesis at domain edges in Filomeno Mata Totonac. Paper presented at the International Conference on Totonac-Tepehua Languages, Banff. Available (June 2010) at www.arts.ualberta.ca/~totonaco/ICTTL.html.
Michaud, Alexis. 2004. Final consonants and glottalization: New perspectives from Hanoi Vietnamese. Phonetica 61. 119–146.
Mihm, Arend. 2004. Zur Geschichte der Auslautverhärtung und ihrer Erforschung. Sprachwissenschaft 29. 133–206.
Munshi, Sadaf. 2006. Jammu and Kashmir Burushaski: Language, language contact, and change. Ph.D. dissertation, University of Texas, Austin.
Nicolae, Andreea & Andrew Nevins. 2009. The phonetics and phonology of fricative non-neutralization in Turkish. Paper presented at the 40th Annual Meeting of the North East Linguistic Society, MIT.
Orwin, Martin. 1993. Phonation in Somali phonology. In Mohamed Abdi (ed.) Anthropologie Somalienne, 251–257. Besançon: University of Besançon.
Piroth, Hans Georg & Peter M. Janker. 2004. Speaker-dependent differences in voicing and devoicing of German obstruents. Journal of Phonetics 32. 81–109.
Port, Robert F. & Michael O’Dell. 1985. Neutralization of syllable-final voicing in German. Journal of Phonetics 13. 455–471.
Purnell, Thomas C., Eric Raimy & Joseph C. Salmons. 2009. Defining dialect, perceiving dialect and new dialect formation: Sarah Palin’s speech. Journal of English Linguistics 37. 331–355.
Purnell, Thomas C., Joseph C. Salmons & Dilara Tepeli. 2005a. German substrate effects in Wisconsin English: Evidence for final fortition. American Speech 80. 135–164.
Purnell, Thomas C., Joseph C. Salmons, Dilara Tepeli & Jennifer Mercer. 2005b. Structured heterogeneity and change in laryngeal phonetics: Upper Midwestern final obstruents. Journal of English Linguistics 33. 307–338.
Rice, Keren. 2009. Nuancing markedness: A place for contrast. In Eric Raimy & Charles Cairns (eds.) Contemporary views on architecture and representations in phonology, 311–321. Cambridge, MA: MIT Press.
Rodgers, Blake, Thomas C. Purnell & Joseph C. Salmons. 2010. Harmonic energy at vowel offset as a cue to post-vocalic voicing contrasts in American English. Unpublished ms., University of Wisconsin, Madison.
Rooy, Bertus van & Daan Wissing. 2001. Distinctive [voice] implies regressive voicing assimilation. In Hall (2001), 295–334.


Rubach, Jerzy. 1997. Extrasyllabic consonants in Polish: Derivational Optimality Theory. In Iggy Roca (ed.) Derivations and constraints in phonology, 551–581. Oxford: Clarendon Press.
Salminen, Tapani. 1997. Tundra Nenets inflection. Helsinki: Suomalais-Ugrilainen Seura.
Salmons, Joseph C. 2010. Segmental phonological change. In Vit Bubenik & Silvia Luraghi (eds.) A companion to historical linguistics, 89–105. London & New York: Continuum.
Silverman, Daniel. 2006. A critical introduction to phonology: Of sound, mind, and body. London & New York: Continuum.
Slowiaczek, Louisa M. & Daniel A. Dinnsen. 1985. On the neutralizing status of Polish word-final devoicing. Journal of Phonetics 13. 325–341.
Syeed, Syed Mohammad. 1978. The Himalayan way of breathing the last: On the neutralizing status of the word-final aspiration in Kashmiri. The Eastern Anthropologist 31. 531–541.
Tieszen, Bozena. 1997. Final stop devoicing in Polish: An acoustic and historical account for incomplete neutralization. Ph.D. dissertation, University of Wisconsin, Madison.
Tosco, Mauro. 2001. The Dhaasanac language: Grammar, texts, vocabulary of a Cushitic language of Ethiopia. Cologne: Rüdiger Köppe Verlag.
Trubetzkoy, Nikolai S. 1977. Grundzüge der Phonologie. 7th edn. Göttingen: Vandenhoeck & Ruprecht.
Vaux, Bert & Bridget Samuels. 2005. Laryngeal markedness and aspiration. Phonology 22. 395–436.
Warner, Natasha, Erin Good, Allard Jongman & Joan A. Sereno. 2006. Orthographic vs. morphological incomplete neutralization effects. Journal of Phonetics 34. 285–293.
Warner, Natasha, Allard Jongman, Joan A. Sereno & Rachèl Kemps. 2004. Incomplete neutralization and other sub-phonemic durational differences in production and perception: Evidence from Dutch. Journal of Phonetics 32. 251–276.
Weijer, Jeroen van de & Erik Jan van der Torre (eds.) 2007. Voicing in Dutch: (De)voicing – phonology, phonetics, and psycholinguistics. Amsterdam & Philadelphia: John Benjamins.
Westbury, John R. & Patricia Keating. 1986. On the naturalness of stop consonant voicing. Journal of Linguistics 22. 145–166.
Yu, Alan C. L. 2004. Explaining final obstruent voicing in Lezgian: Phonetics and history. Language 80. 73–97.
Yu, Alan C. L. 2008. The phonetics of quantity alternation in Washo. Journal of Phonetics 36. 508–520.

70 Conspiracies

Charles W. Kisseberth

1 Conspiracies: The essential argument

The paper “On the functional unity of phonological rules” (Kisseberth 1970; henceforth FUPR) made what in essence is a very simple argument. It claimed that in the phonologies of the world’s languages, it is often the case that there are phonological structures that are either barred or required, and that (from a standard generative phonology point of view) multiple rules may be involved in guaranteeing that these structures are avoided or achieved. This observation seems to be undeniably accurate. FUPR, however, went further and suggested that it was not sufficient simply to recognize this truth about the world’s phonologies, but that somehow (a) these barred/required structures should be an explicit part of the phonological system of a given language, and (b) grammars that utilize multiple means to achieve/avoid a certain structure are not to be viewed as more complex than grammars that use fewer rules for the same end. As we shall discuss below, these claims are not consistent with the prevailing notion in generative phonology that all significant linguistic generalizations are expressible in terms of simplifications in the formal system of rules and representations. FUPR suggested that there was instead a functional aspect to phonological rules that eluded the formal approach of early generative phonology.

It should be emphasized that the use of the term “functional” in FUPR is distinct from later usage, where “functional” refers to the idea that (a) phonological phenomena are motivated by, i.e. grounded in, phonetic considerations (summarized in Boersma 1997 as the “minimization of articulatory effort and maximization of perceptual contrast”), as well as the idea that (b) notions such as contrast, paradigmatic considerations, and frequency may shape the phonological grammar. FUPR emphasized the existence of avoided/preferred structures, but not what factors make the structure in question good or bad, nor what factors favor one repair over another.

The idea that phonological rules “conspire” to avoid/achieve a given phonological structure is one that had been suggested to me originally by Haj Ross and George Lakoff on the basis of their syntactic work (as well as that of David Perlmutter). A close reading of major pre-generative linguists (particularly Boas and Sapir) seemed to support this notion, as these linguists not infrequently linked a particular phonological phenomenon (e.g. so-called “inorganic” vowels) to some claimed limitation on phonological structure. It was, however, Morris Halle who wisely suggested to me that I take a close look at the Yawelmani dialect of Yokuts if I wished to push this line of thought.

Yawelmani has, over the decades, been a point of reference for almost every approach to the essential problems of phonology. Newman (1944) provided the initial detailed description of Yawelmani (as well as three other dialects of the Yokuts language). His description of the language was in the tradition of Sapir. Harris (1944) and Hockett (1967, 1973) looked at the Yawelmani data from the point of view of American structuralism, and Kuroda (1967) reworked Newman’s analysis in terms of standard generative phonology. Later, Archangeli (1984 and subsequent work) applied the principles of underspecification and non-linear phonology to Yawelmani. In the optimality-theoretic literature, Yawelmani has been critical to the analysis of opacity (cf. Cole and Kisseberth 1995; McCarthy 2007). FUPR examined a number of the essential aspects of Yawelmani phonology, but, while not differing radically from Kuroda’s analysis, drew a conclusion that was out of the mainstream. What follows is a detailed summary of FUPR’s account of Yawelmani.

Yawelmani words, in their surface form, consist of a sequence of syllables of the shape CV, CVV, CVC (where VV = long vowel). Thus all words begin with a single consonant, and they may end either in a vowel or a single consonant. Internal to the word, vowels do not occur in succession: there is at least one and at most two consonants located between vowels. Vowels may be long or short, but when a vowel stands in the environment __ CC or __ C#, it can only be short. These observations lead to the conclusion that Yawelmani bans syllables with complex margins (i.e. consonant clusters in either onset or coda position) as well as trimoraic syllables. In the timeframe of FUPR, syllables played no role in generative phonology (see chapter 33: syllable-internal structure). As a consequence, all of the statements about phonological structures and all the rules formulated referred only to sequences of consonants and vowels. The discussion below follows the presentation in FUPR, but also translates it into a syllable-based analysis.

Underlying representations in Yawelmani are shaped in part by the above limitations on surface structure (chapter 1: underlying representations). There are no prefixes in Yawelmani, so stems always occupy word-initial position. All stems in Yawelmani have an initial consonant. Thus there is no need to have a rule to insert a consonant at the beginning of vowel-initial words. On the other hand, while no stem contains a sequence of three consonants, some stems do end in two consonants. When such stems are followed by a consonant-initial suffix, we have a situation where the ban on triconsonantal clusters (or complex margins) is endangered. An epenthetic vowel is introduced between the first two consonants in the three-consonant sequence (chapter 67: vowel epenthesis). This vowel epenthesis phenomenon is illustrated by the data in (1). (We have supplemented the data cited in FUPR for purposes of exposition.)

(1)
    Underlying form of stem    Stem + aorist /-hin/
    ʔilk                       ʔilik-hin      ‘sing’
    ʔutj                       ʔutuj-hun      ‘fall’
    logw                       logiw-hin      ‘pulverize’
    ʔajj                       ʔajij-hin      ‘pole a boat’


The stems in (1) appear in their underlying form when followed by a vowel-initial suffix (e.g. [ʔilk-al] ‘might sing’, [logw-ol] ‘might pulverize’, [ʔajj-al] ‘might pole a boat’, etc.), but with an epenthetic [i] when followed by a consonant-initial suffix like the aorist /-hin/. If these stems appear in word-final position (e.g. in the imperative), then a vowel is also epenthesized between the two consonants that stand in word-final position (due to the fact that a word can only end in a single consonant, as a consequence of the ban on complex codas). Thus the stem /ʔilk/ will appear as [ʔilik] if not followed by a suffix. The epenthetic vowel in Yawelmani is the high front unround vowel [i], though this vowel will appear as [u] when preceded by a high round vowel (cf. [ʔutuj-hun]). In FUPR, the rule in (2) is postulated for Yawelmani:

(2) Vowel Epenthesis: Ø → i / C __ C {#, C}
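
As a concrete rendering of rule (2), together with the rounding of the epenthetic vowel noted above, consider the following sketch. The flat-string representation (long vowels written double, morpheme boundaries dropped) and the function names are our simplifying assumptions, not part of FUPR’s formalism.

```python
VOWELS = set("aiou")  # segments outside this set count as consonants (ʔ, j, w, ...)

def epenthesize(form):
    """Rule (2): Ø -> i / C __ C {C, #}, scanning left to right.
    Morpheme boundaries ('-') are dropped for simplicity."""
    segs = list(form.replace("-", ""))
    i = 1
    while i < len(segs):
        c1 = segs[i - 1] not in VOWELS                        # C __
        c2 = segs[i] not in VOWELS                            # __ C
        c3 = i + 1 >= len(segs) or segs[i + 1] not in VOWELS  # {C, #}
        if c1 and c2 and c3:
            segs.insert(i, "i")  # break up the offending cluster
        i += 1
    return harmonize("".join(segs))

def harmonize(form):
    """[i] surfaces as [u] after a preceding high round vowel."""
    out, last_v = [], None
    for s in form:
        if s in VOWELS:
            if s == "i" and last_v == "u":
                s = "u"
            last_v = s
        out.append(s)
    return "".join(out)

for stem in ["ʔilk", "ʔutj", "logw", "ʔajj"]:
    print(stem + "-hin", "->", epenthesize(stem + "-hin"))
print("ʔilk (unsuffixed) ->", epenthesize("ʔilk"))  # word-final case
```

Run on the data in (1), this yields ʔilikhin, ʔutujhun, logiwhin, and ʔajijhin, matching the attested suffixed forms, and ʔilik for the unsuffixed stem.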

These epenthetic vowels are clearly a means to avoid complex syllable margins. But Vowel Epenthesis is not the sole method for avoiding complex margins in Yawelmani. Specifically, there are two morphologically restricted cases where one of the consonants is deleted. One case involves two suffixes that have an initial consonant cluster (in both cases, the first consonant in the cluster is a laryngeal), where the first of these consonants is deleted in position after a consonant-final stem. For example, the suffix /-hnil-/ elides its initial consonant after a consonant-final stem like /gitiin-/ ‘to hold under the arm’ (this morpheme sequence occurs in the noun ‘armpit’, and there is other morphology and irrelevant phonology involved in the final form of this noun, [giten-nel-a-w]). The available data are not sufficient to make it clear whether the two suffixes in question are at all productive. The rule given in (3), if ordered before Vowel Epenthesis, will correctly delete the initial consonant of /-hnil-/:

(3) C → Ø / C + __ C (in fact, the only consonants that occur in the environment C + __ C are [ʔ h])

The second case of consonant deletion occurs as an aspect of the phonology of certain suffixes which trigger moraic reduction in a preceding stem (in Newman’s terminology, these suffixes require the “zero” form of a stem). For instance, stems with three consonants (e.g. /halaal-/ ‘lift up’) are converted to the shape CVCC- (e.g. [hall-]) in front of a suffix such as /-hatin/. This suffix, in turn, will elide its initial consonant, due to the prohibition against three-consonant sequences (i.e. complex margins). Again, we are not aware of whether a suffix such as /-hatin/ is productive. The rule given in (4), if ordered before Vowel Epenthesis, will correctly delete the initial consonant of the specified suffixes:

(4) C → Ø / CC + __ (affects only suffixes such as /-hatin/ that trigger the so-called “zero stem”)

It is clear that both of these restricted deletion rules are functionally related to the vowel epenthesis phenomenon: they guarantee that an input that potentially violates the ban on triliteral consonant clusters and word-final consonant clusters (i.e. complex syllable margins) will in fact obey this constraint on the surface (chapter 68: deletion). Of course, all the potential violations of the ban on complex margins could be avoided by the single phenomenon of Vowel Epenthesis. The consonant deletion processes in (3) and (4) are not necessary in order to secure an outcome where there are no complex margins. However, what the existence of conspiracies tells us is that (in a rule-based approach to phonology) languages do not always opt to use the fewest number of rules possible to avoid an offending phonological structure.

Rules (2)–(4) constitute only one part of the Yawelmani conspiracy against complex margins. There is evidence to motivate the postulation of a rule in Yawelmani that elides a short vowel (either /i/ or /a/) in the environment VC __ CV. This rule can explain both cases where vowels in verb suffixes are elided, and also aspects of the nominal case system. Specifically, it deletes what Newman referred to as the “protective” vowel /a/ in a structure like /k’iliij + a + ni/ ‘cloud (indirect objective)’ but not in a structure like /puulm + a + ni/ ‘husband (indirect objective)’. In an example like /k’iliij + a + ni/, deletion of /a/ yields an output where there is only one consonant in onset position and one consonant in coda position. On the other hand, there is no deletion of /a/ in /puulm + a + ni/, due to the fact that such a deletion would yield an output with three consonants in a row: [lmn], a sequence that would require either a complex onset or a complex coda. The rule in (5) achieves the correct result:

(5) V → Ø / VC __ CV

Rule (5) does not bear any formal similarity to either the vowel epenthesis rule in (2) or the consonant deletion rules in (3) and (4). However, despite this lack of formal similarity, there is an obvious functional similarity: (5) deletes a vowel unless to do so would create violations of the ban on complex margins. Deleting a vowel is of course the opposite of inserting one from a formal point of view, but both actions, along with consonant deletion, reveal the overarching principle that complex margins are not permitted in Yawelmani. There is yet one more aspect to the Yawelmani complex margin conspiracy. Word-final verbal suffixes of the shape -CV elide their vowel when preceded by a vowel-final stem but not a consonant-final stem. In other words, the vowel of these suffixes will elide unless its elision would produce a violation of the ban on complex margins. Thus the imperative suffix /-k’a/ loses its vowel after the stem /taxaa-/ ‘take’, yielding [taxa-k’], but no elision takes place after the stem /xat-/ ‘eat’, yielding [xat-k’a] rather than *[xat-k’], with a complex coda. This second vowel deletion rule is formulated in (6):

(6) V → Ø / V + C __ #

There is of course some formal similarity between (5) and (6), since both are vowel deletion rules and only apply in the event a VC structure precedes. However, since (6) is restricted to word-final vowels that are part of a CV suffix, there is not identity even in terms of the preceding structure: i.e. VC __ in (5) but V + C __ in (6).

We have now discussed the rules in Yawelmani that implement the ban on complex margins: (2)–(6). These rules, however, do not tell the entire story. In the standard generative phonology that prevailed at the time FUPR was written, it was proposed that morpheme structure conditions (chapter 86: morpheme structure constraints) served to restrict the shapes of morphemes in underlying representation. In Yawelmani, there are no morphemes that contain a sequence of three successive consonants. In other words, structures are avoided inside a morpheme if they could only be syllabified by creating a complex margin. This aspect of conspiracies was developed at greater length in Kenstowicz and Kisseberth (1977), under the rubric of the “duplication problem.” Specifically, it was noted that in theories where there are both “morpheme structure conditions” and also phonological rules, it is often necessary to repeat the same generalization both as a morpheme structure condition and also as a feature-changing rule. For example, underlying representations of morphemes may disallow an NC sequence where the nasal and the consonant are not of the same point of articulation, but a rule may still be required that converts a morpheme-final nasal to be homorganic with a consonant in initial position in the next morpheme (chapter 81: local assimilation). As long as there is a morpheme structure component of the phonology that is distinct from the rules that account for alternations in the shapes of morphemes, then there will be conspiracies whereby structures that are banned in the underlying representations of morphemes will trigger morphophonemic alternations as well.

To summarize: in Yawelmani, a morpheme structure ban on triconsonantal sequences prevents morphemes from having offensive segmental material to begin with. A rule of vowel epenthesis, as well as two minor rules of consonant deletion, prevents violations from occurring at the juncture of morphemes. Two vowel deletion phenomena are constrained in a fashion that prevents the creation of complex margins.

But having pointed out this conspiracy, what are we to make of it? FUPR suggested that although there is no way in which a formal unity can be found for these various rules, nevertheless the ban on triliteral and word-final consonant clusters (i.e. the ban on complex margins) should be part of the phonological grammar of Yawelmani. However, in standard generative phonology, a phonology is a set of representations and a set of ordered rules that derive a surface form from an input form. A ban on complex margins cannot be part of the phonology unless it participates in the derivation of surface forms from input forms; otherwise it is simply a useless appendage that has no basis for existence in a formal system. In an attempt to find some way to make the ban on complex margins a part of the phonology, FUPR suggested that bans of this sort might function as derivational constraints. The idea of a derivational constraint is this. Suppose that we formulate rule (5) as (7):

(7) A word-medial (short) vowel elides.

Say that the application of this rule fails if its immediate output would violate the ban on triliteral or word-final consonant clusters (i.e. complex margins). This derivational constraint would allow a word-medial vowel to delete only if deletion does not produce an illicit structure. Of course, this notion of derivational constraint would radically alter the way in which rules apply, but it would mean that a constraint like the ban on complex margins in Yawelmani would have an actual role to play in derivations.
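The mechanics of a derivational constraint can be made concrete as follows. In this sketch, forms are lists of segments (long vowels as single “ii”-type tokens), and rule (7) is tried at each word-medial short vowel but allowed to apply only when the immediate output respects the ban; the segment encoding and the exact content of violates_ban reflect our reading of “complex margins,” not a formalization from FUPR.

```python
SHORT_VOWELS = {"a", "i", "o", "u"}
LONG_VOWELS = {v + v for v in SHORT_VOWELS}

def is_c(seg):
    return seg not in SHORT_VOWELS and seg not in LONG_VOWELS

def violates_ban(segs):
    """Our reading of the ban on complex margins: no word-initial CC,
    no word-final CC, and no CCC anywhere in the word."""
    cs = [is_c(s) for s in segs]
    edge_cc = len(cs) >= 2 and ((cs[0] and cs[1]) or (cs[-1] and cs[-2]))
    triple = any(cs[i] and cs[i + 1] and cs[i + 2] for i in range(len(cs) - 2))
    return edge_cc or triple

def elide(segs):
    """Rule (7) under the derivational constraint: a word-medial short
    vowel deletes only if the immediate output still obeys the ban."""
    i = 1
    while i < len(segs) - 1:
        if segs[i] in SHORT_VOWELS:
            candidate = segs[:i] + segs[i + 1:]
            if not violates_ban(candidate):
                segs = candidate
                continue  # re-inspect the same position
        i += 1
    return segs

# /k'iliij + a + ni/: protective /a/ deletes ([jn] is a licit C.C juncture)
print("".join(elide(["k'", "i", "l", "ii", "j", "a", "n", "i"])))  # k'iliijni
# /puulm + a + ni/: deletion blocked, since *[lmn] would force a complex margin
print("".join(elide(["p", "uu", "l", "m", "a", "n", "i"])))        # puulmani
```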


The notion of derivational constraints, however, is only a very partial account of the Yawelmani complex margin conspiracy. It is not evident how the ban on complex margins would play any role in the derivation of words where offending structures arise across morpheme boundaries. Because the notion of a derivational constraint did not solve the conspiracy problem, it did not play a significant role in phonology until considerably later.

2 Conspiracies: An historical overview

In order to fully understand the argument made in FUPR, one must begin with perhaps the central concern of early generative phonology: specifically, the question of how a language learner deduces the correct grammar from the data to which the language learner is exposed. The rough answer to this question that was advanced was that the learner adopts the “simplest” grammar. Simplicity, at least in terms of phonology, was taken to be determined by reference to the counting (particularly) of feature specifications both in rules and in lexical representations. A critical part of this enterprise was to design grammars so that the phonological patterns most commonly found in languages could be expressed in a simple fashion, while patterns that were never found in languages could be expressed only under great duress. An essential element of this enterprise was the building of a system of notation that would allow what the linguist understood to be the “same” or “related” phenomena to be subsumed under a single rule (a rule that, while covering the observed data, would often go beyond those data to make predictions about data the learner may not have encountered). (See Chomsky and Halle 1968 (SPE) for an extended discussion of this point of view. These ideas can be found throughout the entire early generative phonology literature.)

What FUPR showed was that in synchronic grammars one could find cases where obviously related phenomena could not be given a unitary treatment from the point of view of any available formal notation, because they were related not in terms of their actions (insertions, deletions, feature changes, etc.), but rather in terms of the structural configurations that they either avoided or strove to achieve. As long as phonology was viewed as a theory of rules which married phonological actions to specific phonological contexts, a solution to the problem of conspiracies was impossible. The one partial solution suggested – derivational constraints – was a tentative step in the direction of separating the action (vowel deletion, in the Yawelmani case) from the context in which it occurs. It was, of course, not until the development of Optimality Theory (cf. Prince and Smolensky 1993) that a total solution emerged.

Although the FUPR paper itself focused on conspiracies in the synchronic grammars of specific languages, the paper developed out of my (ultimately unsuccessful) attempt to extend the ideas of “Chapter Nine” of Chomsky and Halle (1968) by developing a notion of “universal rules.” What Chapter Nine attempted to do was to make it formally simpler for a grammar to conform to what is “natural” than for it to go against what is most natural. In terms of phonological representations, it did this by creating a system where an unmarked feature value was cost-free, while a marked feature value rendered the representation a more costly one. The consequence was that whenever possible, an underlying representation would contain an unmarked value rather than a marked value (since generative phonology claimed that the least costly grammar was always chosen over the more costly grammar if both grammars yielded the correct outputs). Chapter Nine went further, however, and attempted to extend the idea of making unmarked specifications cost-free when formulating phonological rules. However, it found only a very restricted way of achieving this goal. In particular, it proposed that if a phonological rule specifies a particular structural change, then markedness principles come into play to add other changes that follow naturally. For instance, Yawelmani has a vowel harmony rule whereby a vowel becomes round when preceded by a round vowel of the same height. This vowel harmony rule affects the vowel /i/ when it stands after the vowel /u/. However, when /i/ rounds, it also becomes back and surfaces as [u]. Chapter Nine suggested that in a case such as this, the vowel harmony rule simply specifies that a vowel acquires the feature [+round] and then markedness principles will automatically add the feature [+back].

In Chapter Nine, the only way that markedness considerations could play a role in “simplifying” the grammar was through this device of “linking” a natural structural change (e.g. the backing of a round vowel) to a language-specific rule (e.g. vowel harmony). As such, Chapter Nine had a proposal only for the case where a secondary structural change is natural given some primary structural change. It did not have an explanation for why the same primary changes occur under similar conditions in language after language (e.g. nasal assimilation, epenthesis of onsets, lenition). In an earlier version of Kisseberth (1969), I attempted to develop (but later abandoned) the idea that grammars contain a set of universal rules (cost-free, so to speak). This proposal was motivated by the recognition that no matter how much tinkering one did with the system of notation, it would always be possible to state very simple rules that have linguistically implausible consequences. As a consequence, generative phonology’s attempt to make more natural rules “easy” to formulate and less natural rules more difficult seemed fundamentally flawed. The principal difficulties that the search for universal rules faced at the end of the 1960s included:

(i) In a given language, a very natural phonological process accrues a significant number of language-specific restrictions that reduce the generality of the rule needed to account for the phenomenon; it was unclear how to separate the essence of a rule from all the baggage required to properly delimit its scope of application.

(ii) The conspiracy problem. Specifically, there are multiple distinct actions (insertions, deletions, feature changes, sequencing changes) that are triggered by essentially the same context. Since a given language may utilize several rules to avoid/achieve a certain structural configuration, the scope of application of each rule must be delimited, obscuring the universal nature of the rule.

(iii) If “universal” means “in every language,” why are there languages where these (proposed) universal rules are not in fact implemented in all cases, or indeed at all?

Optimality Theory (Prince and Smolensky 1993 and a myriad of subsequent references), of course, ultimately provided a solution to these problems by (a) separating the actions themselves from the “rules” (now expressed as constraints), (b) allowing constraints to interact with one another so that different actions are favored in certain situations over other actions, and (c) postulating a type of constraint (faithfulness) that could suppress the effects of other constraints by outranking them.

As the preceding discussion indicates, FUPR developed out of the problem of developing a notion of “universal rule.” It could not succeed in solving the problem of conspiracies, since it assumed the existence of (learned) rules. Between FUPR and the optimality-theoretic solution to conspiracies and universal rules, there were many significant phonological developments. Most of these developments have some bearing on the conspiracy problem. Perhaps it will be useful to begin with an observation in McCarthy (1993: 169):

The idea that constraints on well-formedness play a role in determining phonological alternations, which dates back at least to Kisseberth’s (1970) pioneering work, has by now achieved almost universal acceptance. A tacit assumption of this program, largely unquestioned even in recent research, is the notion that valid constraints must state true generalizations about surface structure or some other level of phonological representation. Anything different would seem antithetical to the very idea of a well-formedness constraint.
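Points (a)–(c) can be illustrated with a schematic evaluation procedure. The constraint definitions, candidate set, and ranking below are invented for exposition (a generic *ComplexMargin markedness constraint dominating the faithfulness constraints Max and Dep); they follow the general shape of optimality-theoretic evaluation rather than any published analysis of Yawelmani.

```python
def complex_margin(cand, inp):
    """Markedness: one mark for a final CC and for each CCC sequence."""
    cs = [s not in "aiou" for s in cand]
    viols = int(len(cs) >= 2 and cs[-1] and cs[-2])
    viols += sum(cs[i] and cs[i + 1] and cs[i + 2] for i in range(len(cs) - 2))
    return viols

def max_io(cand, inp):
    """Faithfulness: penalize deletion (crudely, by length difference)."""
    return max(0, len(inp) - len(cand))

def dep_io(cand, inp):
    """Faithfulness: penalize epenthesis (crudely, by length difference)."""
    return max(0, len(cand) - len(inp))

RANKING = [complex_margin, max_io, dep_io]  # *ComplexMargin >> Max >> Dep

def evaluate(inp, candidates):
    """The optimal candidate has the lexicographically best violation
    profile under the ranking -- the essence of (a)-(c) above."""
    return min(candidates, key=lambda c: tuple(con(c, inp) for con in RANKING))

# Deletion and epenthesis are both available 'actions'; the ranking,
# not a rule, decides that epenthesis wins for final /ʔilk/:
print(evaluate("ʔilk", ["ʔilk", "ʔil", "ʔilik"]))  # -> ʔilik
```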

McCarthy goes on to reject the point of view that the constraints that grammars conspire to enforce are necessarily true surface generalizations. But it is important to understand the way in which the idea of conspiracies evolved, and McCarthy here identifies a principal theme. It is true that in the Yawelmani case the ban on complex onsets/codas is (largely) satisfied by the surface representations of the language. FUPR did not claim, however, that this was necessarily the case, but rather allowed for the possibility that a relevant constraint might be true of only a certain stage of the derivation. This conclusion was a necessary one, because FUPR did not propose to abandon the notion that phonological systems are systems where principles interact in a possibly complex way, such that some principles may not be true of the surface but only of some other level of the representation. From that point of view, it could very easily be the case that a certain configuration is favored or disfavored through much of a derivation, only to have late, low-level rules derive surface forms where the principle in question is violated. It was well known, for instance, that commonly assumed laws of syllabification in English may be violated in fast speech. In other languages, violations may result from processes operative in careful speech as well.

Although FUPR was careful not to suggest that the phonological targets of conspiracies were surface targets, the discussion that evolved over the subsequent years generally emphasized the surface nature of the constraints that rules conspired to serve (e.g. Haiman 1972; Shibatani 1973; Sommerstein 1974). This emphasis is extremely significant, since it had the consequence that the essential point of FUPR was lost as phonological thinking veered in new directions.

The question raised by Kiparsky (1968), "How abstract is phonology?", led several influential phonologists to move in various interrelated directions. On the one hand, "Natural Generative Phonology" (cf. Vennemann 1971, 1974; Hooper 1973, 1979) attempted to limit phonology to surface-true generalizations. On the other hand, the "Natural Phonology" of Stampe (1973) attempted to re-focus phonology away from the alternations observed in the shapes of morphemes (alternations which were often of restricted scope, were non-productive, and had exceptions, and which Stampe considered to be arbitrary and learned) towards processes that were "automatic," "exceptionless," and "innate," and which could be observed in a variety of domains such as language acquisition, fast speech, unguarded speech, drunken speech, language games, etc. Neither Natural Generative Phonology nor Stampe's Natural Phonology survived, for a quite simple reason: both approaches essentially removed the many examples of very regular, productive morphophonemic processes from the scope of phonology, since these were usually not surface-true generalizations, owing to exceptions, interactions with independent phenomena, etc. Stampe's Natural Phonology did have a lasting impact, however, in that it eventually led to the so-called "Lexical Phonology" approach, which made a significant attempt to distinguish between principles that obtained in the lexicon and principles with a wider scope of application that were not dependent on the particulars of morphological structure (cf. Kiparsky 1982, 1985; Mohanan 1986, 1995). To this day this remains a critical issue in working out a comprehensive theory of phonology (chapter 94: lexical phonology and the lexical syndrome).

The debate with regard to the "abstractness" of phonology (i.e. the extent to which surface forms may differ from their underlying sources and what sorts of evidence are required in order to postulate a divergence between surface and underlying structure) had considerable implications for the concerns of FUPR, as well as for the ultimately related notion of universal rules. If phonology has little abstractness, and if most of the rules that had been proposed during the early years of generative phonology were not really rules of phonology at all, then perhaps arguments such as the one presented in FUPR are irrelevant. If the various rules that we claimed conspire to avoid complex margins in Yawelmani are not in fact real rules of the language, then the argument in FUPR is no argument at all. And if the only phonological rules are ones that are directly represented by overt surface forms, then explaining how a phonological system is learned no longer seems so challenging, and appeal to universal considerations is less necessary.

The abstractness controversies of the early 1970s, however, were never really resolved; instead, phonologists turned to a new approach to phonological analysis and universals, namely an approach that emphasized the development of phonological representations from which outputs could be predicted with a minimal appeal to rules and rule interaction. The motto of phonology became "if the representations are right, then the rules will follow" (McCarthy 1988: 84). For our purposes, we will refer to this as representational phonology (cf. Goldsmith 1976, 1990; Clements and Goldsmith 1984; Clements 1985; and many other references). Although representational phonology had very considerable successes (e.g. the autosegmental approach provided substantial insights into the complicated tonal phenomena of Bantu languages and the vowel harmony patterns of a variety of languages; chapter 114: bantu tone; chapter 45: the representation of tone; chapter 91: vowel harmony: opaque and transparent vowels; chapter 118: turkish vowel harmony; chapter 123: hungarian vowel harmony), it became apparent from several of its most significant contributions that an adequate account of phonological patterns requires appeal not just to representations and rules, but also to constraints. The introduction of constraints can be found in such papers as Itô's (1989) theory of epenthesis, and in the extensive literature on the Obligatory Contour Principle or concepts such as word-minimality in the work on prosodic morphology (cf. McCarthy 1979, 1981; McCarthy and Prince 1986).

The return to a role for constraints in phonological thinking naturally also triggered a return to relevance of the notion of conspiracies and constraints on derivations. Papers such as Myers (1991) on the notion of "persistent rules," the Theory of Constraints and Repairs developed in Paradis (1988), and ultimately Optimality Theory all found the issue identified in FUPR to be a significant one that needed to be addressed in phonological theory. We shall discuss the optimality-theoretic analysis of the conspiracy phenomena later, but at this point we would like to turn to specific examples of conspiracies.

3 Conspiracies in various domains of phonological research

The argument in FUPR is limited to a single synchronic phonological system, but the notion is relevant to most, if not all, of the domains of phonological exploration: universals, variation, and dialectology (chapter 92: variability), language change (chapter 93: sound change), acquisition (chapter 101: the interpretation of phonological patterns in first language acquisition), loanword phonology (chapter 95: loanword phonology), etc. Space limitations do not permit an extensive discussion of all the domains where the notion of conspiracies is relevant, but we present some brief discussion of several: synchronic grammars, universal rules, phonological acquisition, and loanword phonology.

3.1 Conspiracies in synchronic grammars

We have already discussed at length the conspiracy in Yawelmani that revolves around the avoidance of complex syllable margins. A great variety of other examples of conspiracies have been discussed over the past few decades. Here we will illustrate just two: hiatus avoidance conspiracies and conspiracies banning sequences of a nasal consonant followed by a voiceless obstruent.

Many languages do not allow onsetless syllables, particularly in word-medial position (cf. Casali 1997; chapter 61: hiatus resolution; chapter 55: onsets for discussion). Traditionally, such languages are said to avoid hiatus (a succession of two vowels with no intervening consonant). There are, of course, several ways in which a VV sequence (hiatus) may be avoided. The first or the second vowel may be deleted. The first or the second vowel may undergo glide formation. A consonant may be inserted between the vowels. While some languages may choose to avoid hiatus by invoking a single anti-hiatus action, it is not at all uncommon to find a language invoking multiple actions according to the specific VV sequence. Chicano Spanish provides a relevant example (cf. Hutchinson 1974; Reyes 1976; Baković 2007). Consider the data in (8), taken from Baković (2007). We have used orthographic representation for the input and retained that representation for the output, except that we put the surface form resulting from the hiatus avoidance rules inside brackets, to highlight how the hiatus is dealt with.

(8)  a.  tu uniforme      t[u]niforme      'your uniform'
         lo odio          l[o]dio          'hate (1sg him/it)'
         era asi          er[a]si          'it was like that'
         se escapó        s[e]scapó        'escaped (3sg)'
         mi hijo          m[i]jo           'my son'               (h is silent)
     b.  paga Evita       pag[e]vita       'Evita pays'
         la iglesia       l[i]glesia       'the church'
         casa humilde     cas[u]milde      'humble home'          (h is silent)
         niña orgullosa   niñ[o]rgullosa   'proud girl'
     c.  mi obra          m[jo]bra         'my deed'
         mi ultima        m[ju]ltima       'my last one (fem)'
         mi hebra         m[je]bra         'my thread'            (h is silent)
         mi arbol         m[ja]rbol        'my tree'
         tu epoca         t[we]poca        'your time'
         tu alma          t[wa]lma         'your soul'
         tu hijo          t[wi]jo          'your son'             (h is silent)
         su Homero        s[wo]mero        'your Homer'           (h is silent)
     d.  me urge          m[ju]rge         'it is urgent to me'
         pague ocho       pagu[jo]cho      'that s/he pay eight'
         porque aveces    porqu[ja]veces   'because sometimes'
         como Eva         com[we]va        'like Eva'
         tengo hipo       teng[wi]po       'I have the hiccups'   (h is silent)
         lo habla         l[wa]bla         'speaks it'            (h is silent)
     e.  como uvitas      com[u]vitas      'like grapes (dim)'
         se hinca         s[i]nca          'kneels'               (h is silent)

The data in (8) show that when two vowels are juxtaposed in Chicano Spanish, these VV sequences are not resolved in a single way. (8a) shows that when two identical vowels are adjacent to one another, the sequence is reduced to a single vowel. Thus in tu uniforme, a single [u] vowel is found. It is of course not readily apparent whether one or the other vowel is deleted or whether one should just say that the two vowels coalesce. (8b) demonstrates that a word-final low vowel a deletes before any vowel. Thus in paga Evita, the a at the end of the verb is absent in pronunciation. But the first vowel of the VV sequence does not always delete. (8c) shows that if the first vowel is high, then it becomes the corresponding glide, whatever the second vowel might be. Thus mi obra yields [mjo] and tu epoca yields [twe]. On the other hand, if the first vowel is mid, the results are a bit more complex. (8d) shows that if the second vowel differs from the first with respect to either [±low] or [±back], then the initial vowel glides: me urge surfaces with [mju], como Eva results in [mwe] and lo habla (the h of the orthography is silent) becomes [lwa]. However, if the vowel that follows the mid vowel differs from it only with respect to the feature [±high], then there is no glide formation. Rather, as shown in (8e), the two vowels coalesce in a form identical to the second vowel: como uvitas results in [mu].

What this example from Chicano Spanish illustrates is that although a single strategy might avoid hiatus, languages may choose multiple means (for example vowel coalescence, vowel deletion, glide formation) to eliminate the problematic structures.
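For concreteness, the conditional logic of the Chicano Spanish pattern can be restated procedurally, as in the following Python sketch. The sketch is not Baković's (2007) analysis; it merely encodes the descriptive generalizations in (8), assuming a five-vowel system and ignoring stress, syllabification, and the silent orthographic h. All names in it are invented for exposition.

# Procedural restatement of the hiatus-resolution generalizations in (8).
# Assumes a five-vowel inventory /i e a o u/; stress and silent h are ignored.

HIGH = {"i", "u"}
LOW = {"a"}
BACK = {"a", "o", "u"}
GLIDE = {"i": "j", "e": "j", "u": "w", "o": "w"}   # front -> j, back round -> w

def resolve_hiatus(v1, v2):
    """Return the surface vocalic material for an underlying /v1 + v2/ hiatus."""
    if v1 == v2:                        # (8a) identical vowels coalesce
        return v2
    if v1 in LOW:                       # (8b) final /a/ deletes before any vowel
        return v2
    if v1 in HIGH:                      # (8c) high vowels glide before any vowel
        return GLIDE[v1] + v2
    # v1 is mid /e o/:
    same_back = (v1 in BACK) == (v2 in BACK)
    same_low = (v1 in LOW) == (v2 in LOW)
    if not (same_back and same_low):    # (8d) mid vowel glides if v2 differs in [low]/[back]
        return GLIDE[v1] + v2
    return v2                           # (8e) otherwise coalescence to v2 (e.g. o+u -> u)

assert resolve_hiatus("u", "u") == "u"    # tu uniforme -> t[u]niforme
assert resolve_hiatus("a", "e") == "e"    # paga Evita  -> pag[e]vita
assert resolve_hiatus("i", "o") == "jo"   # mi obra     -> m[jo]bra
assert resolve_hiatus("e", "u") == "ju"   # me urge     -> m[ju]rge
assert resolve_hiatus("o", "u") == "u"    # como uvitas -> com[u]vitas

The point of the sketch is simply that four formally distinct actions are dispatched by one and the same structural problem, the VV sequence.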


Combinations of a nasal consonant and a following voiceless consonant are disfavored in many languages (cf. Hayes and Stivers 2000 for discussion of the phonetic preference for voiced consonants following nasals; see also chapter 8: sonorants). There are different ways in which these ill-formed consonant sequences could be avoided. The most common "repairs" are voicing the post-nasal consonant or deleting this consonant (while assimilating the nasal to the same point of articulation), or devoicing the nasal or even deleting it altogether. Pater (1999) points out that in various languages, nasal–voiceless stop sequences are avoided by the application of more than one rule (even though a single rule could in principle resolve the problem). For example, in Kwanyama, a Bantu language discussed in Steinbergs (1985), there is evidence to support a ban on nasal–voiceless consonant sequences. One piece of evidence for the ban is the absence of such sequences in morpheme-internal position. It is also the case that the sounds [k] and [g] are in complementary distribution: [k] occurs word-initially and intervocalically, and [g] appears only after nasals. This distributional pattern supports the proposition that there is a principle that voices a consonant after a nasal. Such a proposal is also supported by the treatment of English loanwords, as shown in (9) below.

(9)  Post-nasal voicing in Kwanyama loanwords

     [sitamba]   'stamp'
     [pelenda]   'print'
     [oinga]     'ink'

In these borrowings, English nasal–voiceless stop sequences are replaced by nasal–voiced stop sequences. But voicing of the stop is not the only repair found in Kwanyama. A root-initial voiceless stop located after a nasal prefix triggers place assimilation of the nasal, but then itself elides from the representation.

(10)  Root-initial nasal substitution in Kwanyama

      /eːN + pati/   [eːmati]   'ribs'
      /oN + pote/    [omote]    'good-for-nothing'
      /oN + tana/    [onana]    'calf'

In two other Bantu languages, Umbundu and Luyana, the same ban on nasal–voiceless consonant sequences can be found. In these languages we again find two different repairs. A nasal that comes to stand in front of a voiceless fricative elides, while a nasal in front of a stop takes on the stop's point of articulation, with the stop itself then eliding. Schadeberg (1982) illustrates from Umbundu that /N + tuma/ surfaces as [numa] 'I send', while /N + seva/ surfaces as [seva] 'I cook'. Givón (1970) shows that in Luyana /N + tabi/ surfaces as [nabi] 'prince', while /N + supa/ surfaces as [supa] 'soup'.

While Kwanyama, Umbundu, and Luyana use two distinct strategies to avoid a nasal–voiceless consonant sequence, other languages may opt for a uniform repair. According to Pater, in Indonesian a nasal takes on the point of articulation of both a voiceless fricative and a voiceless stop, with the oral stop then eliding. In other languages, like Kelantan Malay, Venda, Swahili, and Maore, nasals delete before voiceless stops and fricatives alike.


Although uniform avoidance strategies are possible, many languages are like Kwanyama, Umbundu, and Luyana in that they opt for a (perhaps only superficially) more complex pattern of avoidance.
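The two Kwanyama repairs can be schematized as follows. This is an illustrative sketch, not Steinbergs's (1985) or Pater's (1999) analysis: the segment inventories are truncated, N stands in for an unspecified nasal, and the string-based representation abstracts away from real Kwanyama morphophonology.

# Schematic sketch of the two Kwanyama repairs for nasal + voiceless stop
# sequences: post-nasal voicing (the loanword pattern in (9)) and nasal
# substitution at the prefix-root juncture (the pattern in (10)).

VOICE = {"p": "b", "t": "d", "k": "g"}
NASAL_AT = {"p": "m", "t": "n", "k": "N"}   # homorganic nasal (N = velar nasal)

def postnasal_voicing(segments):
    """Voice a voiceless stop after a nasal."""
    out = list(segments)
    for i in range(1, len(out)):
        if out[i - 1] in "mnN" and out[i] in VOICE:
            out[i] = VOICE[out[i]]
    return "".join(out)

def nasal_substitution(prefix, root):
    """At a nasal-prefix + voiceless-stop-initial root juncture, the nasal
    assimilates to the stop's place and the stop itself elides."""
    if prefix.endswith("N") and root[0] in NASAL_AT:
        return prefix[:-1] + NASAL_AT[root[0]] + root[1:]
    return prefix + root

print(postnasal_voicing("sitampa"))      # -> sitamba, cf. (9) 'stamp'
print(nasal_substitution("oN", "tana"))  # -> onana, cf. (10) 'calf'

Two formally unrelated operations, one dispatched by morphological context, conspire here to keep nasal–voiceless stop sequences off the surface.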

3.2 Phonological conspiracies and universal grammar

As mentioned earlier, the FUPR paper arose out of an attempt to make some sense out of the notion that there might be "universal" rules. The problem that confronted the researcher during the generative phonology period was that while one could easily find phenomena that seemed to reflect some universal principle, the rule-based descriptions were rarely uniform across languages in their details. Furthermore, just as in the case of conspiracies in synchronic grammars, sometimes formally unrelated rules in different languages could be involved in achieving the same outcome.

Let us take an example from Bantu tonal systems (chapter 45: the representation of tone; chapter 114: bantu tone). Early on in the research on these systems, it was recognized that a sequence of H tones is not preferred in these languages. Meeussen (1963) observed that in Tonga, for example, a succession of two H tones is converted to HL. In Leben (1973) and Goldsmith (1976), this ban on successive H tones was seen as a natural consequence of the approach to phonology that came to be known as autosegmental phonology. Specifically, autosegmental models typically represent surface sequences of a feature value (e.g. H tone) as a single multiply linked autosegment. As a consequence, to the extent that this representation is maximized, sequences of the same autosegment will be unexpected. The proposed constraint against successive identical autosegments became known as the Obligatory Contour Principle (OCP). Although some support for the idea that the OCP constrains all features emerged in the phonological literature, there is no question that its tonal instantiation is by far the most robust evidence for the principle.

The notion of a ban on successive H tones depends on the ability to distinguish between true and apparent sequences of H tones. This distinction is captured in autosegmental phonology as follows. A true H tone is one located on the tonal tier, regardless of whether it is associated with one or more than one tone-bearing unit. Successive H-toned moras are not sequences of H tones if they are all linked to a single H tone on the tonal tier.

There are a number of ways in which the *HH principle can be manifested in a language. But before looking at these manifestations, two quite separate matters must be mentioned, both of which dramatically expand the diversity of the manifestations of the OCP. First of all, in the analysis of Bantu languages, it has sometimes been argued that phonologically there is just a contrast between H tone and the absence of tone. However, even in analyses that utilize inputs that lack L tones, rules have been proposed that derive L tones that then contrast with toneless moras. Consequently, we have some analyses where a H tone that violates the OCP may be simply deleted, and other analyses where it is changed to L. Formally, the rules are quite distinct, but in both versions the ban on successive H tones is satisfied. A second complication has to do with what it means for two H tones to be adjacent and thus in violation of the *HH ban. In some languages, adjacency of H tones on the tonal tier is the defining characteristic; in other languages, what is significant is that the H tones may not be associated with successive syllables; in yet other languages, what is critical is that the H tones not be linked to successive moras. Because of these differences in adjacency, there will be considerable variation from language to language with respect to which representations actually violate the ban on HH in those languages.

Let us now look at some of the different ways in which the *HH ban is implemented in different languages. One implementation has to do with the very nature of the underlying representations found in a given language (chapter 1: underlying representations). For example, Cassimjee (1992) shows that in Venda (a Bantu language spoken in South Africa and adjacent parts of Zimbabwe) noun stems, there may be sequences of H-toned moras, but in every case there is evidence that these sequences consist of a single H tone on the tonal tier associated with multiple successive moras. There are no morphemes with successive true H tones. In other words, underlying representations are structured so as to avoid violations of the OCP. The evidence that these H-toned sequences are a single H tone comes from a morphophonemic phenomenon known as Meeussen's Rule (Goldsmith 1984), whereby a H tone that is immediately preceded by a H tone is changed to L. For example, a noun such as /góI´óI´ó/ 'bumblebee' will, when preceded by a verb ending in a H tone, change first of all to the intermediate form */gòI´òI´ò/, due to Meeussen's Rule: i.e. all three syllables become L-toned (indicating that all three syllables started off linked to a single H tone that then changed to L). Subsequently, the preceding H tone spreads onto the first syllable, forming a contour tone: */gô/. Falling tones in Venda, however, are only permitted on bimoraic vowels, and bimoraic vowels occur only in the penultimate syllable of an Intonational Phrase. As a consequence, /gô/ surfaces simply as a H-toned syllable: [góI´òI´ò]. If /góI´óI´ó/ were analyzed as having a sequence of three H tones rather than a single multiply linked H tone, we would have to explain why Meeussen's Rule does not affect the underlying representation of this word (see Cassimjee 1992 for more detailed discussion).

Even if underlying representations are configured to avoid violations of the OCP (as in Venda), it still may happen that the juxtaposition of morphemes yields potential HH sequences. One very common reaction to this threat is the deletion/lowering of the rightmost of these adjacent H tones, as shown above for Venda. Rules that target the rightmost H in a HH sequence are said to be instantiations of "Meeussen's Rule" (cf. Goldsmith 1984). In Venda it is necessary for Meeussen's Rule to change H to L in order to obtain the right results. In the Ikorovere dialect of Emakhuwa (spoken in southern Tanzania), Meeussen's Rule simply deletes the H tone. For example, an underlying form like /k-a-ho-kaviha/ 'I helped' (underlining indicates the location of input H tones) has a H-toned tense-aspect morpheme /ho/ followed by a stem /kaviha/, which has a H tone (predictably) associated with its first vowel. This morpheme sequence violates the OCP ban *HH. The second of these H tones deletes, but subsequently the first H tone doubles onto the second by a general High Tone Doubling rule that applies in a wide variety of circumstances in the language: [k-a-hó-káviha]. Of course, one might ask: how do we know that the H tone on the stem-initial /ka/ has deleted if in fact this mora is pronounced on a H tone? The answer is simple.
If the underlying H tone on /ka/ had not deleted, then it would have triggered doubling onto the next mora, resulting in the ill-formed *[k-a-hó-kávíha]. Notice that it is clear that the H tone on /ka/ is deleted and not the H tone on the preceding morpheme /ho/. If we had deleted the H tone from /ho/, then we would predict the incorrect output *[k-a-ho-kávíha], since the second H tone would double to its right. (See Kenstowicz and Kisseberth 1979 for a more extended discussion of Emakhuwa tone, albeit in a pre-autosegmental framework.)

Although deletion of the rightmost H is particularly common in Bantu, other responses to violations of the *HH constraint can be found. In some cases it is the leftmost of the two H tones that deletes or changes to L. For instance, in the Bantu language Rimi (Yukawa 1989), underlying H tones shift systematically one vowel to the right. Thus [u-teghéja] 'to understand' has a H-toned verb stem, where the H tone is underlyingly on the stem-initial vowel (underlined) but surfaces on the syllable [ghe]. In an example like [u-va-ríghitja] 'to speak to them', the verb stem is toneless but the object prefix /va/ bears an underlying H tone that shifts onto the first syllable of the verb stem. When a H-toned object prefix precedes a H-toned verb stem, as in [u-va-teghéja], we see that the object prefix loses its H tone while the syllable [te] retains its H tone, although this H tone does shift to the next vowel. Rimi thus differs from languages like Venda and Emakhuwa in that a violation of the OCP is repaired by deleting the leftmost H rather than the rightmost H.

In yet other cases, the two adjacent H tones are merged into a single H tone that is still linked to all of the moras that the original H tones were linked to. We can refer to this as H-tone fusion. The evidence for H-tone fusion is sometimes a bit indirect. The Bantu language Shambaa (Odden 1982) provides an interesting example, however, since in addition to the need for H-tone fusion, it also illustrates an entirely different means of avoiding *HH violations. In Shambaa, whenever a sequence of H tones would be created (either within a word or across words), a downstep is inserted between the H tones. For example, the second H tone in each of the following examples is downstepped relative to the first (downstep is indicated by the downward arrow and an underlying H tone is indicated by underlining; the data also illustrate H tone spreading, but we do not discuss this aspect of the data): [até-k↓ómá] 'he killed', [angé-↓já] 'he should have cooked', [ázakómá nj↓óká] 'he killed a snake', and [ní k↓úi] 'it is a dog'. These data suggest clearly that a sequence H↓H does not count as a violation of *HH. It should be noted that while in some languages downstep may derive from a so-called "floating" L tone, this is not the case in Shambaa. We do not address here the issue of how downstep is represented, nor whether it is represented in the phonology or only in the phonetics.

The insertion of a downstep between successive H tones in Shambaa is a very general phenomenon, but there are some cases where successive input H tones are not separated by a downstep. For instance, there is no downstep between a H-toned object prefix and a H-toned verb stem: /ku-wá-kómá/ 'to kill them'. Odden explains the failure of a downstep to be inserted at this juncture by proposing that the H tone of the object prefix and the H tone of the verb stem fuse into a single multiply linked H tone. As a consequence, there is a single H tone, and insertion of downstep cannot occur (since downstep is used only between H tones).

In all the preceding examples, we have dealt with situations where an input would violate *HH and formally different rules operate to alter the representation so that it no longer has a HH sequence. The OCP ban *HH is manifested in other ways in the grammars of the world's languages.
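The three formal responses just described – rightmost deletion (Emakhuwa), leftmost deletion (Rimi), and fusion (Shambaa) – all take the same ill-formed configuration as input. The toy sketch below is ours and not drawn from any of the cited analyses: each H autosegment is represented as the tuple of mora indices it is linked to, and "adjacency" is simplified to adjacency of linked moras.

# Toy rendering of three repairs of an OCP *HH violation over a simplified
# autosegmental representation (one tuple of mora indices per H tone).

def repair_hh(h_tones, mode="rightmost"):
    """h_tones: H autosegments left to right, each a tuple of the (sorted)
    mora indices it is linked to. Repair every pair of Hs on adjacent moras."""
    tones = [tuple(t) for t in h_tones]
    i = 0
    while i + 1 < len(tones):
        if tones[i][-1] + 1 == tones[i + 1][0]:    # two Hs on adjacent moras
            if mode == "rightmost":                # Emakhuwa-style deletion
                del tones[i + 1]
            elif mode == "leftmost":               # Rimi-style deletion
                del tones[i]
            elif mode == "fuse":                   # Shambaa-style fusion
                tones[i:i + 2] = [tones[i] + tones[i + 1]]
            else:
                raise ValueError("unknown repair mode")
            continue                               # re-check at the same position
        i += 1
    return tones

# Two H tones on adjacent moras 3 and 4 (cf. /ho/ + stem-initial /ka/):
print(repair_hh([(3,), (4,)], mode="rightmost"))   # [(3,)]   second H deleted
print(repair_hh([(3,), (4,)], mode="leftmost"))    # [(4,)]   first H deleted
print(repair_hh([(3,), (4,)], mode="fuse"))        # [(3, 4)] one multiply linked H

All three operations yield representations with no adjacent H autosegments, which is precisely the functional unity at issue.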
Recall from our discussion of Yawelmani how the ban on complex margins may serve to restrict the application of vowel deletion rules. The same thing can be observed in tonal systems: the ban *HH may restrict the application of other tone rules. The most commonly observed phenomenon where *HH plays a restrictive role on another tonological process is in H-tone spreading. The precise formulation of H-tone spreading differs from language to language (at least in a rule-based model of phonology), but one overarching pattern is that spreading may be prevented from going onto a mora that is itself adjacent to a H-toned mora. It should be observed that in some languages, it is not spreading but shifting (i.e. spreading of a H followed by delinking from all but the last mora in the spreading structure) that is blocked. For example, in Isixhosa (cf. Cassimjee and Kisseberth 1998) the H tone on the subject prefix /bá-/ shifts to the following toneless syllable in [ba-yá-lwa] but is unable to do so in [bá-ya-bóna], due to the fact that the prefix /ya/ is followed in this case by a H-toned syllable. (As in our earlier examples, a mora that bears an underlying H tone is underlined.)

Sometimes the adjacency of the H tones may be obscured. In Isixhosa the H on /bá-/ does not shift in [bá-ya-bonísa], even though at first glance it does not seem that the /ya/ is adjacent to a H-toned syllable. The problem in this example is that there is a H tone on the syllable /bó/ in the input, but in the output this H tone has shifted to the following syllable. What we observe here is the much discussed problem of phonological opacity: the H-toned nature of /bo/ serves to block spreading onto the syllable in front of it, even though /bo/ is in fact toneless on the surface.

The preceding discussion shows that if we look at *HH (an instantiation of the OCP) across a diverse set of languages, we find that it is entirely parallel to the constraint against complex margins in the synchronic grammar of Yawelmani. There is a "functional" unity that unites all these diverse ways of avoiding HH sequences: they are working towards the same end, namely representations that lack HH sequences. Any theory (such as generative phonology) that sees rules as devices that marry a structural change to a structural description will fail to express the universal principle *HH.

3.3 Phonological conspiracies in phonological acquisition

Much of the work on the child's acquisition of the phonology of a language assumes that the child accurately perceives (for the most part) the data to which s/he is exposed, but that various markedness principles (e.g. preferences for open syllables, preferences for oral over nasal vowels, preferences for stops over fricatives) restrict the child's attempt to produce an output faithful to that perception (chapter 101: the interpretation of phonological patterns in first language acquisition). As early as the extremely important work of Smith (1973), it was recognized that the notion of conspiracies is as relevant to child phonology as it is to adult phonologies. Smith proposed various rules to account for the fact that the child Amahl simplified consonant clusters in adult speech, but just as in Yawelmani, the unity found in the child's output was not reflected in the diversity of the rules that achieved this unity. Smith recognized this as a failing of his rule-based, derivational approach. Naturally, constraint-based approaches such as OT have gained currency in the field of phonological acquisition, at least in part because of their ability to better capture conspiracies in child language.

It should be clear from the discussion throughout this chapter that "conspiracies" are first and foremost attempts to achieve an unmarked structure or avoid a marked structure. Since markedness principles serve to shape the child's outputs, it follows that we will expect conspiracies to be manifested. Space considerations limit us to a single instance of a conspiracy in phonological acquisition. Pater (2002) and Pater and Barlow (2003) discuss the ban on fricatives, *Fricative, which can be observed in data on phonological acquisition (see also chapter 28: the representation of fricatives). The common repair for violations of this constraint is for a fricative to be converted to the corresponding stop. Pater discusses a child, LP65, aged 3;8 with a phonological delay, whose output in acquiring English lacked fricatives entirely. However, adult English forms were repaired in two different ways: the deletion of the fricative or the stopping of the fricative. The choice of the repair was dependent on the adult input. If the fricative was part of a cluster, it was deleted. If it was not part of a cluster, it was converted to the corresponding stop. Thus sneeze became [niːd], three became [wi], and drive became [waɪb]. Two different repairs secure the absence of fricatives, one of the many examples of conspiracies in the acquisition literature.
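The cluster-sensitivity of LP65's two repairs can be rendered as a simple decision procedure. The following sketch is an expository simplification, not Pater and Barlow's (2003) constraint-based analysis: ASCII stand-ins replace IPA (S = esh, Z = ezh, T = theta), and the segment classes are truncated.

# Minimal sketch of LP65's two fricative repairs: deletion inside a cluster,
# stopping elsewhere, so that no fricative survives in the output.

FRICATIVES = set("fvszSZT")
STOP_OF = {"f": "p", "v": "b", "s": "t", "z": "d", "S": "t", "Z": "d", "T": "t"}
CONSONANTS = FRICATIVES | set("pbtdkgmnlrwj")

def repair_fricatives(segments):
    out = []
    for i, seg in enumerate(segments):
        if seg not in FRICATIVES:
            out.append(seg)
            continue
        in_cluster = (i > 0 and segments[i - 1] in CONSONANTS) or \
                     (i + 1 < len(segments) and segments[i + 1] in CONSONANTS)
        if not in_cluster:
            out.append(STOP_OF[seg])   # singleton fricative: stopping
        # cluster fricative: deletion (append nothing)
    return "".join(out)

print(repair_fricatives("sniz"))   # -> "nid": /s/ deleted in the cluster, /z/ stopped
print(repair_fricatives("zu"))     # -> "du": a singleton fricative is stopped

Only the fricative repairs are modeled; the independent cluster and liquid simplifications seen in forms like [wi] and [waɪb] are left aside.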

3.4 Conspiracies in loanword phonology

When speakers of L1 adapt words from L2 for use in speaking their native language, some of these words may contain structures that violate a constraint operative in L1. If these words are fully nativized, then these structures will be altered so as to avoid violations of the constraint in question. If the notion of conspiracies is applicable to the phenomenon of loanword adaptation, then we expect that there will be cases where a given constraint will be enforced by means of quite distinct adaptation strategies (see also chapter 95: loanword phonology).

In the Australian Aboriginal language Gamilaraay, words must end either in a vowel or in a coronal sonorant ([n j l rr]), according to McManus (2008). If Gamilaraay borrows an English word that ends in a consonant that is permitted word-finally, then that consonant will surface. Thus English barrel is realized as [baril], and poison as [baadjin]. When an English word ending in a labial or dorsal consonant is borrowed, an epenthetic vowel appears. This epenthetic vowel obviously functions to avoid a disallowed coda consonant. Some examples of epenthesis are given in (11).

(11)  baaybuu   'pipe'     nhaayba    'knife'
      dhuubuu   'soap'     yurraamu   'rum'
      milgin    'milk'     yurrugu    'rope'

However, when the English word ends in one of the coronal obstruents, which are not allowed in a word-final coda, an epenthetic vowel is not inserted; rather, the coronal is converted to a sonorant, as shown in (12).

(12)  bulaang.giin ~ bulang.giin   'blanket'    burrgiyan          'pussy cat'
      marrgin                      'musket'     yuruun ~ yurruun   'road'
      dhalbin                      'tablet'     nhiigiliirr        'necklace'
      garaarr                      'grass'      maadjirr           'matches'
      dhindirr                     'tin dish'   gabirr             'cabbage'
      yarrarr                      'rice'


Gamilaraay employs both vowel epenthesis and the sonorization of a consonant to achieve outputs in which every word ends in a vowel or a coronal sonorant. Two formally distinct alterations yield outputs that conform to the same regularity.
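A procedural rendering of the Gamilaraay pattern makes this division of labor explicit. The sketch below is ours, not McManus's (2008) analysis: the segment classes, the choice of epenthetic vowel, and the obstruent-to-sonorant mapping are hypothetical simplifications (the adapted forms in (11) and (12) show more varied vowels than the sketch produces).

# Illustrative sketch of the two Gamilaraay adaptation strategies: epenthesis
# after a final non-coronal, sonorization of a final coronal obstruent.

FINAL_OK = ("n", "j", "l", "rr")                      # licit final coronal sonorants
CORONAL_OBSTRUENTS = {"t": "n", "d": "n", "s": "rr"}  # hypothetical sonorization map
VOWELS = "aiu"

def adapt_final(word):
    """Repair a word-final consonant so that the output ends in a vowel
    or a licit coronal sonorant."""
    if word[-1] in VOWELS or any(word.endswith(s) for s in FINAL_OK):
        return word                                    # already well formed
    if word[-1] in CORONAL_OBSTRUENTS:                 # coronal obstruent: sonorize
        return word[:-1] + CORONAL_OBSTRUENTS[word[-1]]
    return word + "u"                                  # elsewhere: epenthesize

print(adapt_final("dhuub"))   # -> dhuubu  (epenthesis after a labial; cf. 'soap')
print(adapt_final("yuruud"))  # -> yuruun  (sonorization of a coronal stop; cf. 'road')
print(adapt_final("baril"))   # -> baril   (licit final /l/ surfaces; cf. 'barrel')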

4 The optimality-theoretic analysis of conspiracies

In the preceding section we provided some illustration of the relevance of the notion "conspiracy" in various domains of phonology. The phonological literature of the four decades since the publication of FUPR is rife with examples of the phenomenon. Although FUPR identified a facet of phonological structure that is no doubt of critical importance to the theory of phonology, it failed to offer a comprehensive solution to the problem that conspiracies pose. In addition, it failed to raise a significant question: why do conspiracies exist? Why do languages not employ a single device to achieve a preferred structure or to avoid a structure that is not preferred?

Optimality Theory provides the essential ingredients of both a comprehensive account of conspiracies and an explanation for why universal "rules" may not be reflected in the outputs of particular languages. By separating the constraints from the actions that repair potential violations of these constraints, it allows a constraint both to trigger some actions and to prevent others. Thus *ComplexMargins can both trigger the appearance of an epenthetic vowel and block the elision of a vowel if elision would violate *ComplexMargins. OT is not a theory of actions, but rather a theory of how constraint interactions account for the pattern of observed actions. At the same time, the existence of highly ranked faithfulness constraints may prevent a constraint violation from being repaired at all in some languages.

Optimality Theory also explains why conspiracies occur, and can perhaps even be expected. Given a constraint such as *ComplexMargins, there are several different repairs that might avoid complex margins. However, each of these actions necessarily violates some other constraint (at the same time that it avoids a violation of *ComplexMargins). The constraints that a repair violates may be faithfulness constraints, markedness constraints, or some sort of morphological constraint. Since these other constraints have a particular ranking, this ranking will determine which repair to a *ComplexMargins violation is optimal in a given situation. Almost any optimality-theoretic description of a language will offer clear evidence that it has provided an insightful account of conspiracies.
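The ranking logic itself can be made concrete with a toy evaluator: EVAL picks the candidate whose tuple of violation counts on the ranked constraints is lexicographically smallest. The constraints and candidates below are invented for exposition (a crude *ComplexMargins plus simple Max/Dep counts over a hypothetical /ʔilk/ input); this is a sketch of the ranking idea, not a serious OT implementation.

# Toy OT evaluation: ranked constraints, lexicographic comparison of
# violation profiles. Candidates and constraint definitions are made up.

def complex_margins(cand, inp):
    """Crude markedness: one violation per consonant-consonant sequence."""
    cons = [c not in "aiu" for c in cand]
    return sum(1 for a, b in zip(cons, cons[1:]) if a and b)

def dep_v(cand, inp):
    """Faithfulness: penalize epenthesized vowels."""
    return max(0, sum(c in "aiu" for c in cand) - sum(c in "aiu" for c in inp))

def max_c(cand, inp):
    """Faithfulness: penalize deleted consonants."""
    return max(0, sum(c not in "aiu" for c in inp) - sum(c not in "aiu" for c in cand))

def eval_ot(inp, candidates, ranking):
    """Return the candidate with the best violation profile on the ranking."""
    return min(candidates, key=lambda cand: tuple(con(cand, inp) for con in ranking))

inp = "ʔilk"                              # hypothetical /CVCC/ input
candidates = ["ʔilk", "ʔilik", "ʔil"]     # faithful, epenthesis, deletion

# *ComplexMargins >> Max-C >> Dep-V: epenthesis is the optimal repair.
print(eval_ot(inp, candidates, [complex_margins, max_c, dep_v]))   # ʔilik
# *ComplexMargins >> Dep-V >> Max-C: deletion wins instead.
print(eval_ot(inp, candidates, [complex_margins, dep_v, max_c]))   # ʔil

Reranking the two faithfulness constraints is all it takes to switch the favored action – precisely the freedom that the "Too Many Solutions" critique discussed below takes issue with.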


There is, however, a significant problem with the OT analysis of conspiracies. Specifically, for any constraint C, there are many logical "actions" that might repair a representation so that C is not violated. OT seems to claim that any of these actions could occur in some language. But is this in fact true? Various linguists have suggested that it is not true that all "repairs" for the violation of a constraint are in fact possible. This has been labeled the "Too Many Solutions" problem (cf. Steriade 2001, 2009). For example, Steriade considers the constraint that disfavors voiced obstruents at the end of a word. She notes that there is a considerable variety of phonological actions that might result in outputs that do not violate this constraint. Taking an input with a final /b/ as an example, the following repairs could avoid a violation of the constraint: (a) the devoicing of the /b/ to [p], (b) the nasalization of the /b/ to [m], (c) the lenition of /b/ to [w], (d) deletion of the /b/, (e) the insertion of an epenthetic vowel after /b/, and (f) the metathesis of the /b/ with a preceding consonant that does not violate the constraint. Steriade argues, however, that in fact it is only devoicing that is utilized to repair violations of the ban on word-final voiced obstruents.

The "Too Many Solutions" problem is a critical issue for Optimality Theory, since it calls into question the foundation of the theory – i.e. the separation of phonological actions from the constraints on structure that trigger these actions. It was the need to find an account of conspiracies that led to the abandonment of "rules" in favor of a set of constraints whose ranking determines the optimal action. So it is natural that a challenge to its analysis of conspiracies is at the same time a challenge to its very foundations. There have been various attempts to solve the "Too Many Solutions" problem in OT: the "P-map" proposal in Steriade (2001, 2009), the "targeted constraints" in Wilson (2001), and the appeal to procedural markedness principles (the "implicational constraint principle") in Blumenfeld (2006) are examples. It is beyond the scope of this chapter to explore the adequacy of these different attempted solutions, but there is no question that the issue is a central one in the exploration of OT approaches to phonology. Despite the challenges to the OT account of conspiracies, there is no doubt that much of the motivation for Optimality Theory resides in the advances that it made in explaining conspiracies, and these advances have been considerable indeed.

5 Conclusion

In this chapter we have explained the notion “conspiracy” in phonology and have illustrated its relevance to several domains of phonological investigation: synchronic grammars, phonological universals, the acquisition of phonology, and loanword adaptation. We have attempted to explain the historical background out of which the notion emerged, specifically the attempt to find what is universal in systems of phonological rules, and the reasons why an adequate solution was not available. We concluded with the observation that while Optimality Theory goes a long way towards providing an insightful account of conspiracies, it must still deal with the Too Many Solutions problem.

REFERENCES

Archangeli, Diana. 1984. Underspecification in Yawelmani phonology and morphology. Ph.D. dissertation, MIT.
Baković, Eric. 2007. Hiatus resolution and incomplete identity. In Fernando Martínez-Gil & Sonia Colina (eds.) Optimality-theoretic studies in Spanish phonology, 62–73. Amsterdam & Philadelphia: John Benjamins.
Blumenfeld, Lev. 2006. Constraints on phonological interactions. Ph.D. dissertation, Stanford University.
Boersma, Paul. 1997. The elements of Functional Phonology. Unpublished ms., University of Amsterdam (ROA-173).
Casali, Roderic F. 1997. Vowel elision in hiatus contexts: Which vowel goes? Language 73. 493–533.
Cassimjee, Farida. 1992. An autosegmental analysis of Venda tonology. New York: Garland.
Cassimjee, Farida & Charles W. Kisseberth. 1998. Optimal Domains Theory and Bantu tonology: A case study from Isixhosa and Shingazidja. In Larry M. Hyman & Charles W. Kisseberth (eds.) Theoretical aspects of Bantu tone, 33–132. Stanford: CSLI.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225–252.
Clements, G. N. & John A. Goldsmith (eds.) 1984. Autosegmental studies in Bantu tone. Dordrecht: Foris.
Cole, Jennifer & Charles W. Kisseberth. 1995. Restricting multi-level constraint evaluations: Opaque rule interaction in Yawelmani vowel harmony. In Keiichiro Suzuki & Dirk Elzinga (eds.) Proceedings of the 1995 Southwestern Workshop on Optimality Theory (SWOT), 18–38. Tucson: Department of Linguistics, University of Arizona.
Givón, Talmy. 1970. The Si-Luyana language. Lusaka: Institute for Social Research, University of Zambia.
Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT. Published 1979, New York: Garland.
Goldsmith, John A. 1984. Meeussen's Rule. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 245–259. Cambridge, MA: MIT Press.
Goldsmith, John A. 1990. Autosegmental and metrical phonology. Oxford & Cambridge, MA: Blackwell.
Haiman, John. 1972. Phonological targets and unmarked structures. Language 48. 365–377.
Harris, Zellig S. 1944. Yokuts structure and Newman's grammar. International Journal of American Linguistics 10. 196–211.
Hayes, Bruce & Tanya Stivers. 2000. Postnasal voicing. Unpublished ms., University of California, Los Angeles.
Hockett, Charles F. 1967. The Yawelmani basic verb. Language 26. 278–282.
Hockett, Charles F. 1973. Yokuts as a testing ground for linguistic methods. International Journal of American Linguistics 39. 63–79.
Hooper, Joan B. 1973. Aspects of natural generative phonology. Ph.D. dissertation, University of California, Los Angeles.
Hooper, Joan B. 1979. Substantive principles in natural generative phonology. In Daniel A. Dinnsen (ed.) Current approaches to phonological theory, 106–125. Bloomington: Indiana University Press.
Hutchinson, Sandra P. 1974. Spanish vowel sandhi. Papers from the Annual Regional Meeting, Chicago Linguistic Society 10(2). 184–192.
Itô, Junko. 1989. A prosodic theory of epenthesis. Natural Language and Linguistic Theory 7. 217–259.
Kenstowicz, Michael & Charles W. Kisseberth. 1977. Topics in phonological theory. New York: Academic Press.
Kenstowicz, Michael & Charles W. Kisseberth. 1979. Generative phonology: Description and theory. New York: Academic Press.
Kiparsky, Paul. 1968. How abstract is phonology? Distributed by Indiana University Linguistics Club.
Kiparsky, Paul. 1982. Lexical morphology and phonology. In Linguistic Society of Korea (ed.) Linguistics in the morning calm, 3–91. Seoul: Hanshin.
Kiparsky, Paul. 1985. Some consequences of Lexical Phonology. Phonology Yearbook 2. 85–138.
Kisseberth, Charles W. 1969. Theoretical implications of Yawelmani phonology. Ph.D. dissertation, University of Illinois.
Kisseberth, Charles W. 1970. On the functional unity of phonological rules. Linguistic Inquiry 1. 291–306.
Kuroda, S.-Y. 1967. Yawelmani phonology. Cambridge, MA: MIT Press.
Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT.
McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT.
McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry 12. 373–418.
McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108.
McCarthy, John J. 1993. A case of surface constraint violation. Canadian Journal of Linguistics 38. 169–195.
McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox.
McCarthy, John J. & Alan Prince. 1986. Prosodic morphology. Unpublished ms., University of Massachusetts, Amherst & Brandeis University.
McManus, Hope E. 2008. Loanword adaptation: A study of some Australian Aboriginal languages. Honours thesis, University of Sydney.
Meeussen, A. E. 1963. Morphotonology of the Tonga verb. Journal of African Languages 2. 72–92.
Mohanan, K. P. 1986. The theory of Lexical Phonology. Dordrecht: Reidel.
Mohanan, K. P. 1995. The organization of the grammar. In John A. Goldsmith (ed.) The handbook of phonological theory, 24–69. Cambridge, MA & Oxford: Blackwell.
Myers, Scott. 1991. Persistent rules. Linguistic Inquiry 22. 315–344.
Newman, Stanley. 1944. Yokuts language of California. New York: Viking Fund.
Odden, David. 1982. Tonal phenomena in KiShambaa. Studies in African Linguistics 13. 177–208.
Paradis, Carole. 1988. On constraints and repair strategies. The Linguistic Review 6. 71–97.
Pater, Joe. 1999. Austronesian nasal substitution and other NC̥ effects. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 310–343. Cambridge: Cambridge University Press.
Pater, Joe. 2002. Form and substance in phonological development. Proceedings of the West Coast Conference on Formal Linguistics 21. 348–372.
Pater, Joe & Jessica Barlow. 2003. Constraint conflict in cluster reduction. Journal of Child Language 30. 487–526.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Reyes, Rogelio. 1976. Studies in Chicano Spanish. Ph.D. dissertation, Harvard University.
Schadeberg, Thilo C. 1982. Nasalization in Umbundu. Journal of African Languages and Linguistics 4. 109–132.
Shibatani, Masayoshi. 1973. The role of surface phonetic constraints in generative phonology. Language 49. 87–106.
Smith, Neil V. 1973. The acquisition of phonology: A case study. Cambridge: Cambridge University Press.
Sommerstein, Alan H. 1974. On phonotactically motivated rules. Journal of Linguistics 10. 71–94.
Stampe, David. 1973. A dissertation on natural phonology. Ph.D. dissertation, University of Chicago. Published 1979, New York: Garland.
Steinbergs, Aleksandra. 1985. The role of MSCs in OshiKwanyama loan phonology. Studies in African Linguistics 16. 89–101.
Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual account. In Elizabeth Hume & Keith Johnson (eds.) The role of speech perception in phonology, 219–250. San Diego: Academic Press.
Steriade, Donca. 2009. The phonology of perceptibility effects: The P-map and its consequences for constraint organization. In Kristin Hanson & Sharon Inkelas (eds.) The nature of the word: Studies in honor of Paul Kiparsky, 151–179. Cambridge, MA: MIT Press.
Vennemann, Theo. 1971. Natural generative phonology. Paper presented at the 45th Annual Meeting of the Linguistic Society of America, St Louis.
Vennemann, Theo. 1974. Phonological concreteness in natural generative grammar. In Roger Shuy & Charles-James N. Bailey (eds.) Toward tomorrow's linguistics, 202–219. Washington, DC: Georgetown University Press.
Wilson, Colin. 2001. Consonant cluster neutralization and targeted constraints. Phonology 18. 147–197.
Yukawa, Yasutoshi. 1989. A tonological study of Nyaturu verbs. In Yasutoshi Yukawa (ed.) Studies in Tanzanian languages, vol. 2, 451–480. Tokyo: ILCAA.

71 Palatalization

Alexei Kochetov

1 Introduction

The term "palatalization" denotes a phonological process by which consonants acquire secondary palatal articulation or shift their primary place to, or close to, the palatal region. This usually happens under the influence of an adjacent front vowel and/or a palatal glide (e.g. [ki] → [kʲi], [tʲa] → [tʃa]). As such, palatalization is a type of consonant–vowel interaction. The term may also refer to a phonemic contrast between consonants with secondary palatal articulation and their non-palatalized counterparts (e.g. [pʲa] vs. [pa]). The primary focus of this chapter will be on palatalization as a process, and particularly as a synchronic phonological process manifested in segmental alternations.

Palatalization has typically been viewed as a classic example of a "natural" phonological process – one that is widely attested across the world's languages and has a clear phonetic motivation, such as consonant-to-vowel co-articulation (e.g. Hyman 1975; see also chapter 75: consonant–vowel place feature interactions). However, many formal accounts of palatalization undertaken over the last 40 years have faced considerable challenges. These challenges partly stem from the fact that palatalization processes show a wide range of manifestations – across languages and within a given language. Many synchronic palatalization processes also exhibit complex phonological and morphological conditioning and pervasive opacity effects, reflecting complicated historical sound changes and paradigmatic restructuring. Given this, the difficulty faced by many theoretical approaches has been in providing an empirically adequate and uniform formal treatment of the phenomenon, capturing both cross-linguistic and language-particular generalizations.

The goal of this chapter is to review some of the key formal accounts of palatalization, focusing particularly on the challenges posed by the phenomenon for generative theories of phonological representations. The chapter is organized as follows. §2 presents some concrete examples of palatalization, followed by an overview of cross-linguistic patterns of the phenomenon. §3 examines how the feature system of early generative phonology captures natural classes involved in palatalization. §4 focuses on two key approaches to palatalization within the feature geometry framework. §5 turns to treatments of palatalization within Optimality Theory, and §6 concludes with a review of some alternative proposals.

2 Background

2.1 Some examples

We will begin with some relatively well-known examples of palatalization. English has at least three kinds of alternations that fall under the general definition of palatalization processes. Coronal palatalization involves an alternation between alveolars [t d s z] and palato-alveolars [tʃ dʒ ʃ ʒ], as shown in (1). In these examples, the palato-alveolars occur before a palatal glide (in an unstressed syllable), while alveolars occur elsewhere. These alternations can be analyzed as a process – a change of alveolars to palato-alveolars in the context of [j] (Chomsky and Halle 1968; Borowsky 1986; among others).1 The process is assimilatory in the sense that the consonants targeted by palatalization become more similar in place of articulation to the segment that triggers palatalization. Note that the stops not only shift in place, but also undergo assibilation (becoming sibilant affricates). All the other features of the target consonants (e.g. continuancy, voicing, etc.) remain unchanged.

(1)  t–tʃ   perpe[t]uity   perpe[tʃ]ual
     d–dʒ   resi[d]ue      resi[dʒ]ual
     s–ʃ    gra[s]e        gra[ʃ]ious
     z–ʒ    plea[z]e       plea[ʒ]ure

The second process – velar softening – is exhibited by alternations between the velar stops [k g] and the coronal fricative [s] and affricate [dʒ], respectively. The coronal alternants are found before certain Latinate or Greek suffixes beginning with (mainly) front vowels; the velar alternants are found elsewhere (2). Given this, the alternations are usually analyzed as a palatalizing change of velars to coronals triggered by front vowels (Chomsky and Halle 1968; Borowsky 1986). Unlike coronal palatalization, this process is more complex, as it actually involves two non-identical changes – a shift of the voiced velar stop to the palato-alveolar affricate and a shift of the voiceless velar stop to the alveolar fricative. While the outputs of velar softening are not identical in terms of minor place of articulation and continuancy, they are both sibilant coronals.

(2)  a.  k–s    medi[k]ation   medi[s]ine
               criti[k]       criti[s]ize
     b.  g–dʒ   analo[g]       analo[dʒ]y
               pedago[g]ue    pedago[dʒ]y

The third process – spirantization – exhibits alternations between the alveolar stop [t] and the alveolar fricative [s] (or [ʃ] in conjunction with coronal palatalization). The latter segment occurs before suffixes with an unsyllabified /i/ (3), and this process is assumed to involve a change of stop to fricative before a high front vowel (Borowsky 1986). As such, the process does not involve a change in place of articulation, but a change in continuancy and sibilancy (chapter 28: the representation of fricatives).

(3)  t–s   secre[t]      secre[s]y
           regen[t]      regen[s]y
           emergen[t]    emergen[s]y
           par[t]        par[ʃ]ial

1 Similar, albeit optional and phonetically gradient, alternations are also exhibited across words, as in go[tʃ] you, plea[ʒ]e yourself, etc. (Zsiga 1993).

The three palatalization processes manifested by the alternations in (1)–(3) differ in several respects. The targets of palatalization are anterior coronals (alveolars) in (1) and (3), and dorsals in (2). The outputs are posterior coronals (palato-alveolars) in (1) and (2b), and anterior coronals in (2a) and (3). The triggers are /j/ in (1), and high front vowels in (2) and (3). (The processes also differ in terms of their phonological or morphological conditioning: morpheme boundaries, particular suffixes, stress, etc.) What the processes have in common, however, is that they appear to be triggered by front vocoids and result in coronal segments – notably, all sibilants. As such, they are representative of the three general types of palatalization processes defined by Bhat (1978): "coronal raising" (1), "velar fronting" (2), and "spirantization," which may or may not be accompanied by a change in place ((2) and (3) respectively), as discussed in the next section.

Another important type of palatalization, not exhibited by the English processes, is the addition of secondary palatal articulation, without a change in primary place or assibilation. As shown in (4a), Russian exhibits alternations between non-palatalized consonants of all places – labials, anterior coronals (dentals), and dorsals – and their palatalized counterparts (chapter 121: slavic palatalization). The palatalized segments in such alternations occur before front vowels (/e/ in (4a)), while non-palatalized consonants are unrestricted (Kenstowicz and Kisseberth 1979). These alternations can be straightforwardly analyzed as an assimilatory process involving a simple addition of secondary palatal articulation (the high front position of the tongue body) before front vowels. This process in Russian is fairly general, and is not restricted to particular morphological categories.

Secondary palatalization may co-occur in a given language with "place-changing" palatalization. In Russian, non-palatalized dentals and velars also exhibit alternations with palato-alveolars, with the latter occurring before certain verbal suffixes (4b) (Lightner 1965; Kenstowicz and Kisseberth 1979). Note that the palatalizing suffixes may or may not begin with overt front vocoids, showing that place-changing palatalization in Russian is a more opaque, morphologically conditioned process. Note also that the relation between targets and outputs of palatalization in (4b) is less transparent than in (4a): for example, the same palato-alveolar output [tʃ] can result from two different target consonants, /t/ or /k/.

(4)  a.          nom sg       dat sg
     p–pʲ        trap-a       trapʲ-e       'path'
     t–tʲ        sʲirat-a     sʲiratʲ-e     'orphan'
     k–kʲ        sabak-a      sabakʲ-e      'dog'

     b.          inf          3 pers sg     1 pers sg
     t–tʃ        prʲat-atʲ    prʲatʃ-it     prʲatʃ-u     'hide'
     k–tʃ        plak-atʲ     platʃ-it      platʃ-u      'weep'


Palatalization in Russian produces outputs that are phonemic, since the language has palatalized consonants and posterior coronals whose occurrence is not conditioned by front vowels. The same can be said about the English palatalization processes. Allophonic palatalization is also quite common, however. In Nupe (5), for example, the velars have secondary palatal articulation before front vowels (5a), secondary labial articulation before round vowels (5b), and no secondary articulation before the back unrounded /a/ (5c). This pattern can be analyzed as an allophonic assimilatory process involving palatalization and labialization of phonemic plain velars before front and round vowels respectively (Sagey 1990; Archangeli and Pulleyblank 1994).

(5)  a.  /egi/   [egʲi]   'child'
         /ege/   [egʲe]   'beer'
     b.  /egu/   [egʷu]   'mud'
         /ego/   [egʷo]   'grass'
     c.  /ega/   [ega]    'stranger'

So far we have examined fairly representative examples of palatalization, those involving changes that are common across the world's languages. It is worth contrasting these examples with data from two Southern Bantu languages, Tswana and Swati, shown in (6). Here, labials alternate with (labialized) palato-alveolars in the context of the passive suffix /-wa/. Consonants of other places remain unaffected (cf. Tswana [ratwa] /rat-wa/ 'love'; Swati [pʰegwa] /pʰeg-wa/ 'cook'). These alternations are also considered to manifest palatalization (Halle 2005; Bateman 2007); however, the process differs from the cases above in several important respects. First, the targets are labial consonants to the exclusion of the other places, and the labials change their place of articulation, something that is usually restricted to coronals and dorsals. (Coronals, however, do undergo palatalization in other contexts.) Second, the trigger of the process, the suffix /-wa/, does not contain an overt or even an underlying front vocoid, but presumably develops one as a result of labial dissimilation (Kotzé and Zerbian 2008). Third, the process in Swati is not strictly local, as it can target labials occurring in non-adjacent syllables. All this makes palatalization in Tswana and Swati (and similar processes in other Southern Bantu languages) a relatively "unnatural" case in the typology of palatalization, as we will see below.

b.

Tswana p – Œw ph – Œhw b – –w Swati b–– + – Œ’ ph – œ

non-passive

passive

lDpa tlh Ápha roba

lD(Œ wa tlh Á(Œ hwa rD(–wa

/lDp-wa/ /tlh Áph-wa/ /rob-wa/

‘request’ ‘choose’ ‘break’

hambse+enta siph ula

han–wa seŒ’entwa siœulwa

/hamb-wa/ /se+ent-wa/ /siph ul-wa/

‘go’ ‘work’ ‘uproot’

The examples from English, Russian, Nupe, Tswana, and Swati provide a snapshot of a vast range of variation found in palatalization processes both within and across languages. Of particular interest here are the variation and preferences in


terms of featural composition of segmental classes involved in palatalization – its triggers, targets, and outputs. There are clearly many other theoretically important issues relevant to palatalization – including those of allophonic/phonemic status, morphological or lexical conditioning, locality, etc.; these, however, are beyond the scope of this chapter (on some of these topics, see chapter 29: secondary and double articulation; chapter 81: local assimilation; chapter 88: derived environment effects; chapter 121: slavic palatalization).

2.2 Typological patterns of palatalization

To better understand the complexity of the phenomenon, it is useful to examine cross-linguistically more and less common patterns of palatalization. The following discussion is based on the author’s survey of synchronic palatalization processes, with some reference to the earlier often-cited survey by Bhat (1978), and the more recent one by Bateman (2007).2 (See also Chen 1973, Kochetov 2002, and Stadnik 2002 for surveys specific to certain geographic areas or phonological contrasts.) The survey covers cases of palatalization as exhibited by segmental alternations (as opposed to phonotactic restrictions or historical changes) selected from the theoretical phonological literature for the purposes of this chapter. Altogether, it contains data from 64 languages and dialects belonging to 17 language families and 25 genera.

We will begin with observations about targets and outcomes of palatalization, and the corresponding general types of palatalization processes. The focus will be on changes targeting labial, (anterior) coronal, and dorsal stops. Table 71.1 represents three general processes of palatalization: secondary palatalization (Type I), palatalization resulting in a posterior coronal (palato-alveolar or (alveolo-)palatal; Type II), and palatalization resulting in an anterior coronal (alveolar or dental; Type III). The Type II process can produce either non-sibilants or sibilants, resulting in two subtypes ((a), (b)). The same subdivision is given for the Type III process. Columns on the right schematically present typical (or expected) changes involved in each process, depending on the target consonant – labial, coronal, or dorsal. (To avoid cluttering the table, voiceless stops/affricates stand here for segments regardless of their laryngeal specification, e.g. “tʃ” can include [tʃ], [tʃʼ], [tʃʰ], or [dʒ]; changes in continuancy are not noted, e.g. “k → ts” includes the changes to [ts] or [s].) To facilitate the comparison, each change is labeled as “common,” “rare,” or “absent,” indicating its relative frequency in the sample, based on the numbers of separate language families and genera (given in square brackets), rather than individual languages.3 (See the online version of this chapter for examples of the processes.)

Table 71.1 Targets and outputs of palatalization (alternations only) and corresponding processes, and their relative frequency in world languages (based on numbers of language families and genera, given in square brackets; see the text for details)

Type   Palatalization              labial            coronal            dorsal
I      Secondary                   p → pʲ            t → tʲ             k → kʲ
                                   common [6, 9]     common [6, 8]      common [6, 7]
II     To a posterior coronal
       a. to a non-sibilant        p → c             t → c              k → c
                                   rare [1, 1]       common [7, 8]      common [4, 6]
       b. to a sibilant            p → tʃ            t → tʃ             k → tʃ
                                   rare [1, 1]       common [9, 14]     common [4, 7]
III    To an anterior coronal
       a. to a non-sibilant        p → t             n/a                k → t
                                   absent [0, 0]                        absent [0, 0]
       b. to a sibilant            p → ts            t → ts             k → ts
                                   rare [1, 1]       common [3, 6]      rare [2, 4]

Note that the typology of processes in Table 71.1 is compatible with the typology of changes in Bhat (1978), whose terms are based on SPE (Chomsky and Halle 1968) feature terminology (see §3.1). Taking coronal targets as an example, Type I and Type IIa correspond to Bhat’s process of alveolar “raising” without “spirantization” (i.e. [−high] → [+high] in SPE notation), Type IIb corresponds to “raising” accompanied by “spirantization” (i.e. [−high, −strident] → [+high, +strident]), and Type IIIb corresponds to “spirantization” without “raising” (i.e. [−strident] → [+strident]). Bhat’s terms are not fully appropriate for our purposes, as they do not distinguish between secondary and place-changing palatalization, in addition to being tied to a specific feature framework. Place-changing processes (Types II and III) involving non-coronals have also been referred to as “coronalization” (Hume 1992; Flemming 2002).

What is interesting about the different types of palatalization shown in Table 71.1 is that certain targets and outputs (and the corresponding processes) are common, while others are rare or unattested. Overall, there is a tendency for place-changing palatalization to result in sibilants rather than non-sibilants. While both sibilants and non-sibilants are possible outputs for Type II palatalization, only sibilants are possible for Type III palatalization. Another important observation is that labials as targets of place-changing palatalization processes (Types II and III) are exceedingly rare, compared to coronals and dorsals. The only examples of labial place-changing palatalization (with stops as targets) in the sample are Southern Bantu languages (see (6)) and Moldova Romanian (e.g. [plop] ‘poplar tree’, [ploc] /plop-i/ (plural); [drob] ‘block (of salt)’, [droɟ] /drob-i/ (plural); Bateman 2007). Among the other two place categories, coronals as targets tend to occur overall more frequently than dorsals. Notably, the most common palatalization process is a change of alveolars to palato-alveolars (Type IIb), attested in nine language families and 14 genera. Further examination of the cases (Table 71.2) shows that in a given language, coronals and dorsals can be targeted by palatalization independently or together, while labials are targeted only when coronals (and dorsals) are targeted too (but see (6)). This suggests implicational relations among targets of palatalization, with place-changing palatalization of labials implying palatalization of coronals and dorsals (cf. Chen 1973; Foley 1977; Bhat 1978). The results of the survey are also indicative of a greater propensity of coronals to palatalization, compared to dorsals. This is consistent with some previous studies (cf. Bateman 2007), while in part contradicting those based on more limited language samples or mixed synchronic/diachronic data (Chen 1973; Foley 1977; see §3.2).

2 The latter two studies have certain limitations with respect to the goals of this chapter. Although quite extensive, Bhat’s survey does not clearly distinguish between synchronic processes and historical sound changes, of which only the former are relevant here. Bateman’s survey, while drawing on a genetically balanced language sample and focusing on synchronic processes, is restricted to only certain types of palatalization processes, leaving out, for example, place-changing palatalization of labials and changes resulting in anterior coronals. The latter types are important for our discussion, as these are the ones that have been most problematic for theoretical accounts.

3 As an example, Type IIb coronal palatalization in English (1) and Russian (4b) represents a single case of palatalization at the level of family (Indo-European), and two separate cases at the level of genus (Germanic and Slavic). This allows for an estimation of cross-linguistic frequency that is relatively conservative and less biased toward certain language families or genera. For expository reasons, “common” changes are defined as occurring in three or more families, and “rare” changes as occurring in one or two families. The study uses the language classification from Haspelmath et al. (2005).
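The counting procedure in note 3 is straightforward to make concrete. The sketch below is a minimal Python illustration of the labeling logic, using invented toy entries rather than the chapter’s actual survey database:

    from typing import NamedTuple

    class Case(NamedTuple):
        language: str
        family: str
        genus: str

    def frequency_label(cases: list[Case]) -> str:
        """'common' = three or more families; 'rare' = one or two; 'absent' = none.
        The bracketed figures report distinct families and genera, as in Table 71.1."""
        families = {c.family for c in cases}
        genera = {c.genus for c in cases}
        verdict = "common" if len(families) >= 3 else ("rare" if families else "absent")
        return f"{verdict} [{len(families)}, {len(genera)}]"

    # English and Russian Type IIb palatalization count as one family
    # (Indo-European) but two genera (Germanic and Slavic):
    print(frequency_label([
        Case("English", "Indo-European", "Germanic"),
        Case("Russian", "Indo-European", "Slavic"),
    ]))  # -> rare [1, 2] on this two-case toy sample

Counting families rather than languages, as in the note, is what keeps a heavily sampled family from inflating the frequency of a single change.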

Table 71.2 Targets of palatalization and their relative frequency in world languages

Target consonants               Occurrence
coronal only                    common [13, 16]
dorsal only                     common [4, 6]
labial only                     absent [0, 0]
coronal and dorsal              common [3, 5]
coronal and labial              rare [1, 1]
dorsal and labial               absent [0, 0]
coronal, dorsal, and labial     common [6, 9]

Turning to triggers of palatalization, Table 71.3 shows that these include front vocoids (vowels and glides) that differ in height and high vocoids that differ in backness. Non-high back vowels do not trigger palatalization. Among all the triggers, high front /i/ and /j/ are the most likely triggers, followed at a considerable distance by mid front vowels. Recall that these are the triggers in all the examples shown above (leaving aside the more opaque Bantu cases). Examples of the rare types of palatalization triggered by front low vowels or high back vocoids include Slovak ([vnuːtʃa] /vnuːk-æ/ ‘grandson-dim’; Hume 1992) and Lomongo ([kondʒwá] /kond-wa/ ‘cover with sand-pass’; Kenstowicz and Kisseberth 1977).

Table 71.3 Triggers of palatalization and their relative frequency in world languages

Trigger backness    Trigger height        Triggers        Occurrence
front only          high only             i/j             common [17, 24]
front only          high and mid          i/j, e/ɛ        common [4, 5]
front only          high, mid, and low    i/j, e/ɛ, æ     rare [1, 1]
front only          mid/low               e/ɛ/æ           absent [0, 0]
front and back      high only             i/j, ɨ/u/w      rare [1, 1]
back only           high only             ɨ/u/w           absent [0, 0]
back                mid/low               ɑ/ə/o/ɔ         absent [0, 0]


It is important to note that in a given language, low and mid front vowels apparently only trigger palatalization if high front vowels trigger it too.4 Similarly, non-front high vocoids are triggers when front high vocoids are also triggers. These observations are indicative of implicational relations between vowel height/frontness and the ability to trigger palatalization (cf. Chen 1973; Foley 1977; Bateman 2007). Further, there are some interesting dependencies between triggers and targets. Coronals tend to be targeted by high vocoids, and especially by /j/, while dorsals are almost exclusively targeted by /i/ and other front vowels (see Bhat 1978: 52–56 for details).

In terms of directionality, palatalization processes can be regressive (right-to-left) or progressive (left-to-right). Both types are quite common, with regressive palatalization attested in eight families and sixteen genera, and progressive palatalization in nine families and nine genera (mainly in the Americas). Some languages show both types, although in different morphological/phonological contexts, as, for example, Chimalapa Zoque (7a) (Kenstowicz and Kisseberth 1977). The overwhelming majority of palatalization processes are local, triggered by immediately adjacent vocoids. In a few cases, however, palatalization can apply across a consonant, as in Barrow Inupiaq (7b) (Archangeli and Pulleyblank 1994), or across one or more syllables, as in Harari (7c) (Rose 1997; cf. Swati in (6)).

(7) a. Chimalapa Zoque
       tɨtʃi           /tɨts-i/          ‘dry’                  cf. tɨts-pa ‘is drying’
       kuj tʃets-pa    /kuj tsets-pa/    ‘is carving wood’      cf. tsets-pa ‘is carving’

    b. Barrow Inupiaq
       isiʁsuq         /isiq-tuq/        ‘be smoky (3sg int)’
       isiʁʎuni        /isiq-luni/       (3sg irrealis)

    c. Harari
       kitʃəbi         /kitəb-i/         ‘write! (2sg fem)’     cf. kitəb (2sg masc)

2.3 Summary

Our examination of particular cases and cross-linguistic patterns of palatalization processes reveals a number of asymmetries involving targets, triggers, and outputs. Some of the key observations are restated in (8). First, certain places of articulation are better targets of palatalization than others (8a). Among the three places, the key difference is between coronals and dorsals on one hand and labials on the other. Another important difference is between coronals and non-coronals. Second, certain vowels/glides are better triggers of palatalization than others (8b). In particular, front vowels are considerably better triggers than non-front vowels, and among the former class, high vowels are the best triggers, and low vowels the worst. Third, there are important dependencies between the vowel height (and syllabicity) of triggers and the place of targets (8c). Fourth, outputs of palatalization are either palatalized consonants or coronals; among the latter, posterior coronals and/or sibilants are the preferred outputs (8d). In addition, all but a few cases of palatalization are local, triggered by immediately preceding or following vocoids.

4 Bhat (1978) mentions some cases where mid front vowels palatalize velars to the exclusion of high front vowels. None of these cases, however, appears to involve alternations.


(8) Main typological generalizations about palatalization (“>” = “better than, more likely than”)

    a. Target place asymmetry (place-changing palatalization)
       i. coronal, dorsal > labial
       ii. coronal > dorsal, labial
    b. Trigger asymmetry
       i. (high) front > high central/back
       ii. high front > mid front > low front
    c. Trigger–target dependencies
       i. front vowels and dorsals
       ii. high vocoids (especially /j/) and coronals
    d. Output asymmetries
       i. posterior > anterior
       ii. sibilant > non-sibilant

It should be noted that, as a type of consonant–vowel interaction, palatalization is somewhat unique (see chapter 75: consonant–vowel place feature interactions). Although other consonant-to-vowel assimilation processes commonly result in consonants with secondary vowel articulations (t → tʷ / __ u), they hardly ever exhibit synchronic shifts in primary place of articulation (t → p or k / __ u) (cf. Ní Chiosáin and Padgett 1993). If such alternations are indeed observed, they appear to imply palatalization alternations, as in Ikalanga, where vowels (through gliding) trigger both place-changing velarization and palatalization (9) (Mathangwane 1996). In addition, unlike palatalization, hardly any other vowel–consonant interactions affect consonant manner features in a way that, for example, produces sibilant affricates or fricatives from stops. Equally implausible are processes that would produce the reverse effect of place-changing palatalization, for example, converting sibilant coronals to coronal or non-coronal stops (tʃ → t or k). All this underscores the seemingly unique place of palatalization in the typology of consonant–vowel interactions, and its highly asymmetric nature.

(9)      plain    diminutive
    a.   ʒínó     ʒiŋwáná     /ʒino-ana/    ‘tooth’
         ʃamú     ʃaŋwáná     /ʃamu-ana/    ‘lash’
    b.   báni     baɲáná      /bani-ana/    ‘forest’
         semé     seɲáná      /seme-ana/    ‘basket’

What makes palatalization so special? Why are some patterns of palatalization cross-linguistically more common, while other patterns are rare or unattested? It has long been known that the naturalness of many palatalization processes has its roots in phonetics – articulation and perception. As Hyman (1975: 171) noted in his discussion of velar palatalization, gradient fronting of a [k] before [i] is a phonetic process that is universal, shared by all languages. The two articulatory gestures – the tongue body backing for [k] and the tongue body fronting for [i] − simply cannot be co-produced without this co-articulatory adjustment. In this sense, the process is automatic, part of the “universal phonetics” (although the degree of velar fronting can be language-particular). Further, fronted velars or


palatals tend to be produced with greater frication at the release, which makes them acoustically more similar to palato-alveolar affricates. Given this acoustic similarity, the former are often auditorily confused with the latter (but not the reverse), resulting in common historical shifts of velars to palato-alveolars (Guion 1996). The change [k] → [tʃ] before [i] is therefore motivated by both articulation and perception. Similar articulatory, and possibly perceptual, reasons underlie the change of [t] → [tʃ] before [j] or [i] – presumably arising due to overlap of the tongue tip and tongue body gestures, producing a more retracted laminal constriction with a turbulent sibilant-like release (Zsiga 1993). In contrast, the articulation of [p] before [j] or [i] presents no articulatory difficulties, as the two gestures – the lips and the tongue body – are physically uncoupled and therefore can be freely co-produced. Despite some frication at the release, [pʲ] is still quite acoustically different from [tʃ] and is thus less likely to be confused with the latter. This suggests that, unlike dorsal and coronal palatalization, labial palatalization is phonetically much less plausible, and therefore phonologically less natural. Indeed, comparative historical evidence suggests that cases of labial palatalization have arisen through “telescoping” – a series of historical changes involving glide strengthening and cluster simplification (Hyman 1975; Bateman 2007). In fact, different stages of these developments are often reflected in closely related languages or dialects, as is the case with Tswana and Moldova Romanian (10) (Udler 1976; Kotzé and Zerbian 2008). Finally, the lack of phonetic motivation can explain some of the asymmetries in triggers of palatalization (e.g. high front vocoids vs. low and back) and the unnaturalness of changes reverse to palatalization (e.g. [tʃ] → [t] or [k]).

(10) a. Tswana         Northern Sotho    Lobedu
        -gatʃwa        -gapʃa            -habja       ‘request (pass)’
        /-gap-wa/

     b. Moldova Romanian dialects
        Standard       Northern Bukovina    Chernovcy
        aric           aripc                aripʲ      ‘wing (pl)’
        /arip-i/

While the diachronic phonetic sources of palatalization have rarely been debated, most phonologists would agree that at least some of the common patterns and important asymmetries in palatalization in (8) (or in any phonological process) require synchronic explanation (but see §6). Further, regardless of historical changes, it is commonly agreed that synchronic grammars should have ways of modeling palatalization alternations (as in English or Russian) or allophonic variation (as in Nupe). Yet the question of how to represent the process synchronically while capturing the relevant generalizations has proven to be difficult, if not impossible. It is remarkable that, almost 40 years after the first generative account of English velar palatalization in Chomsky and Halle (1968), Halle (2005: 23) concedes that “to this time there has been no proper account of palatalization that would relate it to the other properties of language, in particular, to the fact that it is found most commonly before front vowels.” This is despite the fact that palatalization has received extensive treatment in early generative phonology, autosegmental phonology, and more recently Optimality Theory. The goal of this chapter is to review some of the influential theoretical treatments of palatalization as a synchronic process, while focusing particularly on distinctive features and feature geometry representations as ways of capturing the naturalness


of common palatalization processes. As we will see, some of the problems encountered by formal models of palatalization can be attributed to the complexity of the phenomenon; other difficulties, however, seemingly stem from the reliance on a universally fixed, closed set of rigid, unidimensional representations. We will also examine other formal ways of capturing relevant generalizations using constraints and constraint hierarchies or more phonetically detailed representations in Optimality Theory, and conclude with a brief review of some recent alternative proposals that challenge traditional generative assumptions.

3 Palatalization in early generative phonology

3.1 Distinctive features and marking conventions of SPE

One possible way of capturing the naturalness of phonological processes is through stating natural classes of segments involved (as triggers, targets, and outputs), using distinctive features (chapter 17: distinctive features). The concept of natural classes encoded by a universal set of features has been an important part of generative phonology since Chomsky and Halle (1968; SPE). The distinctive features in SPE were exclusively articulatorily based (unlike the auditorily based features of Jakobson et al. 1952). One proposal that has important theoretical consequences for our discussion is the use of features [±high], [±back], and [±low] for both vowels and consonants. Among the latter, these features are used as “a natural manner to characterize subsidiary consonant articulations such as palatalization, velarization, and pharyngealization” (SPE: 305), which are defined as [−back, −low], [+back, −low], and [+back, +low], respectively. This proposal was intended to capture the fact that secondary articulations tend to occur before vowels of the same qualities, for example, palatalized consonants before front vowels (cf. Nupe (5)). The feature specification thus allowed one to state these restrictions as “an obvious case of regressive assimilation” (SPE: 308). In addition, the proposal captured a cross-linguistic observation that the three types of secondary articulations are mutually exclusive, since, for example, palatalized consonants cannot be simultaneously velarized or pharyngealized. These feature specifications also helped in the formulation of typical vowel raising and fronting changes in the environment of palatalized consonants as a simple case of assimilation. An example from Russian is shown in (11a), where underlying vowels /e/ and /a/ shift to [i] when occurring after a palatalized consonant in an unstressed syllable (Kenstowicz and Kisseberth 1979). An SPE-style rule capturing the process is stated in (11b). (11)

a.   1st plural    1st singular
     ˈpʲiʃim       pʲiˈʃu       ‘write’
     ˈmʲetʃim      mʲiˈtʃu      ‘throw’
     ˈvʲaʒim       vʲiˈʒu       ‘bind’
     ˈmaʃim        maˈʃu        ‘wave’

b.   [+syll, −high] → [+high, −back] / [−syll, +high, −back] __
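To make the mechanics of such a rule explicit, here is a small sketch of how (11b) applies — my own schematic Python rendering, not a formalism used in the chapter — with segments as simplified bundles of feature values:

    def matches(segment: dict, spec: dict) -> bool:
        """A segment satisfies a specification if it agrees on every listed feature."""
        return all(segment.get(f) == v for f, v in spec.items())

    def apply_rule(segments, target, change, left_context):
        """Apply an SPE-style rule A -> B / C __ from left to right."""
        out = [dict(s) for s in segments]
        for i in range(1, len(out)):
            if matches(out[i], target) and matches(out[i - 1], left_context):
                out[i].update(change)
        return out

    # (11b): [+syll, -high] -> [+high, -back] / [-syll, +high, -back] __
    target = {"syll": "+", "high": "-"}
    change = {"high": "+", "back": "-"}
    left_context = {"syll": "-", "high": "+", "back": "-"}

    # A palatalized consonant followed by a non-high vowel, as in the 'bind' forms:
    cj = {"syll": "-", "high": "+", "back": "-"}   # e.g. [vʲ]
    a = {"syll": "+", "high": "-", "back": "+"}
    print(apply_rule([cj, a], target, change, left_context)[1])
    # -> {'syll': '+', 'high': '+', 'back': '-'}, i.e. the vowel surfaces as [i]

The point of the SPE specification is visible in the code: because the consonant and vowel share the same [high]/[back] features, the structural change simply copies values already present in the context.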

Note that in this respect the SPE feature system is a step forward compared to Jakobson et al.’s (1952) system, where consonants with secondary articulations and corresponding vowels did not share the same feature values. For example,


palatalized consonants were specified for [+sharp] (and [+grave] or [−grave]), while (high) front vowels were [−sharp] and [−grave]. The SPE specification for [+high, −back] was not limited to high front vowels and palatalized consonants, but also extended to palatals and postalveolars. This is important, because it naturally grouped together common triggers and outputs of palatalization processes (Types I and II). The feature system, however, treated palatals as non-coronals and grouped them with velars: both are [−coronal, −anterior, +high] and differ in [±back]. Notably, palatalized velars were not distinguished from palatals (both are [−coronal, −anterior, +high, −back]). These two specifications created certain problems. First, Type I (secondary) palatalization was represented as two different featural processes: a change from [−high] to [+high] (raising) for coronals and labials (which are [±coronal, +anterior, −high, −back]), and a change from [+back] to [−back] (fronting) for velars. This also predicted – partly incorrectly – that palatalization of velars should be triggered by front vowels only (regardless of height), while secondary palatalization of coronals and labials should be triggered by high vowels only (regardless of frontness/backness). Another seemingly non-trivial problem is revealed by the treatment of place-changing palatalization processes (Types II and III). Consider the rules proposed to account for English coronal palatalization (12a) and velar softening (12b) (see (1) and (2)). As Chomsky and Halle note (SPE: 424), palatalization is an intrinsically assimilatory process. Nothing in the rules below, however, captures its assimilatory nature. In fact, the specifications of triggers and outputs in each of the rules do not share a single feature. (12)

a. [−son, +cor, −strid] → [−ant] / __ [−cons, −voc, −back] [−cons, −stress]

b. [−cont, −ant] → [+cor, +strid] / __ [−cons, −back, −low]

Acknowledging this and other problems arising from the excessively permissive rule notation mechanism as a “fundamental theoretical inadequacy” (SPE: 400), Chomsky and Halle propose to supplement rules and feature specification with a substantive component – a theory of markedness consisting of a list of “marking conventions.” They illustrate the application of these conventions in rules representing historical palatalization processes in Slavic. The so-called “first velar palatalization” in Slavic (Type IIa; [k g x] → [tʃ dʒ ʃ]) can be stated as a “simple assimilation rule” (SPE: 400) by which velars ([−anterior]) acquire the [−back] value from following front vowels (13a). The change of stops to strident coronal affricates and fricatives ([+coronal, +delayed release, +strident]) is not an assimilatory effect, but is due to an application of relevant marking conventions (13b). According to these conventions, a postalveolar affricate [tʃ] is less marked than a palatal stop [c] or a palato-alveolar stop [ṯ], and therefore “when velar obstruents are fronted, it is simpler for them also to become strident palato-alveolars with delayed release” (SPE: 423). Thus, the unmarked value of the feature [coronal] ([ucoronal]) for [−back, −anterior] consonants is [+coronal], and the unmarked values for the other two features of posterior coronals are [+delayed release] and [+strident] (chapter 12: coronals; chapter 22: consonantal place of


articulation). Similar assimilation rules and marking conventions were proposed for the Slavic “second velar palatalization” and language-specific realizations of “dental palatalization” (Types IIa, IIb, and IIIb). (13)

a. [−ant] → [−back, +cor, +del rel, +strid] / __ [−cons, −back]

b. [ucor] → [+cor] / [__, −back, −ant]
   [udel rel] → [+del rel] / [__, −ant, +cor]
   [ustrident] → [+strident] / [__, +del rel, +cor]
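The division of labor between rule and conventions can be sketched as follows — a toy Python encoding of my own, not anything in SPE itself: the rule contributes only the assimilatory change, and the conventions fill in the remaining unmarked values, deriving a strident palato-alveolar:

    def apply_marking_conventions(seg: dict) -> dict:
        """(13b): fill in unmarked values for a fronted non-anterior consonant."""
        seg = dict(seg)
        if seg.get("back") == "-" and seg.get("ant") == "-":
            seg["cor"] = "+"          # [ucor] -> [+cor]
        if seg.get("ant") == "-" and seg.get("cor") == "+":
            seg["del_rel"] = "+"      # [udel rel] -> [+del rel]
        if seg.get("del_rel") == "+" and seg.get("cor") == "+":
            seg["strid"] = "+"        # [ustrident] -> [+strident]
        return seg

    def first_velar_palatalization(velar: dict) -> dict:
        """(13a): the rule itself only assimilates [-back] before a front vocoid."""
        fronted = dict(velar, back="-")
        return apply_marking_conventions(fronted)

    k = {"ant": "-", "back": "+", "cont": "-"}
    print(first_velar_palatalization(k))
    # -> [-ant, -back, +cor, +del_rel, +strid]: /k/ surfaces as the affricate [tʃ]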

The combined use of rules and marking conventions made it possible to formulate palatalization as an assimilatory process. Yet it remains unclear when marking conventions should be invoked in general, and there are questions about the appropriateness of some of the conventions for particular cases of palatalization. For example, while it is true that postalveolar affricates are less marked (at least less cross-linguistically common) than palatal or postalveolar stops, the same is true, even more so, of the unmarked status of alveolar or labial stops (both [+anterior, −back]) – segments that are never produced by palatalization. Further, the account views all place-changing palatalization processes as consisting of two consecutive stages – fronting or secondary palatalization followed by simplification (e.g. [k] → [c] → [tʃ]; [t] → [tʲ] → [tʃ]). While these stages may correctly recapitulate the historical development of some palatalization processes (as in Slavic), it can be argued that they are simply unnecessary as statements of the synchronic rules of a language.

3.2 Naturalness and phonological rules

Questions about alternative ways of constraining the excessively powerful rule machinery of SPE were central to the theoretical debate in the late 1960s and the 1970s (see Hyman 1975 for a review). Why, for example, is the palatalization rule in (14a) cross-linguistically common and natural, while the exact reverse of it (14b) is highly unlikely and unnatural? From the point of view of computational simplicity, both rules are equally simple, involving the same number of features. The fact that the formal theory had no way of distinguishing between natural and unnatural rules was seen by some phonologists as highly problematic. In response to this, Schachter (1969) proposed to encode naturalness directly into phonological rules, introducing the feature specification n, marking feature values that are “natural” for a given process. Given this, the rule of velar palatalization can be rewritten as (14c), stating that the natural value of the feature [±back] before front vowels is [−back]. Features marked as natural are not counted by the rule simplicity metric, thus rendering the rule in (14c) less “costly” than the rule in (14b). Taking this idea further, Chen (1973) proposed to formalize the target place and trigger height asymmetries of palatalization (8a) and (8b.ii) as part of special metarules – language-specific rules supplemented with universal constraint statements. For example, his meta-rule in (14d) states that consonants become palatalized before front vowels ([1 back] = {i, e, æ}), however, with certain implicational relations: (i) if a consonant of a given point m along the backness scale undergoes palatalization, so does the consonant higher on the scale (i.e. [p] implies [t] and [k], and [t] implies [k]; cf. (8a), but see (8b)); (ii) if a consonant undergoes palatalization by a vowel of a given point n of the height scale, it also does so before any vowel higher on that scale (i.e. [æ] implies [e] and [i], and [e] implies [i]; cf. (8b.ii)).


In a related proposal, Foley (1977) formulated “synchronic truth statements” – implicational relations among triggers and targets of palatalization – and provided detailed calculations of relative probability of palatalization depending on the target place and trigger height (among other factors), as shown in (14e). (14)

a. C[+back] → [−back] / __ V[−back]   (k → tʃ / __ {i, e, æ})
b. C[−back] → [+back] / __ V[+back]   (tʃ → k / __ {u, o, a})
c. C[+back] → [nback] / __ V[−back]
d. C[αback] → palatalized / __ V[1back, βhigh]
   language-universal constraints: α ≥ m, β ≥ n, where
   C backness scale: 1 [p], 2 [t], 3 [k]; V backness scale: 1 [i, e, æ], 2 [u, o, ɑ]; V height scale: 1 [æ], 2 [e], 3 [i]
e. Relative probability scale of palatalization:
   kj > ki, tj > ke, ti, pj > kæ, te, pi > tæ, pe > pæ
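Chen’s universal constraints on (14d) amount to a requirement that a language’s palatalization pattern be “upward closed” on the two scales. A small sketch — my own executable encoding of this idea, with the scale values from (14d) — makes the implicational logic explicit:

    C_BACKNESS = {"p": 1, "t": 2, "k": 3}   # consonant backness scale in (14d)
    V_HEIGHT = {"æ": 1, "e": 2, "i": 3}     # front vowel height scale in (14d)

    def obeys_metarule(pattern: set) -> bool:
        """a >= m, b >= n: if (C, V) palatalizes, so must every pair whose consonant
        is at least as far back and whose vowel is at least as high."""
        for (c, v) in pattern:
            for c2, a in C_BACKNESS.items():
                for v2, b in V_HEIGHT.items():
                    if a >= C_BACKNESS[c] and b >= V_HEIGHT[v] and (c2, v2) not in pattern:
                        return False
        return True

    print(obeys_metarule({("k", "i")}))              # True: the minimal pattern
    print(obeys_metarule({("t", "i"), ("k", "i")}))  # True: coronals imply dorsals
    print(obeys_metarule({("t", "e")}))              # False: (t,i), (k,e), (k,i) missing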

These proposals, despite some empirical inadequacies, are interesting as the first relatively systematic attempts to directly incorporate substantive factors into the formal computational mechanism. The use of phonetic naturalness as a formal phonological criterion, however, did not receive much support in mainstream generative phonology at the time, as it was difficult to reconcile with the fact that languages allow both natural and unnatural rules, seemingly without preference for the former. If naturalness considerations were part of the grammar, why would some languages maintain such phonetically implausible rules as labial palatalization (cf. Hyman 1975 on the Bantu rule [p] → [s] / __ [i])?

4 Feature geometry

New ways of constraining the application of phonological rules were brought by the framework of feature geometry (Clements 1985; Sagey 1986; among others; see also chapter 27: the organization of features). More elaborate, geometrically organized autosegmental featural representations were expected to delimit the typology of phonological rules, distinguishing between possible, natural and impossible, unnatural processes. Within feature geometry, it is useful to distinguish two main approaches to palatalization. Both view palatalization as an assimilatory phenomenon, but differ in the feature specification of the main triggers of the process – front vocoids. The first approach treats front vocoids as [dorsal], essentially following the SPE tradition. The second approach specifies front vowels as [coronal], in an attempt to state some of the generalizations missed by the SPE-style featural accounts. Palatalization is thus modeled as either spreading [dorsal] or spreading [coronal].

4.1 Palatalization as spreading [dorsal]

One key proposal of this approach, initially developed in Sagey’s dissertation (1986), is that vowels, glides, dorsal consonants, and secondary articulations like palatalization and velarization are characterized by the [dorsal] node with features [±high, ±back, ±low]. In contrast to SPE, labials and coronals are not specified for these features, but are characterized by [labial] and [coronal] nodes respectively. The feature [anterior] in the new approach is limited to coronals only, being specified


under the [coronal] node. Palatalized consonants in this system are viewed as complex segments with temporally unordered primary [labial] or [coronal] nodes and the secondary [dorsal] node specified for [+high, −back] (with “designated articulators” – primary nodes – marked diacritically with a pointer from the Root node): (15)

     Root (x)
       |
     Place
       ├── [labial] or [coronal]   (designated articulator)
       └── [dorsal] → [+high], [−back]

As in SPE, this system does not allow for palatalized velars distinct from palatals: both are designated as [kʲ], [gʲ], etc. and represented as simple segments having Place[dorsal[+high, −back]]. Given these feature specifications, palatalization processes are treated as spreading [+high, −back], either with or without the [dorsal] node. Sagey presents data from Zoque to illustrate the application of palatalization. As shown in (16), the palatal glide /j/ triggers palatalization of following consonants by adding a secondary palatal articulation to labials and velars, and changing anterior coronals to posterior coronals ([ṯ] = a [−anterior] alveolo-palatal stop). (16)

a. /j-pata/     pʲata     ‘his mat’
b. /j-kama/     kʲama     ‘his cornfield’
c. /j-tatah/    ṯatah     ‘his father’
   /j-tsʌhku/   tʃʌhku    ‘he did it’

Sagey analyzes palatalization as spreading [−back] from the preceding /j/ to all places (together with [dorsal] for labials and coronals), with a subsequent deletion of the glide. This analysis is illustrated for labials in (17). Spreading [−back] produces the required results for labials and dorsals, but not for coronals. For the latter, the addition of [dorsal[−back]] to [coronal[+anterior]] results in palatalized alveolars instead of the expected posterior coronals. What is necessary here, Sagey argues, is an additional process that would simplify the complex coronal structure Place[coronal[+anterior]]*[dorsal[−back]] to a posterior articulation, [coronal[−anterior]]. She refers to this process as “fusion,” by which coronal and dorsal nodes are fused to produce a simple posterior coronal. (17)

a. /j/: Place[dorsal[−back]]    /p/: Place[labial]
   (the [dorsal[−back]] structure of /j/ spreads to the Place node of /p/)

b. [pʲ]: Place[labial, dorsal[−back]]
   (the two segments now share the [dorsal[−back]] node; the glide is subsequently deleted)
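The effect of this spreading operation can be sketched computationally — a deliberately simplified Python rendering of (17), mine rather than Sagey’s own formalism — with place nodes as nested dictionaries:

    def spread_dorsal(trigger_place: dict, target_place: dict) -> dict:
        """Link the trigger's [dorsal] node (with its dependents, here [-back])
        to the target's Place node; the designated articulator is unchanged."""
        result = {k: dict(v) for k, v in target_place.items()}
        result["dorsal"] = dict(trigger_place["dorsal"])
        return result

    j_place = {"dorsal": {"high": "+", "back": "-"}}   # the glide /j/
    p_place = {"labial": {}}                           # plain /p/, articulator: labial
    print(spread_dorsal(j_place, p_place))
    # -> {'labial': {}, 'dorsal': {'high': '+', 'back': '-'}}: a complex segment [pʲ]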


In sum, while the model provides a relatively simple and intuitively appealing account of secondary palatalization, its treatment of place-changing coronal palatalization as a two-stage process is arguably problematic for the same reasons as the SPE solution. Even more problematic, critics would argue, is the analysis of place-changing velar palatalization (Types IIa and IIb) (cf. Lahiri and Evers 1991; Hume 1992). Such an analysis is not worked out by Sagey, but would presumably involve a subsequent change of palatalized velars/palatals (Place[dorsal[−back]]) to posterior coronals (Place[coronal[−anterior]]). This, however, cannot be motivated by structural complexity, as palatalized velars (or palatals) are assumed to be simple articulations. Given this, fusion is not an option. Nor is it clear why this “simplification” should result in a posterior coronal, as opposed, for example, to an anterior coronal or a labial. Further, the model remains silent about the role of sibilancy in outputs of palatalization, and does not predict palatalization to anterior coronals (Type III) as a possible option. Despite these and other limitations (see Kenstowicz 1994), the approach to palatalization as spreading [dorsal] has been relatively successful when dealing with contrastive palatalization, particularly with systems having both palatalized and velarized consonants. For example, both in Irish and in Russian, consonants in clusters assimilate to following consonants in secondary palatalization or velarization, and vowels get fronted or backed by adjacent palatalized and velarized consonants. These facts can be easily stated as spreading the [dorsal] node with either [−back] or [+back] (Ní Chiosáin 1991; Rubach 2000). The use of the binary feature [±back] is also useful when it comes to stating morphological “exchange rules” – reversing the palatality of consonants ([αback] → [−αback]) to mark certain morphological categories, as in Kildin Saami and Scots Gaelic (18) (Kert 1971; MacAulay 1992). (18)

a. Kildin Saami
   kebb      ‘illness (nom sg)’    kebʲbʲe    (dat/illat sg)
   kobʲbʲ    ‘pit (nom sg)’        kobba      (dat/illat sg)

b. Scots Gaelic
   maːɫ      ‘rent (nom sg)’       maːlʲ      (gen sg/nom pl)
   ahərʲ     ‘father (nom sg)’     ahər       (gen sg/nom pl)

4.2 Palatalization as spreading [coronal]

The approach to palatalization as spreading [coronal] was advanced to remedy some of the inadequacies of the [dorsal] spreading model. It develops the original insight of Clements (1976) that palatalization and coronality are related, and that front vowels and coronals should form a natural class. While treatments of palatalization as spreading [coronal] were advocated in a number of works (Mester and Itô 1989; Broselow and Niyondagara 1990; Clements 1991; Lahiri and Evers 1991), the most extensive development of the idea was presented in Hume’s dissertation (1992). Hume’s feature geometry model builds on Clements’s (1991) proposal to use distinct tiers for consonant and vowel places, C-Place and V-Place nodes. These separate tiers were introduced for reasons largely independent of modeling palatalization – to allow for cross-consonantal assimilatory effects (such as vowel harmony and umlaut). These structures also made it possible to represent consonants with secondary articulation as having both C-Place and V-Place nodes. The V-Place node of vowels included the features [±coronal] and [±dorsal], with


front vowels being [+coronal[−anterior]]. Height features were represented under a separate Stricture node, a property that will be relevant to our further discussion. Unlike the binary V-Place features, C-Place features were assumed to be primitive: [labial], [coronal], and [dorsal]. The [+anterior] under the [coronal] nodes referred to dentals and alveolars (as in Sagey’s framework), while [−anterior] referred to various posterior coronal articulations, crucially including palatals. Despite some formal inconsistency in the use of binary and primitive features, the model allows for representing front vowels, coronal consonants, and palatalized consonants as a natural class – all sharing [coronal], specified either at the V-Place or C-Place. This is clearly a considerable advance in the theoretical modeling of palatalization, as both secondary palatalization (Type I) and place-changing palatalization (Types IIa and IIb) can be stated as assimilatory processes, virtually involving a single step. According to this analysis, secondary palatalization is triggered by spreading V-Place[coronal[−anterior]] from a front vowel or glide to the consonant. In the case of place-changing palatalization, this spreading is accompanied by delinking the original C-Place and promoting V-Place to the position of the former. Changes in other features, such as stridency or continuancy, are not considered to be part of the assimilation process per se, being specified as a rule parameter (the “constriction status change”). Hume’s analysis of the two general processes is illustrated in (19).

(19) a. Constriction status change: No (secondary palatalization)
        The vocoid’s V-Place[coronal[−anterior]] node spreads to the consonant’s C-Place,
        which retains its original place features [F].

     b. Constriction status change: Yes (place-changing palatalization)
        The vocoid’s V-Place[coronal[−anterior]] node spreads to the consonant, the original
        C-Place features [F] are delinked, and the spread V-Place is promoted to C-Place.
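The two options in (19) can likewise be sketched as operations on C-Place/V-Place structures — again a toy Python encoding of my own, not Hume’s notation: without a constriction status change the spread node is added as a secondary articulation; with it, the original C-Place is delinked and the spread V-Place promoted:

    SPREAD_V_PLACE = {"coronal": {"anterior": "-"}}   # spread from a front vocoid

    def palatalize(c_place: dict, constriction_status_change: bool) -> dict:
        if not constriction_status_change:
            # (19a): secondary palatalization; original C-Place features retained
            return {"C-Place": dict(c_place), "V-Place": SPREAD_V_PLACE}
        # (19b): delink the original C-Place, promote the spread V-Place to C-Place
        return {"C-Place": dict(SPREAD_V_PLACE)}

    t = {"coronal": {"anterior": "+"}}
    print(palatalize(t, False))  # [tʲ]: C-Place [coronal[+ant]] plus secondary V-Place
    print(palatalize(t, True))   # posterior coronal: C-Place [coronal[-ant]]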

The key insight of the [coronal] spreading approach is that secondary palatalization and place-changing palatalization (also known as “coronalization” with noncoronal targets) are essentially the same general process. Hume notes that both are cross-linguistically common, and in fact may optionally apply under the same phonological conditions in a given language, as in Acadian French (20). (20)

a. ki ~ kʲi ~ tʃi        ‘who’        cf. [ka] ‘case’, [kut] ‘cost’, [kote] ‘side’
   ke ~ kʲe ~ tʃe        ‘quay’

b. tjed ~ tʲed ~ tʃed    /tied/    ‘lukewarm’    cf. [dyp] ‘dupe’, [typ] ‘type’

While these data nicely illustrate the similarity between secondary palatalization and place-changing palatalization, they also highlight some problems with the model. The triggers of the processes involving dorsals (20a) and coronals (20b) are different: front vowels in the first case and [j] in the second case. The target– trigger dependencies (see (8c)) are therefore not predicted by the model (cf.


Kenstowicz 1994). While the same is true for Sagey’s and Chomsky and Halle’s models, Hume’s model has a structural limitation – stricture features are assumed to be independent of place features, and therefore statements of such dependencies are not possible. In fact, the model predicts that vowel height is not a factor in the process and any front vocoid can equally well palatalize a consonant of any place of articulation. While correctly capturing the important role of front vowels in palatalization processes, Hume’s model does not allow for finer-grained frontness/height distinctions and rules out some of the attested processes. Among such processes are vowel raising next to palatalized consonants (as in Russian (11)) and a shift of velars to anterior coronals – the phenomena that could be relatively straightforwardly handled in the SPE approach.

4.3 Further developments

Subsequent work in the framework of feature geometry included attempts to resolve some of the problematic aspects of either approach, or to combine the insights of both. Lahiri and Evers (1991) propose to revise the [coronal] spreading approach by simplifying the two-tier place system and dispensing with the tier promotion mechanism used by Clements (1991) and Hume (1992). While maintaining the treatment of place-changing palatalization as due to spreading [Coronal[−anterior]], they analyze secondary palatalization as spreading [+high] (specified under the Tongue Position node) – a representationally elegant, yet arguably empirically problematic approach (Hume 1992; Jacobs and van de Weijer 1992). Calabrese (1993) uses alternative feature geometry representations and markedness filters in an attempt to address some of the issues largely overlooked in the Sagey and Hume approaches. Among these are the propensity of palatalization to produce sibilant affricates and fricatives, and the possibility of anterior coronals as outputs of the process. Jacobs and van de Weijer (1992) propose that front vowels are complex articulations, having both [coronal] and [dorsal] nodes (cf. Halle 2005). Palatalization may involve spreading only dorsal features, as in the case of velar fronting ([x] → [ç]), or both coronal and dorsal features, as in the case of place-changing palatalization of velars. This specification is also intended to characterize the class of coronals and dorsals as common targets of palatalization, as opposed to labials. While the move to specify front vocoids for both features adds flexibility to analyses of palatalization, its implications for analyses of other processes, such as vowel harmony and consonant harmony, and for the interactions of these processes with palatalization, still remain to be explored. For example, is the patterning of front vowels in palatalization (as triggers) consistent with their patterning in backness vowel harmony (as targets or transparent vowels)? Do palatalized consonants always block backness harmony (as in Turkish: Kenstowicz and Kisseberth 1979)? Why do front vocoids fail to block coronal consonant harmony in some languages (Sanskrit: Calabrese 1993), while triggering it in other languages (Rundi: Broselow and Niyondagara 1990)? Finally, none of the approaches reviewed above seems to address the important question of why palatalization is special among consonant–vowel interactions – that is, why front vowels systematically displace consonant primary place of articulation, while other vowels hardly ever do so (cf. Ní Chiosáin and Padgett 1993).


competing proposals. While the rigidly constrained featural representations combined with a set of simple operations have contributed to a more empirically adequate account of cross-linguistic patterns of palatalization, it became clear that the same representations have often stood in the way of further empirical coverage of the phenomenon (and sometimes created problems for accounts of other phenomena). This particularly applies to cases of palatalization that can be considered less phonetically natural, such as place-changing processes resulting in anterior coronals or involving labial consonants. Ironically, some of the processes that could be easily stated in the SPE-style approach (although not always in a natural and insightful way) could no longer be stated in the feature geometry approach without making some ad hoc stipulations. At the same time, the discrete and binary feature geometry representations have also turned out to be incapable of capturing finer-grained, presumably phonetically motivated, scalar phenomena and trigger–target dependencies. This subsequently led some phonologists to (at least partly) revise the traditional view of representations as fixed and universal, and to explore ways of capturing cross-linguistic generalizations and variability in phonological processes through underspecification (Steriade 1995), contrastive specification (Avery and Rice 1989), contrastive feature hierarchies (Dresher 2009), or a system of parameterized rules (Archangeli and Pulleyblank 1994) (see chapter 2: contrast; chapter 7: feature specification and underspecification).

5 Constraints and representations in Optimality Theory

The advent of Optimality Theory (OT; Prince and Smolensky 1993) brought back phonetic substance into phonology, now in the form of violable markedness constraints. While feature geometry-style representations and feature spreading assumptions have continued to play an important role in most OT accounts of palatalization, the task of capturing relevant feature asymmetries was partly relegated to constraints and constraint hierarchies. For example, the labial/non-labial target asymmetry could now be formalized as a universally fixed hierarchy of constraints prohibiting palatalized labials, dorsals, and coronals (21a) (Chen 1996; Rose 1997), while the trigger height asymmetry was represented as a fixed hierarchy of Palatalize (spread V-Place) constraints indexed for vowel height (21b) (Rubach 2003). Meshing these two hierarchies and combining them with different rankings of other markedness (e.g. Affrication and Posteriority; Rubach 2000) and faithfulness constraints can generate a restrictive factorial typology of palatalization patterns (cf. (14)), to some extent approximating the actual typology of palatalization (see §2). However, as the objects of constraint manipulation were the same inviolable feature geometry representations, some of the earlier noted problems persisted into OT analyses. (21)

a. *[lab]/VPl[cor] >> *[dors]/VPl[cor], *[cor]/VPl[cor]

b. Pal/j, Pal/i >> Pal/e >> Pal/æ
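How such fixed hierarchies generate a typology can be illustrated with a toy evaluation — a minimal Python sketch of my own, with drastically simplified stand-ins for the constraints in (21): with Pal/i ranked above a faithfulness constraint, but the top of the hierarchy in (21a) above both, /ti/ and /ki/ palatalize while /pi/ does not:

    RANKING = ["*[lab]/VPl[cor]", "Pal/i", "Ident(Place)"]

    def violations(inp: str, cand: str) -> dict:
        place = {"p": "lab", "t": "cor", "k": "dors"}[inp[0]]
        palatalized = "j" in cand
        return {
            "*[lab]/VPl[cor]": int(palatalized and place == "lab"),
            "Pal/i": int(not palatalized),      # demands V-Place spreading before /i/
            "Ident(Place)": int(palatalized),   # penalizes changing the consonant
        }

    def optimal(inp: str) -> str:
        candidates = [inp, inp[0] + "j" + inp[1:]]  # faithful vs. palatalized
        return min(candidates, key=lambda c: [violations(inp, c)[k] for k in RANKING])

    for form in ["pi", "ti", "ki"]:
        print(form, "->", optimal(form))   # pi -> pi, ti -> tji, ki -> kji

Reranking Pal/i below Ident(Place) in this sketch yields a language with no palatalization at all, which is the sense in which different rankings of the same fixed hierarchies generate a factorial typology.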

One possible solution to these problems was sought in the use of more detailed, phonetically realistic representations. Chen (1996), for example, uses articulatory gestures in conjunction with the traditional feature geometry representations to


analyze palatalization in Japanese, Polish, and Swati. All palatalization processes are assumed to involve spreading V-Place[coronal] from front vocoids (following Hume 1992) and resulting in abstract complex segments with a secondary place and the original primary place (e.g. [Dorsal]/V-Place[Coronal]). The cross-linguistic diversity in outputs of palatalization arises, according to Chen, from language-particular phonetic implementation via articulatory gestures (Browman and Goldstein 1989). Bateman’s (2007) OT analysis of cross-linguistic patterns of palatalization fully replaces the traditional feature geometry representations with articulatory gestures. She models secondary palatalization and place-changing palatalization as resulting from two different gestural coordination strategies: coordination of the vowel gesture at the release of the consonant gesture or at the center of it – producing either consonants with secondary articulation or simple articulations of intermediate constriction location and degree, respectively. The appeal of both proposals is in the use of independently motivated, physically concrete representations and a simple mechanism of gestural overlap. Problems arise, however, as before, with treatments of articulatorily less natural cases of dorsal and labial place-changing palatalization. In Chen’s analysis, velar palatalization results in the abstract phonological structure [Dorsal]/V-Place[Coronal], which can be phonetically interpreted as [kʲ], [c], [tʃ], or [tɕ], depending on the language. Yet it is not clear how this mapping would work in languages with more than one velar palatalization process (as in Russian (4)). In Bateman’s analysis, the process [k] → [tʃ] cannot be reasonably analyzed as resulting from gestural blending only (which would give only [kʲ] or [c]), and requires additional markedness stipulations (cf. Chomsky and Halle 1968 on [c] → [tʃ]). While Chen analyzes labial palatalization in Swati as a case of phonological neutralization to the default coronal place, this is not an option for the gestures-only framework of Bateman. As gestural blending is technically impossible between the mechanically uncoupled gestures of the lips and the tongue body, labial palatalization is in principle ruled out by the model. Bateman contends that the few attested cases of labial–coronal alternations (as in Southern Bantu and Moldova Romanian) can be explained diachronically. Yet, arguably, these cases still require a synchronic analysis. A different approach – exploring phonetically detailed, scalar auditory features – is taken by Flemming (2002). He analyzes palatalization as a process driven primarily by constraints requiring perceptual enhancement of phonological contrasts (as part of his Dispersion Theory; see chapter 98: speech perception and phonology). In this analysis, secondary palatalization is an optimizing strategy, as it enhances the contrast of the vowel with other vowels by extending the span of its second formant (F2) to the preceding consonant (e.g. Nupe [egʲe] vs. [ega] compared to [ege] vs. [ega]). The change of the fronted velar or palatal stop to a palato-alveolar affricate is yet another step in the enhancement of the contrast (e.g. [edʒe] vs. [ega]), by which the duration of frication and its loudness are increased, while the contrast with the non-palatalized counterpart in F2 remains relatively large.
Thus, sibilants as outputs of palatalization are fully expected, as affrication is part of contrast enhancement: “It is easier to enhance a contrast by exaggerating a difference that would be present anyway as an articulatory side-effect, rather than attempting to reverse the articulatorily motivated pattern” (2002: 106). The same kind of enhancement through affrication is also possible for coronals, but is unlikely for palatalized labials, since the production of these


does not involve as much frication. While the actual implementation of this analysis of palatalization is relatively complex, it does capture some important generalizations about palatalization processes that have evaded many previous analyses. The Dispersion Theory approach thus provides an interesting insight into how palatalization may arise through auditory enhancement of phonological contrasts (cf. Padgett 2001, 2003). It remains to be seen, however, how the approach can model synchronic alternations, and particularly more complex cases of morphophonological palatalization.

6 Recent alternatives

Despite the greater flexibility and apparent naturalness provided by violable substantive constraints in OT, some of the problems with the formal modeling of palatalization have not been resolved. In part, these difficulties appeared to stem from a more fundamental problem – the persistent use of traditional featural representations (with some modifications), which were assumed to be inviolable, universal, and innate. These assumptions about representations were clearly important in the development of generative phonology, as the universal set of features provided a simple formal tool to state phonological rules and to capture significant cross-linguistic generalizations about natural classes of segments. Yet the basis for these assumptions has hardly been questioned or systematically investigated until recently. As Mielke’s (2008) survey of phonological processes shows, unnatural classes are widespread in languages, with some of them being more common than typical natural classes. As traditional feature theories are incapable of characterizing many of these classes, the usefulness of maintaining the assumptions about feature universality and innateness is in serious doubt. Mielke’s proposal is that features are not innate but emergent, arising from language learners’ phonetic generalizations (cf. Hayes and Steriade 2004 on OT constraints). If features, and phonological representations in general, are indeed emergent, this has some wide-ranging implications for phonological theory, and for formal modeling of phonological processes. Specifically with respect to palatalization, languages may be expected to vary in how they define features and natural classes involved in the process, while at the same time showing many similarities, given the similar articulatory and acoustic properties of alternating consonants and vowel triggers. One may also expect that featural representations are not immutable within a given language, but possibly reflect local generalizations, specific to certain morphological domains or lexical strata (as, for example, in cases of multiple palatalization processes targeting the same consonants). However, these and many other implications for analyses of palatalization have not yet been explored. Another notable recent development reflects a resurgence of interest in diachronic explanation of synchronic phonological patterns. This approach is most systematically represented by Blevins’s (2004) Evolutionary Phonology, where cross-linguistically common, “natural” sound patterns are explained exclusively diachronically – as a by-product of recurrent phonetically motivated sound changes. Given the well-established phonetic motivation for palatalization in co-articulation and auditory misperception (see §2.3), synchronic patterns of palatalization can be interpreted as arising from sound changes involving these


phonetic factors. As such, these patterns arguably do not require synchronic explanation – either structural or substantive (cf. Kochetov 2002 on the phonotactics of palatalization contrasts). Taking velar place-changing palatalization as an example, the unidirectional nature of this change ([ki] → [tʃi], *[tʃi] → [ki]) and its common result (a postalveolar affricate) have little to do with phonological grammar per se, as they can be attributed to common errors in the perception of fronted velars (Guion 1996). The same applies to the asymmetry between high and non-high front vowels as triggers – listeners simply make more errors of the type [ki] → [tʃi] than [ke] → [tʃe]. By the same token, listeners rarely make errors like [pi] → [tʃi], unless under some specific phonetic conditions (see Ohala 1978) – a fact that explains the labial/non-labial asymmetry in palatalization. If most or all of the cross-linguistic generalizations about palatalization in (8) can be accounted for by phonetically based sound changes, the goal of synchronic grammar becomes much simpler – to state language-particular generalizations about the patterning of segments in alternations or their phonotactic distribution. What specific form these language-particular synchronic grammatical generalizations would take, however, is not clear, and has not been sufficiently explored by the proponents of Evolutionary Phonology. One interesting implication of the approach is that synchronic patterns of palatalization alternations should mirror sound changes involving the process. Whether this is true, however, is subject to further typological research. Another related question is how to reconcile the substance-free grammar envisioned by Evolutionary Phonology with apparent evidence that speakers possess some phonetic knowledge and seem to use it to make higher-level grammatical generalizations (Hayes and Steriade 2004). An interesting relevant case is provided by the cross-linguistically common use of palatalization in baby talk and diminutive sound-symbolism – presumably reflecting bottom-up generalizations, grammaticalized associations between the phonetics of palatalized consonants and the meaning of smallness and childishness (Kochetov and Alderete, forthcoming). Whether phonetic knowledge plays a role in phonological generalizations, and specifically whether phonetic naturalness considerations are part of the grammar, are important questions that could possibly be answered through systematic psycholinguistic experimentation and computer simulations (see some relevant work by Wilson 2006). The challenge for future work is, therefore, to tease apart synchronic phonological and phonetic knowledge of palatalization from the historical influences shaping cross-linguistic patterns of palatalization over time.

ACKNOWLEDGMENTS

The chapter has benefited greatly from the insightful comments and helpful suggestions provided by two anonymous reviewers and the editors, Marc van Oostendorp and Beth Hume.

REFERENCES

Archangeli, Diana & Douglas Pulleyblank. 1994. Grounded phonology. Cambridge, MA: MIT Press.
Avery, Peter & Keren Rice. 1989. Segment structure and coronal underspecification. Phonology 6. 179–200.
Bateman, Nicoleta. 2007. A crosslinguistic investigation of palatalization. Ph.D. dissertation, University of California, San Diego.
Bhat, D. N. S. 1978. A general study of palatalization. In Joseph H. Greenberg, Charles A. Ferguson & Edith A. Moravcsik (eds.) Universals of human language, vol. 2: Phonology, 47–92. Stanford: Stanford University Press.
Blevins, Juliette. 2004. Evolutionary Phonology: The emergence of sound patterns. Cambridge: Cambridge University Press.
Borowsky, Toni. 1986. Topics in the lexical phonology of English. Ph.D. dissertation, University of Massachusetts, Amherst.
Broselow, Ellen & Alice Niyondagara. 1990. Feature geometry and Kirundi palatalization. Studies in the Linguistic Sciences 20. 71–88.
Browman, Catherine P. & Louis Goldstein. 1989. Articulatory gestures as phonological units. Phonology 6. 201–251.
Calabrese, Andrea. 1993. On palatalization processes: An inquiry about the nature of a sound change. Unpublished ms., Harvard University.
Chen, Matthew Y. 1973. Predictive power in phonological description. Lingua 32. 173–191.
Chen, Su-I. 1996. A theory of palatalization and segment implementation. Ph.D. dissertation, State University of New York, Stony Brook.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clements, G. N. 1976. Palatalization: Linking or assimilation? Papers from the Annual Regional Meeting, Chicago Linguistic Society 12. 96–109.
Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225–252.
Clements, G. N. 1991. Place of articulation in consonants and vowels: A unified theory. Working Papers of the Cornell Phonetics Laboratory 5. 77–123.
Dresher, B. Elan. 2009. The contrastive hierarchy in phonology. Cambridge: Cambridge University Press.
Flemming, Edward. 2002. Auditory representations in phonology. London & New York: Routledge.
Foley, James. 1977. Foundations of theoretical phonology. Cambridge: Cambridge University Press.
Guion, Susan. 1996. Velar palatalization: Coarticulation, perception, and sound change. Ph.D. dissertation, University of Texas.
Halle, Morris. 2005. Palatalization/velar softening: What it is and what it tells us about the nature of language. Linguistic Inquiry 36. 23–41.
Haspelmath, Martin, Matthew S. Dryer, David Gil, Bernard Comrie & Hans-Jörg Bibiko (eds.) 2005. The world atlas of language structures. Oxford: Oxford University Press.
Hayes, Bruce & Donca Steriade. 2004. Introduction: The phonetic bases of phonological markedness. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 1–33. Cambridge: Cambridge University Press.
Hume, Elizabeth. 1992. Front vowels, coronal consonants and their interaction in nonlinear phonology. Ph.D. dissertation, Cornell University. Published 1994, New York: Garland.
Hyman, Larry M. 1975. Phonology: Theory and analysis. New York: Holt, Rinehart & Winston.
Jacobs, Haike & Jeroen van de Weijer. 1992. On the formal description of palatalisation. In Reineke Bok-Bennema & Roeland van Hout (eds.) Linguistics in the Netherlands, 125–135. Amsterdam & Philadelphia: John Benjamins.
Jakobson, Roman, C. Gunnar M. Fant & Morris Halle. 1952. Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell.
Kenstowicz, Michael & Charles W. Kisseberth. 1977. Topics in phonological theory. New York: Academic Press.
Kenstowicz, Michael & Charles W. Kisseberth. 1979. Generative phonology: Description and theory. New York: Academic Press.
Kert, G. M. 1971. Saamskii iazyk (Kildinskii dialekt): Fonetika, morfologiia, sintaksis. Moscow: Nauka.
Kochetov, Alexei. 2002. Production, perception, and emergent phonotactic patterns: A case of contrastive palatalization. New York & London: Routledge.
Kochetov, Alexei & John Alderete. Forthcoming. Patterns and scales of expressive palatalization: Typological and experimental evidence. Canadian Journal of Linguistics.
Kotzé, Albert E. & Sabine Zerbian. 2008. On the trigger of palatalization in the Sotho languages. Journal of African Languages and Linguistics 29. 1–28.
Lahiri, Aditi & Vincent Evers. 1991. Palatalization and coronality. In Carole Paradis & Jean-François Prunet (eds.) The special status of coronals: Internal and external evidence, 79–100. San Diego: Academic Press.
Lightner, Theodore M. 1965. Segmental phonology of Contemporary Standard Russian. Ph.D. dissertation, MIT.
MacAulay, Donald. 1992. The Scottish Gaelic language. In Donald MacAulay (ed.) The Celtic languages, 137–248. Cambridge: Cambridge University Press.
Mathangwane, Joyce T. 1996. Phonetics and phonology of Ikalanga: A diachronic and synchronic study. Ph.D. dissertation, University of California, Berkeley.
Mester, Armin & Junko Itô. 1989. Feature predictability and underspecification: Palatal prosody in Japanese mimetics. Language 65. 258–293.
Mielke, Jeff. 2008. The emergence of distinctive features. Oxford: Oxford University Press.
Ní Chiosáin, Máire. 1991. Topics in the phonology of Irish. Ph.D. dissertation, University of Massachusetts, Amherst.
Ní Chiosáin, Máire & Jaye Padgett. 1993. Inherent VPlace. Report LRC-93-09, Linguistics Research Center, University of California, Santa Cruz.
Ohala, John J. 1978. Southern Bantu vs. the world: The case of palatalization of labials. Proceedings of the Annual Meeting, Berkeley Linguistics Society 4. 370–386.
Padgett, Jaye. 2001. Contrast dispersion and Russian palatalization. In Elizabeth Hume & Keith Johnson (eds.) The role of speech perception in phonology, 187–218. San Diego: Academic Press.
Padgett, Jaye. 2003. Contrast and post-velar fronting in Russian. Natural Language and Linguistic Theory 21. 39–87.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Rose, Sharon. 1997. Theoretical issues in comparative Ethio-Semitic phonology and morphology. Ph.D. dissertation, McGill University.
Rubach, Jerzy. 2000. Backness switch in Russian. Phonology 17. 39–64.
Rubach, Jerzy. 2003. Polish palatalization in Derivational Optimality Theory. Lingua 113. 197–237.
Sagey, Elizabeth. 1986. The representation of features and relations in nonlinear phonology. Ph.D. dissertation, MIT. Published 1990, New York: Garland.
Schachter, Paul. 1969. Natural assimilation rules in Akan. International Journal of American Linguistics 35. 342–355.
Stadnik, Elena. 2002. Die Palatalisierung in den Sprachen Europas und Asiens: Eine arealtypologische Untersuchung. Tübingen: Gunter Narr Verlag.
Steriade, Donca. 1995. Underspecification and markedness. In John A. Goldsmith (ed.) The handbook of phonological theory, 114–174. Cambridge, MA & Oxford: Blackwell.
Udler, R. Y. 1976. Dialektnoe chlenenie moldavskogo iazyka. Chișinău: Shtinica.
Wilson, Colin. 2006. Learning phonology with substantive bias: An experimental and computational study of velar palatalization. Cognitive Science 30. 945–982.
Zsiga, Elizabeth C. 1993. Features, gestures, and the temporal aspects of phonological organization. Ph.D. dissertation, Yale University.

72 Consonant Harmony in Child Language

Clara C. Levelt

1 Introduction

Consonant harmony (CH) in child language production data has attracted a great deal of attention in the phonological literature. It has been defined as an "assimilation-at-a-distance" process between consonants (Vihman 1978), in which consonants affect other, non-adjacent consonants. The assimilating features in child language CH are mostly primary place of articulation features, like Labial and Dorsal, but cases where other features are involved have also been observed. Over time, several analyses of this phenomenon have been proposed in the literature, the nature of the analysis changing with the theoretical state of the art: a phonological rule (Smith 1973), autosegmental spreading (Menn 1978; McDonough and Myers 1991; Stemberger and Stoel-Gammon 1991; Levelt 1994), a connectionist account (Berg and Schade 2000), and constraint interaction (Goad 1997; Pater and Werle 2003; Fikkert and Levelt 2008). Consonant Harmony as such is not a phenomenon specific to child language (see chapter 77: long-distance assimilation of consonants). However, the nature of CH in child language differs from that in the languages of the world in an important way: unlike in the world's languages, in child language it appears that primary place of articulation features can assimilate between non-adjacent consonants. This constitutes a challenge for a phonological account, because it clearly violates the principle of locality (see e.g. Archangeli and Pulleyblank 1987). According to this principle, only segments that are adjacent at some level of analysis can interact.1 For primary place of articulation features (chapter 22: consonantal place of articulation) of consonants that are not string-adjacent such a level can only be assumed under special circumstances, such as planar segregation (McCarthy 1989; see §3 below). Of course, CH in child language would violate locality only if a strong form of continuity is assumed, i.e. if the phonological systems of language learners and adults make use of the same units and obey the same set of principles. Therefore, as we will see, some accounts of child language CH refer to a child-language-specific aspect of the developing phonological system, which allows for a local interpretation of the interaction. Alternatively, the locality problem can be circumvented by denying that the data are the result of an assimilation-at-a-distance process. Instead, some – mostly child-specific – form of feature licensing is invoked to account for the data.

1 For an overview of the different definitions of locality, and the ways in which accounts of CH in the world's languages deal with this principle, see chapter 77: long-distance assimilation of consonants.

This interesting and sometimes controversial topic will be discussed in the following way. First, in §2 the main similarities and differences between CH in adult language and in child language will be pointed out. Because in child language CH involving primary place of articulation features is the most salient and systematic phenomenon, and forms a challenge for most phonological theories, I will focus on this type of harmony in the remainder of the chapter. In §3, different accounts of CH in child language will be discussed, focusing on the way in which phonological theory has highlighted different – problematic – aspects of the phenomenon. In §4, a view on CH in child language will be presented that sets it apart from CH in adult languages, and it will be discussed how such a child-specific phenomenon can come about during language acquisition. §5 concludes this chapter.

2 Consonant harmony in adult and child language

The prevalent view on consonant harmony is that it is a widespread phenomenon in child language, while it is rare in the world’s languages. This view is not totally supported by facts, however. Hansson (2001) describes different types of CH processes in 127 languages, while descriptions of systematic CH processes in child language can be found for only a handful of children (Smith 1973 and Goad 1997 (Amahl); Cruttenden 1978 (one child); Menn 1978 (Daniel); Berg and Schade 2000 (Melanie); Levelt 1994 and Fikkert and Levelt 2008 (Eva, Robin); Rose 2000 (Clara); Pater and Werle 2001, 2003 (Trevor)). Some studies discuss larger groups of children (Vihman 1978 (13 children); Stemberger and Stoel-Gammon 1991 (69 children)). However, it is unclear from Stemberger and Stoel-Gammon’s study how many of these children actually had CH productions or to what extent the phenomenon occurred systematically in the data. Vihman studied CH forms in vocabularies that contained between 109 and 372 words. In only four of the 13 children did she find a relatively high number of CH productions, i.e. between 18 and 32 percent of the productions in the vocabulary studied. Of the remaining nine children, three scored around 10 percent and the other six scored between 1 percent and 5 percent. In the study, almost half of all the CH forms were provided by two of the children, Amahl and Virve. All in all, if we base ourselves on facts, i.e. cases reported in the literature, then “rare in languages of the world” is supported, but “widespread in child language” is less obvious. It is widespread in the sense that an occasional CH form will probably show up in the speech of many children. Systematic CH patterns, however, i.e. CH forms that show up predictably for a longer period of time, have, up to now, only been described for a handful of children.

2.1 Features involved in harmony

Most CH processes, in child and adult language alike, involve place of articulation features. In the languages of the world, CH always concerns secondary place of articulation features. Feature-geometrically speaking, these features are usually dependents of the coronal node: [anterior], [distributed], and [strident]. CH of features that are dependents of the labial or dorsal node exists too, but is very rare. Place of articulation harmony in the world's languages occurs mostly between consonants that are already very similar: within a word, the feature value for [anterior] is shared between sibilants, or the feature values [anterior] and [distributed] are shared between stops. For a more extensive review see chapter 77: long-distance assimilation of consonants and Hansson (2001). In child language, however, the CH phenomenon that is discussed most often concerns primary place of articulation features, specifically Labial and Dorsal. The existence of a systematic Coronal harmony process in child language is less evident. For one thing, neutralization to a coronal place of articulation, as in Velar Fronting, where /k/ is replaced by [t], often occurs as an independent process during phonological development (chapter 12: coronals). Utterances with multiple coronal consonants are thus often the result of neutralization rather than assimilation. Below, in §3.2.2, we will see that underspecification of Coronal in the lexicon has been invoked to account for the absence of coronal harmony in child language. Unlike in adult CH, the consonants involved are not necessarily highly similar in other respects: primary place of articulation features are shared between any combination of nasals, fricatives, and stops. Concerning place of articulation, then, the two groups of speakers appear to have almost contrasting sets of features that are active in CH: adult speakers only show CH involving secondary place of articulation features, mostly dependents of Coronal, while children show CH involving primary place of articulation features, most commonly Labial and Dorsal. Typical examples of these two types of harmony are given in (1).

(1) Place of articulation harmony

    Adult speakers
    a. Sibilant [anterior] harmony in Ineseño Chumash (Applegate 1972, cited by Hansson 2001)
       /k-su-ʃojin/     [kʃuʃojin]     'I darken it'
       /s-api-tʃho-it/  [ʃapitʃholit]  'I have a stroke of good luck'
    b. Coronal [anterior] harmony in Päri (Andersen 1988, cited by Hansson 2001)
       [dèːl]   'skin'    [dèːnd-á]   'my skin'
       [}ùol]   'snake'   [}úo|{-à]   'my snake'

    Child speakers
    c. Dorsal harmony (English) (Trevor at 1;5: Compton and Streeter 1977, cited by Pater and Werle 2003)
       dog    [gɔg]
       bug    [gʌg]
       coat   [kok]
    d. Labial harmony (Dutch) (Robin at 1;10: Levelt 1994)
       tafel  /tafəl/  [pafy]  'table'
       zeep   /zep/    [fep]   'soap'
       neef   /nef/    [mef]   'cousin'

In (1a) we observe that two sibilants that underlyingly carry different feature values for [anterior] at the surface both show up as [−anterior]. In (1b) the feature value for [anterior] of the stop consonant in the stem is shared with the prenasalized stops in the derived forms. The examples in (1c) from child language show Dorsal CH in data from an English-speaking child. Both underlying Labial and Coronal consonants show up as Dorsal consonants on the surface. In (1d) Labial CH is illustrated with examples from a Dutch-speaking child. Here we see that the interacting consonants do not necessarily agree in their manner features. Harmony involving other features is quite rare, both in languages of the world and in child language. In languages of the world, systematic patterns of long-distance assimilation have been attested for laryngeal features, nasality, and continuancy. In Hansson's (2001) overview we also find languages that show some form of liquid harmony. This form of harmony, or rather lateral harmony, is the only other type of consonant harmony that apparently occurs in a systematic way in the speech of a child, Amahl, discussed in Smith (1973). In Amahl's case a target word with a combination of an /r/ or /j/ and a lateral results in a production with two laterals, as illustrated in (2).

(2) really   [liːliː]
    lorry    [lɔli]
    yellow   [leləu]

Some less evident forms which have been listed as CH forms, involving other features, are given in (3) (from Spanish child language; Vihman 1978).

(3) comiendo  [kabiendo]   'eating'      nasal is assimilated to stop
    llorando  [nrdardno]   'crying'      nasal harmony
    telefono  [zwezwʌno]   'telephone'   continuant harmony

However, no systematic patterns of CH in child language have been described that involve these features. For forms like those in (3), which are probably just produced once, there is no predictable relation between the form of the adult target and the resulting production. More examples of occasional CH productions are given in (4), from the speech of Jiří (Czech; Vihman 1978).

(4) balonek    [baboːnek]   'ball'
    ježek      [ʒeʒek]      'hedgehog'
    gramofon   [gagafoːn]   'gramophone'

According to Vihman (1978), forms like these seem analogous to speech errors or alliterations in adult speech. Up to now, the reported set of occasional CH forms has been so diverse and fragmented that it has been impossible to come up with a comprehensive analysis. Since CH in child language involving features other than place of articulation features presents such an unclear picture, the remainder of the chapter is concerned solely with primary place of articulation harmony.

2.2 Directionality of the process

Consonant harmony comes in two varieties in the languages of the world. Most commonly it takes the form of a morpheme structure constraint (chapter 86: morpheme structure constraints): certain combinations of consonants within a stem are allowed, while others are disallowed (Hansson 2001). In these cases, determining the direction of assimilation is not always evident, since underlying and surface forms are identical with respect to these harmonic consonants. The direction can sometimes be reconstructed from diachronic and cross-linguistic comparisons, which have shown that the default direction of harmony is anticipatory, i.e. right to left. Furthermore, Hansson (2001) establishes a Palatal Bias effect in adult CH. An underlying – or former – alveolar–palatal combination is likely to become a palatal–palatal combination. The morpheme structure constraint can be accompanied by a morphological harmony rule, i.e. a productive harmony process. Here, harmony is most commonly stem-controlled: the consonant of an affix will carry a certain feature value, depending on the feature value of consonants in the stem (chapter 104: root–affix asymmetries). In child language, the CH phenomenon is usually present in the period before any productive morphology has been developed. We therefore do not find CH forms that can be analyzed as being either stem- or affix-controlled. Rather, certain combinations of sounds within a stem appear to be disallowed on the surface. In this sense, CH in child language seems to take the form of a morpheme structure constraint. However, under the assumption that children’s underlying forms are similar to the adult target forms, and given that these adult target forms can contain the disallowed combination of sounds, child CH is often assumed to be an active process. As in adult CH, the default direction of the process is right to left. Instead of the Palatal Bias effect that was found in languages of the world, in child language we find a strong Labial or Dorsal Bias effect: if the C2 in a target C1VC2(V) combination is Labial, the C1 will end up being Labial too, or if C2 is Dorsal, C1 will be Dorsal.

2.3 Summary

What can be concluded from the above comparison between adult CH and child CH? Are the phenomena similar or different? Hansson (2001) pulls together adult CH, child CH, and speech errors. He states that the underlying source for CH must lie in the domain of phonological encoding for speech production. Both speech errors and CH show a default right-to-left directionality, and, as in adult CH, assimilatory speech errors are more likely to occur between segments that are already very similar. According to Hansson, then, CH in languages of the world is a phonologized form of speech error. With a little twist, this could also apply to CH in child language. The occasional forms are speech errors (as proposed in Vihman 1978) and in some cases a systematic, i.e. phonologized, type develops. According to Hansson, the difference in place of articulation bias between adult CH and child CH is caused by the nature of the sound inventory. In child language, the sound inventory is much smaller, and minor place of articulation features do not yet play a role. This impoverished inventory also puts "similarity between consonants" in a different perspective. Pairs of segments that are judged as very different by adult speakers, like /t/ vs. /k/, could be judged as relatively similar by children. This would account for the fact that major place harmony is child-language-specific, and is not found in languages of the world. In this view, then, CH in the world's languages and CH in child language are of the same kind, and the different surface appearance can be attributed to the impoverished segment inventory in the developing phonological systems of young children. Although this is an elegant perspective, which makes it possible to view phonological development as being continuous, by invoking and adhering throughout to identical principles and processes, a different account of the child-specificness of the phenomenon will be proposed in §4. Taking into account the developing place of articulation structure of young children's entire vocabulary, it appears that CH in child language is of a very different nature than CH in the world's languages. In the remainder of this chapter we will concentrate on consonant harmony in child language, starting in §3 below with an overview of the accounts of CH that have been proposed in the literature.

3 Theoretical approaches to consonant harmony in child language

Several grammatical accounts of child language CH have been presented. These accounts are couched in terms of rules, autosegmental representations, activation spreading, or constraints. I will pay special attention to the way each account deals with the issue of locality: how is the intervening vowel dealt with, and which part of the process, if any, is deemed child-language-specific?

3.1 Consonant harmony as the result of a phonological rule

Smith (1973), working in the tradition of SPE (Chomsky and Halle 1968), presents a series of "realisation rules" that derive the consonant harmony forms of his son Amahl from "English Standard Pronunciation," i.e. the adult forms. In fact, Smith argues that one of the general functions of realization rules is to implement both consonant and vowel harmony, and he suspects that it is universal in child language. Of the eight realization rules that have consonant (and vowel) harmony as their motivation, the one in (5) results in labial and dorsal harmony:

(5) [+coronal] → [−coronal, αanterior] / ___ [+syllabic] [−coronal, αanterior]

This rule initially applied systematically before velars, but was optional if α = +, i.e. before labials. The intervening vowel, i.e. the [+syllabic] element in the rule, does not play any role and is not considered to be an obstacle to the process. The rule undergoes a couple of changes over time, capturing the fact that fewer and fewer coronal segments are affected. As a first change, for example, the rule split into two parts, operating in the original way in case of [−anterior], but applying only to nasals and continuants when [+anterior]. In its different forms, the rule is operative in Amahl's system from stage (1), when the data collection started at 2 years and 60 days, until stage (14), when Amahl was 2 years and 247 days. The rule can be considered child-language-specific, in that it disappears from Amahl's system over time. However, according to Smith, it is a genuine phonological rule in the sense that the formal properties of the realization rules are the same as those of phonological rules in mature grammars.
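For readers unfamiliar with SPE α-notation, the effect of (5) can be spelled out procedurally. The following sketch is my own illustration (the feature bundles are simplified, and the rule's optionality and its later splits are ignored): the α-variable copies the trigger's value for [anterior] onto the target, so a coronal becomes velar-like before velars and labial-like before labials, with the [+syllabic] vowel skipped over.

    # A minimal sketch of rule (5); feature bundles are simplified dicts.

    def seg(coronal, anterior=None, syllabic=False):
        return {"coronal": coronal, "anterior": anterior, "syllabic": syllabic}

    def apply_rule5(word):
        """[+coronal] -> [-coronal, alpha anterior] / __ [+syllabic] [-coronal, alpha anterior]"""
        out = [dict(s) for s in word]
        for i in range(len(word) - 2):
            target, vowel, trigger = word[i], word[i + 1], word[i + 2]
            if target["coronal"] and vowel["syllabic"] and not trigger["coronal"]:
                out[i]["coronal"] = False
                out[i]["anterior"] = trigger["anterior"]   # the alpha-variable
        return out

    # A duck-like /dVk/: coronal stop, vowel, velar ([-coronal, -anterior])
    duck = [seg(True, anterior=True), seg(False, syllabic=True), seg(False, anterior=False)]
    print(apply_rule5(duck)[0])   # the initial C comes out [-coronal, -anterior], i.e. velar-like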

3.2 Consonant harmony in the autosegmental framework

3.2.1 Output templates

The first account in an autosegmental framework (chapter 14: autosegments) is presented by Menn (1978). She views CH as one of the child's strategies to comply with a general constraint on his or her output structure. The proposed, child-language-specific, output constraint is termed a "consonant harmony constraint" and states that consonants within a word should be of one place type. There are different ways to comply with this constraint. If an adult target word contains consonants with different place features, the child can either delete all but one consonant, or the child can render all the consonants in a word of one place type. This perspective integrates CH in the child's phonological system as a whole, instead of treating it as an isolated phenomenon: CH is just one of the possible ways of satisfying the constraint. In §4 a similar integrated view is elaborated. Menn posits the following "output lexical entry" for the words stuck, duck, and truck, which are all produced [gʌk] by the child Daniel:

(6) Output lexical entry for stuck, duck, and truck

    tier 3 (stop position):   #   velar                 #
    tier 2 (fricative):       #   Ø                     #
    tier 1 (word structure):  #   C        V         C  #
                                  [+voice] [low-mid] [−voice]

The child has the output representation in (6), resulting from the rule: “If an entry in the recognition lexicon contains a velar, then select [velar] as the stop-position specification for the corresponding entry in the output lexicon” (1978: 167). As in Smith’s account, above, the underlying stored form is altered if conditions apply; in case a stored form contains [coronal] or [labial] and [velar], only the feature [velar] will end up being linked to the consonant positions in the word. Although this seems to be a classic case of autosegmental spreading, in this account the intervening vowel is not perceived as posing a potential problem. A slightly different templatic approach to CH is taken by Iverson and Wheeler (1987). Following Moskowitz (1971), among others, who posited that words appear to be unanalyzed units, they argue that many phonological phenomena in child language are the result of the association of features with suprasegmental constituents, like words, syllables, and rhymes. The child’s output representations are viewed as well-formedness templates, which characterize and filter the set of permissible words in the child’s language (1987: 249). The well-formedness template that would result in the CH productions [kok] for coat and [gag] for dog is given in (7):

(7) Output template for coat and dog

         WORD
        /    \
     CVC    [−anterior]

For the child in question, any word having a [−anterior] consonant will be associated with the above word structure, resulting in the harmonized forms. In Iverson and Wheeler’s view, CH is actually not a derivational process, linking an adultlike input representation to a harmonized child output representation. The output form actually represents the child’s knowledge of the phonological system of his or her target language, and it thus equals the underlying representation. What the child needs to learn, then, is that features should be associated with segments instead of larger units like syllables or words. The child-language-specific aspect of the account is the fact that features link to entire words, rather than segments. Apart from the fact that the proposed constraint behaves like a morpheme structure constraint rather than as a constraint on output forms only, the account is almost exactly parallel to Menn’s account: there is a child-specific template and a floating place feature that will be linked to the C-slots in the template (chapter 54: the skeleton). Although the notion of association line is appealed to in Iverson and Wheeler’s paper, the intervening vowel is still not viewed as potentially problematic for the account. As long as [anterior] is the feature being associated, the intervening vowel will not disrupt the linking process, since vowels are not normally specified for [anterior]. In this case, the association of [−anterior] with the vowel will simply have no effect. In this account, then, the locality problem is circumvented because only consonant-specific features are used.

3.2.2 Underspecification

A more theoretically detailed autosegmental analysis of CH is presented in Stemberger and Stoel-Gammon (1989, 1991) and in Stoel-Gammon and Stemberger (1994). Here, CH is considered to be an "unconscious" process, caused on the one hand by underspecified consonants in the child's inventory, and on the other hand by a tendency for unmarked segments to assimilate to marked segments. CH is thus viewed as a feature-filling process, whereby a place feature spreads from a consonant specified for place, to a consonant unspecified for place. This is illustrated for the form [gʌk] for duck in (8):

(8) A procedural representation of consonant harmony

    underlying representation        surface representation

     d       ʌ       k                g       ʌ       k
     |               |                |               |
     Ø             Place            Place           Place
                     |                   \           /
                   Dorsal                  \         /
                                             Dorsal
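Read procedurally, (8) is feature-filling rather than feature-changing: only a consonant lacking a Place node is a target. A minimal sketch, my own illustration, makes this explicit (it deliberately ignores the vowel, which is precisely the complication taken up next):

    # Feature-filling sketch of (8): a placeless consonant receives the Place
    # of a specified consonant elsewhere in the word. None = no Place node.
    underlying = [("d", None), ("V", None), ("k", "Dorsal")]

    def fill_place(segments):
        donor = next(p for _, p in reversed(segments) if p is not None)
        return [(s, p) if p is not None or s == "V" else (s, donor)
                for s, p in segments]

    print(fill_place(underlying))
    # [('d', 'Dorsal'), ('V', None), ('k', 'Dorsal')] -- i.e. a [gVk]-type output for 'duck'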

However, in the model of feature representation adopted by Stemberger and Stoel-Gammon and that of Sagey (1986), vowels also have a Dorsal place specification. This entails that spreading Dorsal from /k/ to the initial consonant position would lead to crossing association lines, as in (9), which is ruled out by the Line Crossing Prohibition (Goldsmith 1976).

(9) Consonant harmony and crossing association lines

    underlying representation        surface representation

     d       ʌ       k                g       ʌ       k
     |       |       |                |       |       |
     Ø     Place   Place            Place   Place   Place
             |       |                  \     |       |
           Dorsal  Dorsal                \  Dorsal    |
                                          \____ ______|
                                               X
                                             Dorsal

    (the line spreading Dorsal from /k/ to the initial consonant must cross
    the association line between the vowel's Place node and its Dorsal)

Stemberger and Stoel-Gammon recognize this locality problem. They argue that since these intervening vowels apparently do not block the harmony process, they should be transparent in one way or another. To achieve this, consonants and vowels should either reside on different planes when the process takes place, i.e. there is planar segregation (McCarthy 1989), or vowels and consonants should have different sets of place features (see also chapter 105: tier segregation; chapter 27: the organization of features). They opt for the latter solution and turn to the feature model proposed by Clements (1985), where consonants and vowels are partially segregated. Place features are divided into a "primary" place tier containing consonantal place features, and a "secondary" place tier containing vocalic place features. In this model, place features of consonants can be spread across vowels, and place features of vowels can be spread across consonants. Consonant harmony can thus be characterized as feature spreading, affecting only the primary place node. Interference with vowels, specified for place on the secondary place node, is avoided. In Clements's later elaboration of his feature model (Clements 1991), however, both vowels and consonants now have this primary consonant-place node. One of Clements's arguments for the change is precisely to exclude the possibility of consonants spreading their place features across vowels, which does not occur in the world's languages.

3.2.3 Planar segregation

McDonough and Myers (1991) take the planar segregation option seriously. Planar segregation can only be invoked if the relative order of consonants and vowels is predictable (McCarthy 1989). According to McDonough and Myers, many children at this stage of development have quasi-templatic constraints on the structure of words, and they conclude that therefore consonant–vowel planar segregation can be assumed. Their representation of CH is as in (10) below, and involves spreading a specified place node onto an adjacent root node unspecified for place on the consonant plane.

(10) Consonant harmony and planar segregation

     underlying representation       surface representation

      d              k                 g              k
     Root   Root   Root               Root   Root   Root
      |      |      |                   \     |      /
      Ø      ʌ    Place                  \    ʌ     /
                    |                     \        /
                  Dorsal                    Place
                                              |
                                            Dorsal

The problem for this account is the background assumption, namely that CH is present in child language at the stage in development where the order of consonants and vowels in a word is predictable. Although children initially often do reduce the syllable structure of adult target words to simple consonant–vowel sequences, this does not necessarily happen at the time they have CH productions, as the Dutch examples in (11) show:

(11) CV sequences and Robin's (1;9.21) consonant harmony forms

     a. CV structure
        CVC   niet   /nit/   [nit]   'not'
        VCC   eend   /ent/   [ɪnt]   'duck'
        CVCC  fiets  /fits/  [fits]  'bicycle'
        VCV   auto   /oto/   [oto]   'car'
     b. Consonant harmony
        schommelen  /sxɔmələ/  [vomə]  'to swing'
        Grover      /xrovər/   [fofə]  (name)
        stoep       /stup/     [fup]   'sidewalk'

The data in (11) show that the position of the vowel vis-à-vis the consonant is not predictable at the stage where CH forms are produced. Planar segregation can thus not be invoked either at the segmental level or at the feature level. Locality is clearly a serious problem for accounts of CH in child language. In the literature discussed below the problem is dealt with in different ways.

3.3 Consonant harmony as the result of a speech-processing problem

There is no locality problem in the connectionist account of Berg and Schade (2000), since it is not a representational account, but a localist connectionist processing account. CH is viewed as a mispronunciation – i.e. a speech error – due to a speech plan that is carried out imperfectly. It is not, however, a low-level articulatory plan, precisely because the harmony is not co-articulatory, but involves units at a distance. The basic idea is that the level of activation differs between segments. Depending on their developmental status, links between phoneme-like units and their constituent features can be stronger or weaker. Weak links lead to hypo-activation, and hypo-activated features can be too weak to be available for production. This problem is then solved by inspecting activation levels in the network of nodes constituting a word, and picking out the element that has the highest activation level. In production, the hypo-activated feature is thus replaced by a more strongly activated feature in the word network, and this is one way in which consonant harmony can result. This can be seen as the processing version of the representational underspecification account of Stemberger and Stoel-Gammon, discussed above in §3.2.2. The other way is when a certain feature is hyperactivated because excessive weight has been attributed to the link between this feature and a segmental unit. A hyper-activated node in the network can mask the less activated nodes, leading to consonant harmony. Direction of harmony – which is usually right to left – is accounted for by self-inhibition. As soon as an onset consonant is selected, the activation level is temporarily set to zero, due to self-inhibition. The onset is thus unable to interfere with a following consonant. When production of an onset is imminent, however, the following consonant is already active due to parallel activation. Both hypo- and hyper-activated states are characteristic of a developmental system, accounting for the fact that consonant harmony is typical of child language. Although there is no locality problem for this account formally speaking, the question remains what the effect of hyper- and hypo-activity levels of intervening vowels would be. Unfortunately, the intervening vowels are completely ignored in this account. The locality problem is formally circumvented, but in practice it is still there.
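The repair mechanism can be caricatured in a few lines. The sketch below is my own toy illustration – Berg and Schade's actual model is an interactive activation network with time-course dynamics, not a static dictionary, and the numbers here are invented – but it shows the core idea: a hypo-activated feature loses out to the most strongly activated feature in the word's network:

    # Toy word network for 'duck': consonant slots link to place features
    # with (invented) activation strengths; weak links are developmentally immature.
    word_network = {
        "C1": {"Coronal": 0.2},   # hypo-activated link
        "C2": {"Dorsal": 0.9},    # strong link
    }
    THRESHOLD = 0.5

    def produce(network):
        # The most strongly activated feature anywhere in the word masks
        # features whose activation is too weak to reach threshold.
        strongest = max((act, feat) for slot in network.values()
                        for feat, act in slot.items())[1]
        output = {}
        for slot, features in network.items():
            feat, act = max(features.items(), key=lambda kv: kv[1])
            output[slot] = feat if act >= THRESHOLD else strongest
        return output

    print(produce(word_network))   # {'C1': 'Dorsal', 'C2': 'Dorsal'}: a [gVk]-type harmony form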

3.4 Consonant harmony as the result of an optimality-theoretic constraint

3.4.1 Agreement

Pater and Werle (2003) give a detailed account of the CH pattern in the longitudinal data of Trevor (Compton and Streeter 1977) within Optimality Theory (OT: McCarthy and Prince 1993; Prince and Smolensky 1993). According to Pater and Werle, consonant harmony in child language is related to place agreement in consonant clusters in adult languages (see chapter 81: local assimilation). Both phenomena are due to a constraint Agree, which requires two successive consonants to be homorganic. The domain of application differs for children and adults: in child language the successive consonants can be separated by a vowel, in adult languages the process is strictly local and only applies to adjacent consonants. Development in this view consists of narrowing down the domain in which the constraint applies to this strictly local domain. The fact that we usually find labial and dorsal consonant harmony is independently regulated by a universal faithfulness hierarchy for place, whereby Faith[Dors] and Faith[Lab] are ranked above Faith[Cor]. That is, if in order to comply with Agree one place feature from the input form needs to be left out in the output form, it will be coronal rather than dorsal or labial. Examples from Pater and Werle illustrating this are given in (12):

(12) Interaction of Agree and Faith

     /dɔg/        | Agree | Faith[Dors] | Faith[Cor]
     ☞ a. [gɔg]  |       |             |     *
        b. [dɔd]  |       |     *!      |
        c. [dɔg]  |  *!   |             |

     /tap/        | Agree | Faith[Lab] | Faith[Cor]
     ☞ a. [pap]  |       |            |     *
        b. [tat]  |       |     *!     |
        c. [tap]  |  *!   |            |
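The strict-domination logic behind a tableau like (12) is easy to state computationally. The sketch below is my own illustration, with constraints simplified to the two-consonant case; it is not Pater and Werle's implementation. A candidate's violation profile is a tuple ordered by the ranking, and tuple comparison then is strict domination:

    # Candidates are (C1, C2) place pairs; lower violation profiles win.

    def agree(inp, cand):                 # successive consonants are homorganic
        return 0 if cand[0] == cand[1] else 1

    def faith(place):                     # don't alter an input consonant of this place
        def constraint(inp, cand):
            return sum(1 for i, c in zip(inp, cand) if i == place and c != place)
        return constraint

    def optimal(inp, candidates, ranking):
        return min(candidates, key=lambda c: tuple(con(inp, c) for con in ranking))

    ranking = [agree, faith("Dors"), faith("Lab"), faith("Cor")]
    dog = ("Cor", "Dors")
    candidates = [("Dors", "Dors"), ("Cor", "Cor"), ("Cor", "Dors")]
    print(optimal(dog, candidates, ranking))   # ('Dors', 'Dors'), i.e. [gɔg]

Reranking is then just a permutation of the ranking list: demoting agree below the faithfulness constraints, as in the developmental account described below, makes the faithful candidate win instead.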


However, since this general Agree constraint would lead to both progressive and regressive consonant harmony, a more specific form of Agree is invoked to capture regressive harmony, namely Agree-L, which mentions the direction of agreement. In addition, when the regressive harmony has a specific trigger, like dorsal in the case of Trevor, this feature is mentioned in the directional Agree constraint. In (13) the working of Agree-L[Dors] is shown (from Pater and Werle 2003). While the general Agree constraint would lead to harmony both in the case of dog and coat, the specific Agree constraint affects only dog:

(13) Agree-L[Dors] for dog and coat

     /dɔg/        | Agree-L[Dors] | Faith[Cor]
     ☞ a. [gɔg]  |               |     *
        b. [dɔg]  |      *!       |

     /kot/        | Agree-L[Dors] | Faith[Cor]
     ☞ a. [kot]  |               |
        b. [kok]  |               |     *!

Trevor's data show a developmental pattern, where Dorsal consonant harmony is initially both progressive and regressive, and Labial triggers both progressive and regressive harmony when other target consonants are coronals. Later, there is only regressive dorsal harmony. This development is captured by the demotion of markedness constraints below faithfulness constraints, the general way in which developmental changes are captured in OT (Gnanadesikan 2004; for an overview see Boersma and Levelt 2003). In this case, Agree is demoted below Faith[Cor], and Agree-L[Dors] is demoted below Faith[Lab]. A strictly local version of Agree-L[Dors] also plays a role in Korean, where labials and coronals assimilate only regressively to dorsals, as can be seen in (14):

(14) Agree-L[Dors] in Korean (de Lacy 2002, cited in Pater and Werle 2003)

     a. /əp+ko/     →  [əkko]     'bear on the back+conj'
        /kamki/     →  [kaŋki]    'a cold/influenza'
     b. /pat+ko/    →  [pakko]    'receive+conj'
        /han+kaŋ/   →  [haŋkaŋ]   'the Han river'
     c. /kot+palo/  →  [koppalo]  'straight'
        /han+bən/   →  [hambən]   'once'
     d. /paŋ+to/    →  [paŋto]    'room as well'
        /kuk+pap/   →  [kukpap]   'rice in soup'

This gives support to the analysis. However, there are also some concerns with an analysis in terms of agreement between two (non-adjacent) consonants. First, the domain of the constraint Agree has to change in the course of development. Pater and Werle (2001) suggest that the domain for Agree changes from "Word" in childhood to "string-adjacent consonants" in adulthood. It is not obvious how this domain change of Agree would come about, however. A concern is the extra set of rerankings the learner would need to perform. Because of the initial Word domain, the child's grammar has to go through different rankings in order to get the different Faith[Place] constraints in higher positions in the constraint hierarchy than the Agree constraint, which will over time rule out CH candidates. However, at the point where the child domain of Agree changes into the string-adjacent adult domain, the grammar needs to undo all the rerankings of the Faith[Place] constraints with respect to Agree, in order to allow, or rather force, string-adjacent consonants to agree in primary place of articulation – as will often be the case in the target adult language. A second concern is the child-language-specific, non-local domain for Agree itself. In this domain, the intervening vowel forms no obstacle for agreement between the non-adjacent consonants. This seems to imply that the solution to the locality problem of the proposal comes down to assuming that there is no locality requirement in child language.

3.4.2 Licensing

Rose (2000) and Goad (1997, 2001, 2003) view CH as resulting from the relation between features and prosodically strong positions. CH is a consequence of the requirement that place features within the domain of a foot should be licensed by the foot head (chapter 40: the foot). Place features in prosodically weak positions can surface through being associated with, and therefore licensed by, prosodically strong positions. According to Goad, the directionality of CH follows from prosodic structure; in English, word onsets of trochaic words are prosodically strong positions and consequently they can license marked features that cannot be licensed in weak prosodic positions. A marked feature in a weak prosodic position, i.e. a coda or an intervocalic consonant, needs to be licensed by this strong onset position, resulting on the surface in regressive harmony. In Rose (2000), where both English and French child language data are analyzed, high-ranked faithfulness constraints on input place of articulation features can force the direction of harmony to go from head to dependent. In order to circumvent the problem of crossing association lines with an intervening vowel, association with the strong position is accomplished by melody copy, instead of spreading: a new instance of the harmonic feature is inserted in the harmonizing position. According to Goad (2001), this makes the process similar to reduplication in mature grammars. The drive to copy a melody is different in the two systems. In languages of the world, reduplication is morphologically driven: there is a reduplicative affix that needs melodic content. In child language, however, CH is usually present before morphology kicks in, and melody copy here is driven by prosodic licensing. A prediction is that languages with different prosodic structures exhibit different types of CH. This claim is defended by Rose (2000). English children have CH in both C1VC2 and C1VC2V words, because C1 and C2 are within the domain of the foot in both types of words. In contrast, for French the prediction is that CH only occurs in CVCV words, because of the claim that in French the second consonant of CVC words lies outside the foot, it being extra-prosodic. It will therefore not be involved in CH patterns. Indeed, in the data of the French subject Clara, a word like goutte 'drop (n)' does not undergo CH even though it has the same sequence of features, Dorsal . . . Coronal, as gâteau 'cake', which does undergo harmony. This is illustrated in (15)–(17), with three examples from Rose (2001):

(15) Dependent-to-head consonant harmony in English (Coronal . . . Dorsal)

     dog          | Max    | Lic       | Max   | Lic      | Max   | Lic
                  | [Dors] | (Dors,Ft) | [Lab] | (Lab,Ft) | [Cor] | (Cor,Ft)
        a. [dɔg]  |        |    *!     |       |          |       |
        b. [dɔd]  |   *!   |           |       |          |       |
     ☞ c. [gɔg]  |        |           |       |          |   *   |

(16) Head-to-dependent consonant harmony in French (Dorsal . . . Coronal)

     gâteau        | Max   | Lic       | Lic      | Max   | Max    | Lic
                   | [Lab] | (Dors,Ft) | (Cor,Ft) | [Cor] | [Dors] | (Lab,Ft)
        a. [gæto]  |       |    *!     |          |       |        |
        b. [gæko]  |       |           |          |  *!   |        |
     ☞ c. [dæto]  |       |           |          |       |   *    |

(17) No consonant harmony in CVC words in French (Dorsal . . . Coronal)

     goutte        | Max   | Lic       | Lic      | Max   | Max    | Lic
                   | [Lab] | (Dors,Ft) | (Cor,Ft) | [Cor] | [Dors] | (Lab,Ft)
        a. [guk]   |       |           |          |  *!   |        |
        b. [dut]   |       |           |          |       |   *!   |
     ☞ c. [gut]   |       |           |          |       |        |

     (In (17) the final consonant is linked directly to the prosodic word,
     outside the foot, so the licensing constraints are not at stake.)

In (15) Dorsal is in the weak position of the foot, and needs to be licensed by the head position. This can be done by copying Dorsal into the head position, replacing input Coronal. Since faithfulness to input Coronal is low-ranked, this solution is optimal, and the output shows regressive Dorsal harmony. In (16), from French, Dorsal is again in the weak position and needs a licensor in the strong position of the foot. In this case, however, faithfulness to input Coronal is ranked higher than faithfulness to input Dorsal. The optimal candidate therefore does not show progressive Dorsal harmony, like the English example in (15), but regressive Coronal harmony. Finally, in (17), the second consonant is not in the foot. The licensing constraints do not apply in this case, and the optimal output candidate shows no harmony. Although the idea of a licensing requirement seems attractive, in the end it does not seem to work. In practice, the combination of licensing and faithfulness constraints leads to a situation in which in English there is always regressive, dependent-to-head CH, while in French there is always regressive, head-to-dependent CH. The constant factor in CH, then, appears to be the regressive direction, rather than the licensing requirement. In addition, it remains unclear why the constraint ranking leading to CH forms in child language is not found in mature grammars. According to Goad, the drive to copy is different in developing and mature grammars, but as far as I can see nothing would preclude mature grammars from having a prosodic licensing drive. Finally, other French children do appear to have CH in both CVC and CVCV words (Wauquier-Gravelines 2003).

3.5 Summary

We have now seen that it is hard to convert the view of consonant harmony as agreement, spreading, or harmony between two non-adjacent consonants into a sound theoretical account. We have seen three types of approach to the child-language-specificness of the data: (i) the account has no child-language-specific aspects (Stemberger and Stoel-Gammon; Goad; Rose), leaving the issue unsolved; (ii) the account presents a child-language-specific aspect that could in principle also be present in mature grammars, i.e. planar segregation (McDonough and Myers), or a specific rule (Smith). Why mature grammars do not have this specific rule, or why a mature language with planar segregation probably does not have the specific type of CH we find in child language, still needs to be resolved; (iii) a formal change takes place in the grammar, which from then on precludes it from outputting CH forms (Menn; Pater and Werle; Iverson and Wheeler). This approach resolves the issue, but also introduces a new one: how does this formal change come about? The intervening vowel, leading to the locality problem, is treated in four different ways: (i) it is not acknowledged as a problem (Menn; Smith; Iverson and Wheeler; Berg and Schade; Pater and Werle). In some cases this is because in the theoretical framework of the time consonants and vowels had different sets of place features; (ii) separate sets of place features are assumed for consonants and vowels (Stemberger and Stoel-Gammon); (iii) planar segregation is invoked to make CH a local process (McDonough and Myers); (iv) feature copying is assumed instead of feature spreading (Goad; Rose). Only this last solution is able to circumvent the problem of crossing association lines in child language CH. However, crossing association lines is assumed to preclude the spreading of primary place of articulation features of consonants across vowels in mature languages. The copy solution thus solves the problem for child language, but creates a problem for the account of the absence of primary place of articulation CH in mature languages.

4 A re-analysis of consonant harmony in child language

It is clear that researchers have struggled to find an account that can explain the presence of primary place of articulation CH data exclusively in child language, without compromising the available theoretical tools. While these principles need not be abandoned, there are advantages in assuming that the learner's developing phonological system differs from the adult system in certain respects (chapter 101: the interpretation of phonological patterns in first language acquisition). Levelt (1994) and Fikkert and Levelt (2008) stress the fact that consonant harmony in child language is not an isolated phenomenon (see Menn 1978 and Iverson and Wheeler 1987 for a similar view). The goal is therefore not to come up with an exclusive account of CH data, but to come up with a comprehensive account of developing place of articulation patterns in child language. When looked at in this way, it turns out that data that could be branded as instances of consonant harmony are present in child language at two different developmental stages. Since the data are, in both stages, clearly the result of a grammatical state specific to development, the fact that similar data are not found in adult languages is no longer puzzling. In the remainder of this section, an overview of this approach is presented, focusing on the stages where so-called consonant harmony data are produced, and illustrated with longitudinal data from children acquiring Dutch (data from the CLPF database2 and Phon (Rose et al. 2006)).

4.1 Place of articulation features

The place of articulation (PoA) features that play a role in this account are Labial, Coronal, and Dorsal. These features are monovalent and refer to both consonants and vowels (Clements 1991; Lahiri and Evers 1991; Hume 1992; Clements and Hume 1995). Thus Labial refers to both labial consonants and rounded vowels, Coronal to coronal consonants and front vowels, and Dorsal to dorsal consonants and back vowels (chapter 75: consonant–vowel place feature interactions). Front rounded vowels like /y/ thus have a complex specification [Coronal, Labial], and back rounded vowels like /u/ have a complex specification [Dorsal, Labial]. The assumption here is that these vowels initially have a non-complex specification: Coronal for front vowels, and either Dorsal or Labial for back vowels. Front rounded vowels are acquired late, and are often replaced by front unrounded vowels. The low vowels /a/ and /ɑ/ are not specified for place of articulation – the idea being that place of articulation cannot be expressed in low vowels.3

4.2 Stage I: One word, one feature

In the CLPF corpus, in the first sets of meaningful words no combinations of different PoA features were found within words (chapter 51: the phonological word), i.e. consonants and vowels within a word are all produced with the same PoA feature. The low vowels /a/ and /ɑ/ can be combined with either coronal, labial, or dorsal consonants. This is illustrated by data from Robin in (18):

(18) Robin's (1;5.11) initial vocabulary

     a. die     /di/      [ti]     'that one'
        huis    /hœys/    [hœys]   'house'
        niet    /nit/     [èt]     'not'
        thuis   /tœys/    [tœs]    'home'
        zes     /zes/     [ses]    'six'
        tiktak  /tɪktɑk/  [tita]   'tick-tock'
        aan     /an/      [an]     'on'
        daar    /dar/     [ta]     'there'
     b. pop     /pɔp/     [pɔ]     'doll'
        mamma   /mɑma/    [mɑma]   'mommy'
        aap     /ap/      [ap]     'monkey'

2 CLPF database: data collected by Levelt (1994) and Fikkert (1994) of 12 children acquiring Dutch as their first language. Recordings were made every other week over a 12-month period. The database contains over 20,000 spontaneous utterances.
3 In Dutch, vowels also have a tense/lax specification, distinguishing /a e i o u/ from /ɑ ɛ ɪ ɔ ʏ/. This specification is not relevant here.


The productions in (18a) all consist of coronal consonants (or placeless /h/) and coronal or low vowels, while the productions in (18b) have labial consonants and round or low vowels. A salient aspect of these data is that the adult target words have this same pattern. New words produced by Robin in the next two recording sessions also follow this pattern, as can be seen in (19):

(19) New words produced by Robin (1;5.21–1;6.9)

     a. Coronal forms
        deze         /dezə/        [tis]   'this one'
        televisie    /teləvisi/    [zizi]  'television'
        trein        /trein/       [tin]   'train'
        ijs          /eis/         [æis]   'ice-cream'
        sesamstraat  /sesɑmstrat/  [zisa]  'Sesame Street'
        uit          /œyt/         [œyt]   'out'
     b. Labial forms
        boom  /bom/  [bom]   'tree'
        mooi  /moi/  [boːi]  'beautiful'
        bal   /bɑl/  [bɑo]   'ball'

This initial stage, then, can be characterized as "one word, one PoA feature." According to Levelt (1994) and Fikkert and Levelt (2008), this is caused by the fact that the initial unit for specification of PoA in the child's phonological system is the unsegmentalized word (see also, among others, Moskowitz 1971; Waterson 1971; Iverson and Wheeler 1987; de Boysson-Bardies and Vihman 1991). In Levelt (1994), early productions, like the ones in (18) and (19), are therefore represented as {WORD, Coronal} and {WORD, Labial}. The data in (18) and (19) clearly do not resemble consonant harmony data at all. This is because the adult target words can be characterized by the same whole-word representations. Robin thus appears to select words for production that fit his phonological system. However, the data from Eva in (20) illustrate what happens in this whole-word stage when no selection takes place:

(20) Whole-word stage: Eva (1;4.12)

     Coronal words
     a. bed       /bet/      [det]     'bed'
     b. kijk      /keik/     [te¢t]    'look!'
     c. prik      /prɪk/     [tɪt]     'injection'
     d. beer      /ber/      [de]      'bear'
     e. dicht     /dɪxt/     [dɪ]      'closed'
     f. neus      /nøs/      [nes]     'nose'
     g. sleutel   /sløtəl/   [høtœW]   'key'
     h. trein     /trein/    [tein]    'train'
     i. eend      /ent/      [en]      'duck'
     j. eten      /etə/      [etɪ]     'to eat'
     k. konijn    /konein/   [tæ¢n]    'rabbit'
     l. teen      /ten/      [ten]     'toe'
     m. vlinder   /vlɪndər/  [ɪnə]     'butterfly'
     n. auto      /ɑuto/     [ɑ§tɑ§]   'car'
     o. patat     /paˈtɑt/   [tɑt]     'French fries'

     Labial words
     p. brood     /brot/     [bop]     'bread'
     q. buik      /bœyk/     [bop]     'stomach'
     r. poes      /pus/      [puf]     'cat'
     s. sloffen   /slɔfə/    [pɔfə]    'slippers'
     t. schoenen  /sxunə/    [umə]     'shoes'
     u. oma       /oma/      [oma]     'grandma'
     v. op        /ɔp/       [ɔp]      'on'
     w. open      /opə/      [opə]     'open'
     x. aap       /ap/       [ap]      'monkey'

Here we see that the productions both of adult target words that fit and of adult target words that do not fit a whole-word representation are represented as either {WORD, Coronal} or {WORD, Labial} in the child's phonological system. The productions in (20a)–(20c) and (20p)–(20t), of target adult forms that do not fit a whole-word representation, are forms that resemble CH forms: consonants within a word that have different PoA features in the target form have identical PoA features in the produced form. However, with the possible exception of the account of Iverson and Wheeler (1987) discussed in §3.2.1, none of the above accounts can account for these forms. First, there is no fixed direction of assimilation, as illustrated below in (21a) and (21b). In (21a) a target Labial C–Coronal C combination becomes Labial C–Labial C, while the same target combination in (21b) leads to a Coronal C–Coronal C production. Second, sometimes the two "harmonized" consonants share a feature that is not present in the target form: the target Labial C–Dorsal C form in (21c) results in a Coronal C–Coronal C production.

(21) Problems for a CH account

     a. brood  /brot/  [bop]  'bread'
     b. bed    /bet/   [det]  'bed'
     c. prik   /prɪk/  [tɪt]  'injection'

This is where the role of the target vowel becomes evident: the harmonizing feature does not originate with one of the target consonants, but with the target vowel. When the target vowel is coronal, the child's production ends up being coronal throughout; when the target vowel is labial, the child's production ends up being labial throughout. In (20a)–(20m) and (21b)–(21c), the target vowels are all coronal, and so are the consonants in the child's production. In (20p)–(20w) and in (21a), the target vowel is labial, and so are the produced consonants. This also accounts for the form in (20t), where the target word schoenen, containing the labial vowel /u/ but no labial consonants, is produced as [umə], with a labial [m]. The same applies to the form beer in (20d), which contains the coronal vowel /e/ but no coronal consonants, and is produced as [de], with a coronal [d]. Low vowels, which do not carry a PoA specification, can be combined with either coronal consonants (20n), (20o) or labial consonants (20x). The idea put forward in Levelt (1994) and Fikkert and Levelt (2008) is that at this developmental stage, the PoA specification of the representational unit WORD is provided by the target, non-low, vowel. In case the vowel is low, the PoA specification is provided by a consonant. This could have a perceptual background. The age group in which these homorganic productions are found, between 14 and 17 months, is the same age group that has been found to have difficulty discriminating similar-sounding words, e.g. bin vs. din, in perception studies (Stager and Werker 1997; Fikkert et al. 2003; Pater et al. 2004). Children in this age group have just started to build up their lexicon. It appears that at this point, the PoA information of perceptually salient segments, like vowels, can be mapped successfully onto a lexical representation, overriding the PoA information of less salient segments (chapter 7: feature specification and underspecification).

To summarize, apparent CH data can be found in the initial vocabularies of children. As argued by Levelt (1994) and Fikkert and Levelt (2008), however, these data are not due to any interaction between non-adjacent consonants – hence "apparent CH" – but to the way in which early representations appear to be structured. The initial unit of specification is word-sized and carries a single PoA feature, provided by the most salient segment of the adult target model, usually the vowel.
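The logic of this whole-word stage lends itself to a compact illustration. The sketch below is not part of the original proposal: it is a toy classifier, with simplified and assumed segment classes, that assigns a single PoA feature to a target word by checking the vowels first, as the perceptual-salience account suggests.

CORONAL_V = set("ieøœ")   # front vowels pattern as coronal here (assumption)
LABIAL_V = set("ou")      # round vowels pattern as labial
LOW_V = set("a")          # low vowels carry no PoA specification; they
                          # simply fail both vowel tests below
CORONAL_C = set("tdnszl")
LABIAL_C = set("pbmfv")

def word_poa(target):
    # The non-low target vowel wins; only low-vowel words fall back
    # to a consonant for their PoA specification.
    for seg in target:
        if seg in CORONAL_V:
            return "Coronal"
        if seg in LABIAL_V:
            return "Labial"
    for seg in target:
        if seg in CORONAL_C:
            return "Coronal"
        if seg in LABIAL_C:
            return "Labial"
    return None

print(word_poa("brot"))  # Labial  -- cf. brood -> [bop] in (21a)
print(word_poa("bet"))   # Coronal -- cf. bed -> [det] in (21b)
print(word_poa("prik"))  # Coronal -- cf. prik in (21c)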

4.3 Stage II: Overgeneralization

After the initial stage in which words are unsegmentalized units, segmentalization gradually takes place, and at some point consonants and vowels within a word can be independently specified for place of articulation. This does not entail that all combinations of PoA features are immediately possible. Focusing on consonants, Labial + Coronal, Labial + Dorsal, and Coronal + Dorsal sequences are possible for quite some time, while Coronal + Labial and Dorsal + Labial sequences are absent from the data. It is at this stage that "typical" cases of CH start to appear in the data of some children. In this case too, it will be argued that the data are not due to a harmonic relation between non-adjacent consonants – hence the use of the quotation marks above.

(22) "Typical" cases of consonant harmony: Robin (1;10.7)

     a. sop      /sɔp/     [fɔp]    'suds'
        sloffen  /slɔfə/   [bɔfə]   'slippers'
        tafel    /tafəl/   [pafy]   'table'
        neef     /nef/     [mef]    'cousin'
        zeep     /zep/     [fep]    'soap'
     b. klimmen  /klɪmə/   [pɪmə]   'to climb'

At this stage in Robin's development, the PoA make-up of a produced word, given the PoA sequence of the adult target form, is completely predictable: any target word with a consonant sequence Coronal + Labial (22a) or Dorsal + Labial (22b) will be produced with a consonant sequence Labial + Labial. Interestingly, Fikkert and Levelt (2008) observe, in the longitudinal data of Robin and four other children, that these typical cases of consonant harmony only occur after a period in which attempted, non-homorganic sequences of consonants are exclusively of the types Labial + Coronal/Dorsal and Coronal + Dorsal. These attempted adult target words are always rendered faithfully with respect to their PoA structure in the children's productions. Adult target words with Coronal/Dorsal + Labial sequences are simply not attempted in earlier stages. CH is thus an emerging phenomenon. In Robin's data, for example, Fikkert and Levelt observe that around the age of 1;7.15 more and more adult target words with a Labial + Coronal consonant sequence, such as bed 'bed', boot 'boat', and maan 'moon', are attempted. Words with this PoA structure are highly frequent in Dutch. These targets are produced faithfully, i.e. with the same PoA structure. In Robin's production, then, the consonant sequence Labial + Coronal occurs frequently. One month later, in the recording at 1;8.12, the first cases of Labial CH appear. According to Fikkert and Levelt, there is a relation between these two events. Their proposal is that, as soon as words can be segmentalized, after the initial stage in which words are unanalyzed wholes, the Dutch language learner analyzes his own active vocabulary and observes that Labial consonants are always found at the left edge of the word.4 On the basis of this observation the learner overgeneralizes that Labial should always at least be aligned with the left edge of a word. In OT terms this overgeneralization results in the emergence of a high-ranked constraint in the learner's grammar, requiring Labial to be aligned with the left edge of the word. This constraint is termed [Labial (Levelt 1994; Fikkert and Levelt 2008). The definition of the constraint is that an output word containing the feature Labial should always have an instance of Labial aligned with the left edge of the word. At this point, unfaithful, CH-like output forms for attempted input forms without a left-aligned Labial are deemed optimal by the grammar. This grammar is illustrated in (23):

(23) Developmental grammar with emerged [Labial

     a. poes 'cat'
           /pus/     [Labial   Faith
        ☞  i.  pus
           ii. puf               *!

     b. tas 'bag'
           /tɑs/     [Labial   Faith
        ☞  i.  tɑs
           ii. pɑs               *!

     c. soep 'soup'
           /sup/     [Labial   Faith
        ☞  i.  fup               *
           ii. sup      *!

4 Even the few words in the Dutch learner's vocabulary with a Labial consonant at the right edge always have a Labial consonant at the left edge too: pop 'doll', boom 'tree', mamma 'mommy', pappa 'daddy'.


In (23a), the faithful candidate [pus] is the optimal candidate: a left-aligned instance of Labial is available without jeopardizing faithfulness. In (23b) the faithful candidate [tɑs] does not contain Labial, vacuously satisfying high-ranked [Labial. In (23c) the faithful candidate contains Labial, but there is no instance of a left-aligned Labial. The faithful candidate thus fatally violates high-ranked [Labial. The "CH" candidate [fup] does contain a left-aligned instance of Labial, at the cost of a faithfulness violation. However, faithfulness is ranked below [Labial, and [fup] is the winning candidate. The very general constraint Faith in (23) needs to be elaborated to show why other possible output candidates for input /sup/ that do not violate [Labial, like [sus] or metathesized [pus], are blocked (chapter 63: markedness and faithfulness constraints). In (24), a more detailed version of (23c), Faith is split up into Faith[Lab], requiring faithfulness to input Labial, a lower-ranked Faith[Cor], requiring faithfulness to input Coronal, and Linearity, requiring faithfulness to the linear order of input segments.

(24) Interaction of [Labial and faithfulness constraints

     /sup/       [Labial   Linearity   Faith[Lab]   Faith[Cor]
     ☞  a. fup                                       *
        b. sup     *!
        c. pus                 *!
        d. sus                             *!

The hypothesis that CH data are due to an alignment requirement rather than to a harmony requirement is strengthened by data from other children, who appear to use metathesis in order to comply with the alignment requirement, as shown in (25):

(25) Metathesized forms in child language

     Dutch    kip    /kɪp/    [pɪk]     'chicken'   (Noortje; Fikkert and Levelt 2008)
     English  sheep           [piʃ]                 (Alice; Jaeger 1997)
              TV              [piti]
     Spanish  sopa   /sopa/   [pwɔta]   'soup'      (Si; Macken 1979)
              libro  /libro/  [pɪtɔ]    'book'
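The forms in (25) fall out from a minimal reranking, which can be sketched computationally. The violation profiles below are read off tableau (24); the toy evaluator itself (lexicographic comparison of violation vectors) is my own illustration, not part of the original analysis.

# Candidates for input /sup/, with violations per constraint (from (24)).
CANDS = {
    "fup": {"[Labial": 0, "Linearity": 0, "Faith[Lab]": 0, "Faith[Cor]": 1},
    "sup": {"[Labial": 1, "Linearity": 0, "Faith[Lab]": 0, "Faith[Cor]": 0},
    "pus": {"[Labial": 0, "Linearity": 1, "Faith[Lab]": 0, "Faith[Cor]": 0},
    "sus": {"[Labial": 0, "Linearity": 0, "Faith[Lab]": 1, "Faith[Cor]": 0},
}

def optimal(ranking):
    # Strict domination: compare vectors constraint by constraint.
    return min(CANDS, key=lambda c: [CANDS[c][con] for con in ranking])

# Robin-type grammar, Linearity >> Faith[Cor]: harmony-like [fup] wins.
print(optimal(["[Labial", "Linearity", "Faith[Lab]", "Faith[Cor]"]))
# Noortje-type grammar, Faith[Cor] >> Linearity: metathesized [pus] wins.
print(optimal(["[Labial", "Faith[Cor]", "Faith[Lab]", "Linearity"]))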

For these children, the faithfulness constraints Linearity and Faith[Cor] are ranked in the opposite order from the one in (24), i.e. Faith[Cor] >> Linearity. Of the output candidates complying with [Labial, this grammar prefers a metathesized form over a CH form. The relation between consonant harmony and metathesis is also noticed by Goad (2001) and Rose and dos Santos (2004).

To summarize, according to Levelt (1994) and Fikkert and Levelt (2008), the apparent CH data arising at this stage of development are, again, not the result of an assimilation process between non-adjacent consonants. Instead, they appear to result from the overgeneralization of a frequent PoA pattern in the active vocabulary of the learner, where Labial segments are found either exclusively – or additionally, in the case of Labial + Labial target words like mommy – at word onsets. The pattern in the learner's vocabulary, in turn, reflects a highly frequent PoA pattern in the surrounding language. This makes a testable prediction, namely that, depending on the distribution of PoA patterns in the language to be acquired, language learners will show different types of PoA alignment. Highly frequent patterns in the language enter the developing vocabulary first, and overgeneralization – if present – will be based on these language-specific frequent patterns. Fikkert et al. (2003) show that different distributions of PoA patterns in Dutch and English can indeed account for the different types of PoA-alignment data in Dutch and English child language. In English child language data, both Labial and Dorsal alignment occur, while in Dutch child language data only Labial alignment has been attested. This promising initial result needs to be followed up: detailed studies of the development of PoA in longitudinal data of children acquiring languages other than Dutch are now needed to test the validity of the perspective discussed in this section.

5 Conclusions

Hansson (2001) tries to pull together consonant harmony in the languages of the world and in child language, by pointing out that the source of these data is probably found in speech processing: consonant harmony productions are in fact phonologized speech errors. According to Hansson, the difference in place of articulation bias between adult CH and child CH is caused by the nature of the sound inventory. The sound inventory of language learners is small, and does not require the secondary place features that harmonize in adult languages to distinguish sounds. Based on a detailed study of the longitudinal development of PoA patterns in child language productions, Levelt (1994) and Fikkert and Levelt (2008) come to a different conclusion. Consonant harmony forms in child language are not phonologized speech errors, but products of an immature phonological system. In the earliest stages of word production, CH-like forms result from an as yet unanalyzed representational unit Word, which can be specified for a single place of articulation feature. At a later stage, when words have segmental representations, CH-like forms result from the overgeneralization of a place of articulation pattern in a small vocabulary.

Instead of pulling together the two types of data, i.e. child language data and cross-linguistic data, it might therefore be better to pull them apart, by providing them with different terms. Consonant harmony is a phenomenon that can be found in the world's languages. In child language there is an initial stage where words are homorganic, while PoA alignment occurs in later stages. With this new perspective on the child language data, it is no longer necessary to wonder why neither homorganic data nor PoA-alignment data are found in the languages of the world: both types of data are due to specific developmental states of the phonological system that, because of fundamental changes, are transient and therefore no longer accessible in mature grammars. One developmental state is characterized by the initial unsegmentalized word-sized unit in early phonological representations. Later, these word-sized units are irrevocably replaced by segment-sized units. In the other developmental state the learner generalizes over his own – still small – vocabulary. The PoA structure of words in this developing vocabulary reflects a highly frequent pattern in the surrounding language. PoA-alignment data occur when this pattern is overgeneralized, leading to the emergence of a constraint in the developing grammar. Overgeneralization is typical of child language, and disappears when enough experience with other data has been gathered. With the expansion of the learner's vocabulary, more and more evidence will become available that Labial is not necessarily always left-aligned. On the basis of this counterevidence, the learner will conclude that [Labial should not be an active constraint in the grammar. The constraint will disappear, or be demoted to the lowest ranks of the grammar, and candidates formerly called consonant harmony forms will never again be optimal.

ACKNOWLEDGMENTS

I would like to thank the editors for their invitation to contribute to the Companion to Phonology, and two anonymous reviewers for their detailed and helpful comments on the manuscript. Preparation of this chapter was made possible by a grant from the Netherlands Organization for Scientific Research (NWO Vidi grant 276-75-006).

REFERENCES

Andersen, Torben. 1988. Consonant alternation in the verbal morphology of Päri. Afrika und Übersee 71. 63–113.
Applegate, Richard B. 1972. Ineseño Chumash grammar. Ph.D. dissertation, University of California, Berkeley.
Archangeli, Diana & Douglas Pulleyblank. 1987. Maximal and minimal rules: Effects of tier scansion. Papers from the Annual Meeting of the North East Linguistic Society 17. 16–35.
Berg, Thomas & Ulrich Schade. 2000. A local connectionist account of consonant harmony in child language. Cognitive Science 24. 123–149.
Boersma, Paul & Clara C. Levelt. 2003. Optimality Theory and phonological acquisition. Annual Review of Language Acquisition 3. 1–50.
Boysson-Bardies, Bénédicte de & Marilyn Vihman. 1991. Adaption to language: Evidence from babbling and first words in four languages. Language 67. 297–319.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225–252.
Clements, G. N. 1991. Place of articulation in consonants and vowels: A unified theory. Working Papers of the Cornell Phonetics Laboratory 5. 77–123.
Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In John A. Goldsmith (ed.) The handbook of phonological theory, 245–306. Cambridge, MA & Oxford: Blackwell.
Compton, Arthur & Mary Streeter. 1977. Child phonology: Data collection and preliminary analyses. In Eve T. Clark & Pamela Tiedt (eds.) Papers and Reports on Child Language Development 7. Stanford: Department of Linguistics, Stanford University.
Cruttenden, Alan. 1978. Assimilation in child language and elsewhere. Journal of Child Language 5. 373–378.
de Lacy, Paul. 2002. The formal expression of markedness. Ph.D. dissertation, University of Massachusetts, Amherst.
Fikkert, Paula. 1994. On the acquisition of prosodic structure. Ph.D. dissertation, University of Leiden.
Fikkert, Paula & Clara C. Levelt. 2008. How does Place fall into place? The lexicon and emergent constraints in children's developing grammars. In Peter Avery, B. Elan Dresher & Keren Rice (eds.) Contrast in phonology: Theory, perception, acquisition, 231–268. Berlin & New York: Mouton de Gruyter.
Fikkert, Paula, Clara C. Levelt & Joost van de Weijer. 2003. Input, intake and phonological development: The case of consonant harmony. Paper presented at the Generative Approaches to Language Acquisition conference, Utrecht.
Gnanadesikan, Amalia E. 2004. Markedness and faithfulness constraints in child phonology. In René Kager, Joe Pater & Wim Zonneveld (eds.) Constraints in phonological acquisition, 73–108. Cambridge: Cambridge University Press.
Goad, Heather. 1997. Consonant harmony in child language: An optimality-theoretic account. In S. J. Hannahs & Martha Young-Scholten (eds.) Focus on phonological acquisition, 113–142. Amsterdam & Philadelphia: John Benjamins.
Goad, Heather. 2001. Assimilation phenomena and initial constraint ranking in early grammars. In Anna H.-J. Do, Laura Domínguez & Aimee Johansen (eds.) Proceedings of the 25th Annual Boston University Conference on Language Development, 307–318. Somerville, MA: Cascadilla Press.
Goad, Heather. 2003. Licensing and directional asymmetries in consonant harmony. Poster presented at the Child Phonology Conference, University of British Columbia.
Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT. Published 1979, New York: Garland.
Hansson, Gunnar Ólafur. 2001. Theoretical and typological issues in consonant harmony. Ph.D. dissertation, University of California, Berkeley.
Hume, Elizabeth. 1992. Front vowels, coronal consonants and their interaction in non-linear phonology. Ph.D. dissertation, Cornell University. Published 1994, New York: Garland.
Iverson, Gregory K. & Deirdre Wheeler. 1987. Hierarchical structures in child phonology. Lingua 73. 243–257.
Jaeger, Jeri J. 1997. How to say "Grandma" and "Grandpa": A case study in early phonological development. First Language 17. 1–29.
Lahiri, Aditi & Vincent Evers. 1991. Palatalization and coronality. In Paradis & Prunet (1991), 79–100.
Levelt, Clara C. 1994. On the acquisition of place. Ph.D. dissertation, University of Leiden.
Macken, Marlys A. 1979. Developmental reorganization of phonology: A hierarchy of basic units of acquisition. Lingua 49. 11–49.
McCarthy, John J. 1989. Linear order in phonological representation. Linguistic Inquiry 20. 71–99.
McCarthy, John J. & Alan Prince. 1993. Prosodic morphology I: Constraint interaction and satisfaction. Unpublished ms., University of Massachusetts, Amherst & Rutgers University.
McDonough, Joyce & Scott Myers. 1991. Consonant harmony and planar segregation in child language. Unpublished ms., University of California, Los Angeles & University of Texas, Austin.
Menn, Lise. 1978. Phonological units in beginning speech. In Alan Bell & Joan B. Hooper (eds.) Syllables and segments, 315–334. Amsterdam: North-Holland.
Moskowitz, Marlene. 1971. The acquisition of phonology. Ph.D. dissertation, University of California, Berkeley.
Paradis, Carole & Jean-François Prunet (eds.) 1991. The special status of coronals: Internal and external evidence. San Diego: Academic Press.
Pater, Joe & Adam Werle. 2001. Typology and variation in child consonant harmony. In Caroline Féry, Antony Dubach Green & Ruben van de Vijver (eds.) Proceedings of HILP 5, 119–139. Potsdam: University of Potsdam.
Pater, Joe & Adam Werle. 2003. Direction of assimilation in child consonant harmony. Canadian Journal of Linguistics 48. 385–408.
Pater, Joe, Christine Stager & Janet F. Werker. 2004. The perceptual acquisition of phonological contrasts. Language 80. 384–402.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Rose, Yvan. 2000. Headedness and prosodic licensing in the L1 acquisition of phonology. Ph.D. dissertation, McGill University.
Rose, Yvan. 2001. Licensing and feature interaction processes in child language. Proceedings of the West Coast Conference on Formal Linguistics 20. 484–497.
Rose, Yvan & Christophe dos Santos. 2004. The prosodic basis of consonant harmony and metathesis. Paper presented at the 3rd Phonological Acquisition Workshop, Nijmegen.
Rose, Yvan, Brian MacWhinney, Rod Byrne, Gregory Hedlund, Keith Maddocks, Philip O'Brien & Todd Wareham. 2006. Introducing Phon: A software solution for the study of phonological acquisition. In David Bamman, Tatiana Magnitskaia & Colleen Zaller (eds.) Proceedings of the 30th Annual Boston University Conference on Language Development, 489–500. Somerville, MA: Cascadilla Press.
Sagey, Elizabeth. 1986. The representation of features and relations in nonlinear phonology. Ph.D. dissertation, MIT.
Smith, Neil V. 1973. The acquisition of phonology: A case study. Cambridge: Cambridge University Press.
Stager, Christine & Janet F. Werker. 1997. Infants listen for more phonetic detail in speech perception than in word learning tasks. Nature 388. 381–382.
Stemberger, Joseph P. & Carol Stoel-Gammon. 1989. Underspecification and consonant harmony in child phonology. Unpublished ms., University of Minnesota.
Stemberger, Joseph P. & Carol Stoel-Gammon. 1991. The underspecification of coronals: Evidence from language acquisition and performance errors. In Paradis & Prunet (1991), 181–199.
Stoel-Gammon, Carol & Joseph P. Stemberger. 1994. Consonant harmony and phonological underspecification in child speech. In Mehmet Yavas (ed.) First and second language phonology, 63–80. San Diego: Singular Publishing Group.
Vihman, Marilyn M. 1978. Consonant harmony: Its scope and function in child language. In Joseph H. Greenberg, Charles A. Ferguson & Edith A. Moravcsik (eds.) Universals of human language, vol. 2: Phonology, 281–334. Stanford: Stanford University Press.
Waterson, Natalie. 1971. Child phonology: A prosodic view. Journal of Linguistics 7. 179–211.
Wauquier-Gravelines, Sophie. 2003. Troncation et reduplication: Peut-on parler de gabarits morphologiques dans le lexique précoce? In Bernard Fradin, Georgette Dal, Nabil Hathout, Françoise Kerleroux, Michel Roché & Marc Plénat (eds.) Les unités morphologiques, 76–84. Lille: University of Lille III.

73

Chain Shifts

Anna Łubowicz

1 What is a chain shift?

This chapter provides background on chain shift mappings. In a phonological chain shift, underlying /A/ maps onto surface [B] and underlying /B/ maps onto surface [C] in the same context but, crucially, underlying /A/ does not become surface [C]. Thus a chain shift has the standard representation A → B → C (see Ultan 1970; Kenstowicz and Kisseberth 1979; Labov 1994; Kirchner 1996; Parkinson 1996; Gnanadesikan 1997; Dinnsen and Barlow 1998; McCarthy 1999; Moreton and Smolensky 2002; amongst others). The Finnish vowel shift (McCawley 1964; Lehtinen 1967; Keyser and Kiparsky 1984; Anttila 1995, 2002a, 2002b; Karlsson 1999; Harrikari 2000) provides an example. In Finnish, before the plural suffix -i (and before the past tense marker -i), long low vowels shorten (/aa/ → [a]), short low vowels undergo rounding and raising (/a/ → [o]), and short round vowels surface unchanged (/o/ → [o]). Thus we have the following chain shift effect:

(1) Finnish chain shift
    aa → a → o

Some examples are given in (2).

(2)                       sing nom   plural essive
    a. /aa/ → [a]         maa        ma-i-na         'earth'
                          vapaa      vapa-i-na       'free'
    b. /a/ → [o]          kissa      kisso-i-na      'cat'
                          vapa       vapo-i-na       'fishing rod'
    c. /o/ → [o]          talo       talo-i-na       'house'
                          pelko      pelko-i-na      'fear'

The key issue is that in Finnish, forms with underlying long low vowels shorten but do not round (/aa/ → [a], *[o]), but forms with underlying short low vowels undergo rounding in the same context (/a/ → [o]).


Chain shifts have been found in diverse areas, including dialectal variation (Labov et al. 1972; Labov 1994; Labov et al. 2006), language acquisition (Smith 1973; Braine 1976; Macken 1980; Dinnsen and Barlow 1998; Dinnsen et al. 2001; Jesney 2007), synchronic phonology (Rubach 1984; Hayes 1986; Clements 1991; Kirchner 1996; Parkinson 1996; McCarthy 1999; Moreton 2004; van Oostendorp 2004), and diachronic phonology (Bauer 1979, 1992; Lass 1999; Schendl and Ritt 2002; Minkova and Stockwell 2003; Ahn 2004). This chapter focuses on synchronic chain shifts. A compendium of synchronic chain shifts has been compiled by Moreton (2004), and a corpus of synchronic chain shifts can also be found in Moreton and Smolensky (2002). It is important to note that some diachronic chain shifts have also been given a synchronic analysis, as in Miglio and Morén (2003).1

Many types of chain shifts have been described in the literature (see references in §2), and there is debate on whether the description of chain shift types is accurate. This chapter will address the typology of chain shift mappings in the context of various theoretical proposals. It will describe the types of chain shifts found in the literature, and assess the validity of this typology under different analyses.

Two background assumptions are made in this chapter. First, it is assumed that to describe the typology of chain shifts, it is important not only to provide an empirically correct analysis of a chain shift, but also to explain why chain shifts exist in phonology. That is, uncovering the motivation for chain shift mappings is essential to gain a full understanding of the genesis and acquisition of chain shifts. The second background assumption is that the typology of chain shifts can be (largely) described in terms of markedness, where markedness motivates a phonological process (see §4). Since, as will be shown, analyses based solely on markedness fail to account for the attested types of chain shifts, it is suggested that something in addition to markedness drives chain shifts. A possible solution to this problem is provided in the form of an analysis with contrast (see §3.3), where a phonological process can take place to preserve contrast as well as to satisfy markedness. This idea gives rise to potential avenues of productive research on chain shifts and phonology in general.2

The rest of this chapter is organized as follows. §2 introduces the typology of chain shifts. §3 describes several analyses of chain shift mappings. Finally, §4 explores the implications of the various analyses for the typology of chain shifts.

2 The typology

Chain shift mappings can be categorized by the type of segment (see §2.1) and the type of mapping (see §2.2) involved in the shift.3

1 For a discussion of diachronic processes in OT, see Holt (2003). See also chapter 93: sound change.
2 I would like to thank an anonymous reviewer for comments on this point.
3 There are other possible ways to characterize chain shifts that are not discussed here: the number of steps involved in the shift (Gnanadesikan 1997), the trajectories of changes (Labov 1994), the extent of mergers (near mergers vs. full mergers) (Parkinson 1996), and the location of the mapping (the segment vs. the environment) (Łubowicz 2004).

2.1 Segment type

Both vowels and consonants can be involved in a shift. Some examples of vowel height chain shifts come from Bassá (Bantu; Schmidt 1996), Gbanu (Niger-Congo; Bradshaw 1996), Kikuria (Bantu; Chacha and Odden 1998), Lena Spanish (Hualde 1989), Nzebi (Clements 1991), and Servigliano Italian (Kaze 1989); see also chapter 21: vowel height; chapter 110: metaphony in romance. These are mostly mappings involving raising (Parkinson 1996). Some examples of consonantal chain shifts come from Southern Paiute (Sapir 1930; McLaughlin 1984), Toba Batak (Hayes 1986), Estonian (Ultan 1970), Finnish (Ultan 1970), and Irish (chapter 117: celtic mutations; Ní Chiosáin 1991). These are mostly mappings involving lenition along either the voicing or the consonantal stricture scale (Gnanadesikan 1997) (see also chapter 13: the stricture features; chapter 65: consonant mutation; chapter 66: lenition). See (3) and (4), respectively.

(3) Vowel shifts
    a. New Zealand English (Labov 1994)   æ → e → i → ɨ
    b. Nzebi (Bantu; Clements 1991)       a → ɛ → e → i; ɔ → o → u

(4) Consonantal shifts (Ultan 1970)
    a. Southern Paiute (Uto-Aztecan; Sapir 1930)   pp → p → v
    b. Toba Batak (Austronesian; Hayes 1986)       np → pp → ʔp

A comprehensive account of vowel and consonant shifts is provided by Gnanadesikan (1997). The crux of Gnanadesikan's proposal is a set of ternary scales that explains which types of segments can be involved in a chain shift mapping.

2.2 Mapping type

Another criterion for categorizing chain shifts, and the one that is the focus of this chapter, is the type of the mapping involved. This includes pull shifts, push shifts, circular shifts, and what I will refer to as regular shifts (these will be discussed in §2.2.4). All four kinds of chain shifts have been proposed in the literature. They are described below.

2.2.1 Push shifts

Assume that /A/ maps onto [B] (A → B) and /B/ maps onto [C] (B → C), but, crucially, /A/ does not become [C]. Thus there is a chain shift effect of the form A → B → C. One type of chain shift is a push shift. In a phonological push shift, the latter step in the shift, /B/ → [C], is a consequence of the prior step, /A/ → [B], and not an independently motivated phonological process (see Martinet 1952, 1955; Labov 1994; Schendl and Ritt 2002; Miglio and Morén 2003; Ahn 2004; Maclagan and Hay 2004; Hsieh 2005; Barrie 2006; amongst others). Some examples of push shifts described in the literature include the Swedish shift (Benediktsson 1970; Labov 1994): a → aː → ɔː → oː → uː → ʉː; the New Zealand shift (Bauer 1979, 1992; Trudgill et al. 1998; Gordon et al. 2004; Maclagan and Hay 2004): æ → e → i → ei/ɨ (but see Labov 1994); the Northern Cities Shift (Labov 1994): e → ʌ → ɔ; the Great Vowel Shift in English (Luick 1914; Jespersen 1949; Miglio and Morén 2003; Minkova and Stockwell 2003): ɛː → eː → iː → ai; ɔː → oː → uː → au; the Short Vowel Shift in Early Modern English (Lass 1999; Schendl and Ritt 2002): u → o → ɔ → C; a → æ; and tone sandhi in Xiamen, a dialect of the Min language of the Sino-Tibetan family (Barrie 2006; Chen 1987) (see §4). The basic observation is that the latter mapping takes place as a consequence of the prior mapping (hence the push effect). Push shifts are further described in §4.1, using the example of tone sandhi. §4.1 also evaluates various theoretical proposals with respect to push shifts.

2.2.2 Pull shifts

Another kind of shift found in the literature is a pull shift, also known as a drag shift (King 1969). Pull shifts are the opposite of push shifts. In a pull shift, the prior mapping in the shift, /A/ → [B], takes place as a consequence of the latter mapping, /B/ → [C], and is not an independently motivated phonological process (hence the pull effect). Some examples of pull shifts reported in the literature include the diachronic changes involved in the Lettish and Lithuanian Chain Shift (Endzelin 1922; Labov 1994: 134): eː → iə → iː → ij and oː → uə → uː → uw; the North Frisian Chain Shift (Labov 1994: 136): iː → i → a; uː → u → a; the Middle Korean Vowel Shift (Labov 1994: 139): e → ə → ɨ → u → ɔ → a; and the Northern Cities Shift (Labov 1994: 195): ɔ → ɑ → æ → ɪə; i → e → æ. The arguments for pull shifts often involve historical evidence, whereby the latter mapping in the shift, /B/ → [C], historically precedes the prior mapping, /A/ → [B] (Labov 1994). It remains a question whether pull shifts are possible synchronically. This chapter will show that synchronic pull shifts are not admitted under any of the theoretical proposals to be described (see §4.2).

2.2.3 Circular shifts

In a circular shift, mappings form a circle or a semi-circle. Exchange processes (Anderson and Browne 1973) are examples of circular shifts: /A/ → [B] and /B/ → [A]. Circular shifts are often seen as morphologically conditioned (Anderson and Browne 1973; Moreton 1996; Alderete 2001a, 2001b; Horwood 2001). Some examples of circular shifts, or exchange mappings, are consonantal polarity in Luo (Gregersen 1972; Okoth-Okombo 1982; Alderete 2001a), plural formation in Diegueño verbs (Langdon 1970; Walker 1970), the vowel shift in Brussels Flemish (Zonneveld 1976; but see Moreton 1996), and tonal circles, also known as tone sandhi, in Xiamen (Cheng 1968, 1973; Yip 1980; Chen 1987; Moreton 1996; Hsieh 2005; Barrie 2006).4 It remains a question whether circular shifts have been described accurately in the literature. For example, it has been proposed that some circular chain shifts are conditioned morphologically, rather than being phonological (see also chapter 103: phonological sensitivity to morphological structure, but see Crowhurst 2000). Circular shifts are further discussed in §4.3.

2.2.4 Regular shifts

Finally, in what I will refer to as a regular shift, both mappings in the shift take place independently in the language, but form a chain shift when put together.

4 For additional examples see Moreton (1996: 21).


For example, in Sea Dayak (Kenstowicz and Kisseberth 1979) there is a chain shift that includes cluster simplification (consonant deletion) and vowel nasalization: ŋga → ŋa → ŋã. Both nasalization and cluster simplification are found independently in the language, but nasalization fails in the context of consonant deletion. A similar process interaction is found in Ulu Muar Malay (Hendon 1966; Lin, forthcoming). Regular shifts are discussed further in §4.4. The following sections will address the validity of the typology of chain shifts described above.

3 Theoretical accounts of chain shifts

Chain shifts are a type of opaque process (Kiparsky 1973; Rubach 1984). Chain shifts are opaque because in a phonological chain shift one (or more) of the phonological processes simply fails to apply as it should, given the phonology of the language. Depending on the perspective taken by the analyst, in a chain shift mapping a phonological process underapplies (it does not apply where expected) or overapplies (it applies where it should not). Both underapplication and overapplication have been shown to be examples of opaque processes (Benua 1997; McCarthy 1999, 2003a, 2003b). Underapplication and overapplication are also known as counterfeeding and counterbleeding opacity, respectively (Kiparsky 1973). In the following discussion, opaque processes are considered with respect to synchronic phonologies. Assuming a phonological chain shift A → B → C, the key property of chain shifts is that in the same context /A/ and /B/ map onto different outputs: /A/ goes to [B] and /B/ goes to [C]. This is unexpected: one might expect the two inputs to map onto the same output in one and the same grammar.5

(5) Expected mapping
    A → C
    B → C
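The mismatch between (5) and the attested chain shift pattern can be made concrete with a small sketch (my illustration, using the abstract A/B/C mappings of the text, with Finnish given only as a mnemonic in the comments): if the two processes were ordinary, transparently interacting rewrite steps, the first would feed the second, wrongly carrying /A/ all the way to [C].

process_1 = {"A": "B"}   # A -> B (cf. Finnish /aa/ -> [a])
process_2 = {"B": "C"}   # B -> C (cf. Finnish /a/ -> [o])

def transparent(x):
    x = process_1.get(x, x)
    return process_2.get(x, x)   # process 1 feeds process 2

chain_shift = {"A": "B", "B": "C", "C": "C"}   # the attested pattern

for x in "ABC":
    print(x, transparent(x), chain_shift[x])
# /A/ maps to [C] transparently but to [B] in the chain shift: the opacity.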

A theoretical account of chain shifts has been sought for a long time. The rest of this section describes several accounts of chain shifts found in the literature, such as rule ordering (§3.1), local conjunction (§3.2), and Optimality Theory with contrast (§3.3). The predictions of these approaches will be compared in §4.⁶

3.1 Rule ordering

In rule-based phonology (Kenstowicz and Kisseberth 1979; Rubach 1984), chain shifts are most commonly accounted for by rule ordering. In a phonological chain shift A → B → C, it is proposed that /B/ → [C] precedes /A/ → [B]. Thus, derived [B]s do not map onto [C]. This is illustrated by the following derivation. There are two rules that apply in a fixed order: B → C / __ D precedes A → B / __ D. This ordering results in a chain shift effect. 5

5 Another possibility would be for /B/ → [C] not to occur, so both /A/ and /B/ would map onto [B].
6 Chain shifts have also been accounted for by underspecification in rule-based phonology. The underspecification account of chain shifts is not discussed here, but see Kiparsky (1993).

(6) Chain shift in rule ordering

    Input                   /AD/             /BD/
    Rule 1: B → C / __ D    does not apply   CD
    Rule 2: A → B / __ D    BD               does not apply
    Output                  [BD]             [CD]

In Sea Dayak (Kenstowicz and Kisseberth 1979: 298, 308), there is a rule of nasalization which specifies that a vowel nasalizes immediately after a nasal, and a rule of nasal cluster simplification that deletes an obstruent following a nasal. Kenstowicz and Kisseberth propose that the rule of nasalization precedes the rule of nasal cluster simplification. Therefore nasalization fails in forms where the consonant deletes. Due to this rule ordering, the form /naŋga/ 'set up a ladder' maps onto [nãŋaʔ], while /naŋa/ 'straighten' maps onto [nãŋãʔ], with glottal insertion in both forms.

(7) Sea Dayak in a rule ordering analysis

    Input                          /naŋga/    /naŋa/
    Vowel nasalization             nãŋga      nãŋã
    Nasal cluster simplification   nãŋa       does not apply
    Output                         [nãŋaʔ]    [nãŋãʔ]
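As a concrete rendering of (6)–(7), the sketch below implements the two rules as functions over segment lists and applies them in the counterfeeding order. The segment encoding, and the omission of glottal insertion, are simplifying assumptions of mine, not part of the original analysis.

def nasalize(form):
    # Vowel nasalization: a vowel nasalizes immediately after a nasal.
    NASALS = {"m", "n", "ng"}
    return [seg + "~" if seg in ("a", "i", "u") and i > 0 and form[i - 1] in NASALS
            else seg
            for i, seg in enumerate(form)]

def simplify(form):
    # Nasal cluster simplification: delete an obstruent after a nasal.
    return [seg for i, seg in enumerate(form)
            if not (seg in ("g", "b", "d") and i > 0 and form[i - 1] == "ng")]

def derive(underlying):
    # Rule 1 (nasalization) precedes Rule 2 (simplification), so vowels
    # exposed by deletion are never nasalized: the chain shift effect.
    return simplify(nasalize(underlying))

print(derive(["n", "a", "ng", "g", "a"]))  # ['n', 'a~', 'ng', 'a']:  cf. [nãŋa(ʔ)]
print(derive(["n", "a", "ng", "a"]))       # ['n', 'a~', 'ng', 'a~']: cf. [nãŋã(ʔ)]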

The predictions of rule ordering for the typology of chain shifts will be discussed in §4.

3.2 Local conjunction

There are different ways to account for chain shifts in Optimality Theory (OT) (Prince and Smolensky 1993; Kager 1999; McCarthy 2002). This section describes a common way of accounting for chain shifts in OT, "local constraint conjunction." Other OT approaches to chain shifts include more general accounts of opacity in OT, such as Sympathy Theory (McCarthy 1999, 2003a), Stratal or Derivational OT (Kiparsky 2000; Rubach 2003; Bermúdez-Otero 2007), output–output correspondence (Benua 1997; Burzio 1998), targeted constraints (Wilson 2001), comparative markedness (McCarthy 2003b), turbidity (Goldrick and Smolensky 1999), gestural coordination theory (Lin, forthcoming), and candidate chain theory (McCarthy 2007). According to Gnanadesikan (1997) and Kirchner (1996), the solution to chain shift mappings in OT lies in an enriched theory of faithfulness (chapter 63: markedness and faithfulness constraints). Both researchers propose special types of faithfulness constraints that block two-step movements like /aai/ → [oi], thereby accounting for the discrepancy in phonological mappings between identical derived and underlying segments. Kirchner uses locally conjoined faithfulness constraints (Smolensky 1993, 1997), whereas Gnanadesikan distinguishes between classical Ident-type constraints and novel Ident-Adjacent-type constraints defined over a scale of similarity. Local conjunction is defined in (8) (Smolensky 1993).

(8) The local conjunction of C1 and C2 in domain D, [C1 & C2]D, is violated when there is some domain of type D in which both C1 and C2 are violated.
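To make (8) concrete, here is one possible rendering; it is an illustrative assumption of mine, not Kirchner's formalization. D is instantiated as the segment, and the conjuncts are one-step faithfulness constraints on the ternary scale A < B < C, so that only the two-step /A/ → [C] map violates the conjunction.

SCALE = {"A": 0, "B": 1, "C": 2}

def crosses(boundary):
    # One-step faithfulness: violated at any position whose input-output
    # pair straddles the given scale boundary (0 = A|B, 1 = B|C).
    def f(pairs):
        return {i for i, (x, y) in enumerate(pairs)
                if min(SCALE[x], SCALE[y]) <= boundary < max(SCALE[x], SCALE[y])}
    return f

def conjoin(c1, c2):
    # Definition (8) with D = segment: a mark wherever both conjuncts
    # are violated in the same segment.
    def conjoined(pairs):
        return c1(pairs) & c2(pairs)
    return conjoined

no_big_jump = conjoin(crosses(0), crosses(1))
print(no_big_jump([("A", "C")]))  # {0}: the two-step map is penalized
print(no_big_jump([("A", "B")]))  # set(): single steps escape the conjunction
print(no_big_jump([("B", "C")]))  # set()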


If the domain of local conjunction is a segment, C1 and C2 cannot both be violated in the same segment. The following tableaux give a schematic overview of the local conjunction analysis of chain shifts. A phonological process, /B/ → [C], applies only if there is no double violation of faithfulness in the same segment, [F(A→B) & F(B→C)]Seg.7

(9) Chain shifts in local conjunction

    No violation of local conjunction (/B/ → [C]):

    /BD/          [F(A→B) & F(B→C)]Seg   *BD   F(B→C)   F(A→B)
       a. BD                              *!
    ☞  b. CD                                    *

    Violation of local conjunction (/A/ → [B], *[C]):

    /AD/          [F(A→B) & F(B→C)]Seg   *BD   F(B→C)   F(A→B)
    ☞  a. BD                              *              *
       b. CD               *!                   *        *

Local conjunction blocks /A/ from mapping all the way onto [C] (see the second tableau in (9)). In other words, /B/ → [C] is blocked for underlying /A/s, but applies to underlying /B/s (compare the two tableaux in (9)). Applied to Sea Dayak (shown in (7)), the local conjunction analysis would be as follows:8

(10) Sea Dayak in a local conjunction analysis

    No violation of local conjunction:

    /naŋa/         [Ident[nas] & Max]AdjSeg   *NV   Ident[nas]   Max
       a. nãŋa                                 *!    *
    ☞  b. nãŋã                                       **

    Violation of local conjunction:

    /naŋga/        [Ident[nas] & Max]AdjSeg   *NV   Ident[nas]   Max
    ☞  a. nãŋa                                 *     *            *
       b. nãŋã                *!                     **           *

7 Faithfulness constraints in the conjunction [F(A→B) & F(B→C)]Seg refer to the dimensions of similarity between segments and do not require that [B] is visible in the output. As suggested by an editor of the Companion to Phonology, the conjunction can be rewritten as [F(A→B) & F(A→C)]Seg. The difference lies in the formulation of the latter faithfulness constraint, F(B→C) vs. F(A→C). For notational convenience, I have chosen to refer to the latter conjunct as F(B→C).
8 The constraints are as follows: *NV (no nasal followed by a non-nasal vowel), Max (no deletion), Ident[nas] (no change in nasality compared with the input), and [Ident[nas] & Max]AdjSeg (no change in nasality and deletion in adjacent segments).

Due to local conjunction, there is no nasalization in the form that undergoes cluster simplification (see the second tableau in (10)). However, nasalization applies in the form with no cluster simplification (see the first tableau).
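The strict-domination evaluation behind (10) can be sketched in a few lines. The violation vectors below are read off the tableaux above; the evaluator itself (lexicographic comparison of violation vectors) is a standard rendering of Eval, not something specific to this chapter, and the ASCII spellings ("N" for the velar nasal, "~" for nasality) are assumptions for display only.

# Constraint order per (10); names follow note 8.
RANKING = ("[Ident[nas] & Max]AdjSeg", "*NV", "Ident[nas]", "Max")

# Violation vectors, ordered as in RANKING, read off the tableaux in (10).
TABLEAUX = {
    "naNga": {"na~Na":  (0, 1, 1, 1),    # deletes /g/, final vowel stays oral
              "na~Na~": (1, 0, 2, 1)},   # fatal: the conjunction is violated
    "naNa":  {"na~Na":  (0, 1, 1, 0),
              "na~Na~": (0, 0, 2, 0)},   # nasalization applies freely
}

def optimal(input_form):
    # Strict domination = lexicographic comparison of the tuples.
    candidates = TABLEAUX[input_form]
    return min(candidates, key=candidates.get)

print(optimal("naNga"))  # na~Na  : no nasalization where /g/ deletes
print(optimal("naNa"))   # na~Na~ : nasalization goes through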

3.3 OT with contrast

An alternative explanation for chain shifts in OT starts from the observation that chain shifts always preserve one underlying contrast at the expense of neutralizing another underlying contrast (Łubowicz 2003). In Finnish (shown in (1) and (2)), the contrast between underlying /aai/ and /ai/, originally one of length, is preserved, albeit in a different form – as a rounding contrast (underlying /aai/ vs. /ai/, surface [ai] vs. [oi]). The contrast between underlying /ai/ and /oi/, the original rounding contrast, is lost (both become [oi]). Thus:

(11)  Input                  Output
      length contrast    →   rounding contrast
      rounding contrast  →   neutralized

This is referred to as contrast transformation.9 This observation has given rise to Preserve Contrast (PC) theory (Łubowicz 2003, 2004, 2007, forthcoming).10 The key idea is that contrast exists as an imperative in a phonological system, formalized as a family of rankable and violable constraints which demand that contrast be preserved: "PC constraints" (cf. Flemming 1995, 2004; Padgett 1997, 2003; Padgett and Zygis 2007; amongst others). PC constraints demand that pairs of words that contrast underlyingly with respect to a given phonological property P contrast on the surface (not necessarily with respect to P). Such constraints are defined in (12).

(12) PC(P)
     For each pair of inputs contrasting in P that map onto the same output in a scenario, assign a violation mark. Formally, assign one mark for every pair of inputs, in_a and in_b, if in_a has P and in_b lacks P, in_a → out_k, and in_b → out_k. (Informally: if inputs are distinct in P, they need to remain distinct in the output, though not necessarily in P.)11

P is a potentially contrastive phonological property, such as a distinctive feature, length, stress, or the presence vs. absence of a segment. The properties P, then, are essentially the same as the properties governed by faithfulness constraints in standard OT. Indeed, PC(P) constraints are like faithfulness constraints in that they look at two levels of representation. But they are novel in that they evaluate contrasts for pairs of underlying words and corresponding output words instead of evaluating individual input–output mappings. To evaluate constraints on contrast, candidates must be sets of mappings, called scenarios. There are also generalized faithfulness constraints in PC theory; these will become important in the discussion of circular shifts in §4.3. The following tableau shows how PC theory accounts for chain shifts. It compares three scenarios: a chain shift scenario, an identity scenario where all inputs map onto identical outputs, and a transparent scenario with no /B/ → [C]. The constraint ranking proposed below captures the observation that the initial mapping in the shift, /A/ → [B], is due to markedness, but the subsequent mapping, /B/ → [C], is facilitated by contrast preservation. The relevant ranking is: *A, PC(A/B) >> PC(B/C).12

9 Earlier works on contrast include Kaye (1974, 1975), Gussmann (1976), and Kisseberth (1976).
10 Other works on PC theory include Tessier (2004), Barrie (2006), Flack (2007), and Riggs (2008). Similar ideas are expressed in the Dispersion Theory of Contrast (Flemming 1995, 2004; Padgett 1997, 2003; Itô and Mester 2004; Bradley 2001; Padgett and Zygis 2007).
11 What it means to contrast in P is defined as follows: a pair of inputs, in_a and in_b, contrast in P when corresponding segments in those inputs, seg_a and seg_b, are such that seg_a has P and seg_b lacks P.

(13) Chain shifts under contrast

     Scenarios                                              *A    PC(A/B)   PC(B/C)
     ☞  a. Chain shift    /A/ → [B], /B/ → [C], /C/ → [C]                   *
        b. Identity       /A/ → [A], /B/ → [B], /C/ → [C]   *!
        c. Transparent    /A/ → [B], /B/ → [B], /C/ → [C]         *!

The identity scenario, scenario (13b), loses because it violates markedness *A. The transparent scenario, scenario (13c), neutralizes contrast, thus violating PC(A/B). The chain shift scenario, scenario (13a), wins under the proposed constraint ranking. In the chain shift scenario, the /B/ → [C] mapping applies to preserve contrast between /A/ and /B/.13 The predictions of PC theory will be further discussed in the following section.14
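Because PC constraints evaluate whole scenarios rather than single mappings, tableau (13) can be recomputed with a few lines of code. This is an illustrative sketch of mine, under the simplifying assumption that the "property" distinguishing inputs is just the segment label itself.

SCENARIOS = {
    "a. chain shift": {"A": "B", "B": "C", "C": "C"},
    "b. identity":    {"A": "A", "B": "B", "C": "C"},
    "c. transparent": {"A": "B", "B": "B", "C": "C"},
}

def pc(x, y):
    # PC(x/y): a mark if inputs /x/ and /y/ merge on the surface.
    return lambda scen: int(scen[x] == scen[y])

def star(seg):
    # Markedness *seg: a mark per output realized as the marked segment.
    return lambda scen: sum(out == seg for out in scen.values())

RANKING = [star("A"), pc("A", "B"), pc("B", "C")]   # the ranking in (13)

def winner(scenarios, ranking):
    return min(scenarios,
               key=lambda name: [c(scenarios[name]) for c in ranking])

print(winner(SCENARIOS, RANKING))  # a. chain shift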

4 Implications for the typology of chain shifts

The goal of this section is to examine what chain shifts are predicted to occur under the various approaches. Chain shifts will be subdivided on the basis of the relationship between the various stages in the shift. This typology can be referred to as teleological, because it differentiates the reasons why a chain shift takes place, rather than merely describing the chain shift. This view is not novel, and is also taken by Optimality Theory. Rule-based approaches are non-teleological, since rules do not look for the reason why a phonological process takes place. The predictions of both rule-based and OT approaches to chain shifts are described below.15

12 It is important to point out that not all chain shifts are push shift mappings in this approach to phonology. PC theory can also describe the "regular" shifts (see §4.4).
13 A competing derived environment scenario where A → C and B → B incurs the same violations of PC and markedness constraints as the winning chain shift scenario. It has been proposed that the two scenarios would be differentiated in the second stage of Eval by generalized faithfulness constraints (Łubowicz 2003).
14 For a comprehensive discussion of PC theory and its predictions, see Łubowicz (2003, forthcoming).
15 Thanks to a reviewer for comments on this point.

4.1 Push shifts

Recall that in push shift mappings (see §2.2.1), the latter mapping in the shift is a consequence of the prior mapping and not an independently motivated phonological process. To argue that A → B → C is a push shift mapping, one must show that there is no independent motivation for the latter mapping in the shift, B → C (in OT, no markedness constraint that favors [C] over [B]). In addition, one must establish that the mapping /B/ → [C] is always linked with /A/ → [B]. The tonal system of Xiamen, a dialect of the Min language of the Sino-Tibetan family (see Chen 1987; Barrie 2006; chapter 107: chinese tone sandhi), provides an example of a push shift mapping. The diagram in (14) uses the following notation: [U] is upper register, [L] lower register, [lh] rising pitch, and [hl] falling pitch. Thus [U, lh], for example, is a high-rising tone, where the pitch moves from the low end of the upper register to the high end.

(14) Xiamen tone sandhi

     [U, lh] → [L, h] → [L, hl] → [U, hl] → [U, h]
                  ↑                            │
                  └────────────────────────────┘

Barrie (2006) observes that the tonal shift above is an example of a push shift, because the latter mapping in the shift, [L, h] → [L, hl], is a result of the prior mapping, [U, lh] → [L, h]. Specifically, the initial mapping in the shift, [U, lh] → [L, h], is due to markedness improvement, such as the avoidance of rising tones (*[U, lh]). However, the subsequent mapping, [L, h] → [L, hl], does not improve on markedness, since it creates a contour tone from an input level tone, and research on tonal markedness has shown that contour tones are more marked than level tones (Alderete 2001a; Yip 2004; chapter 45: the representation of tone). Barrie proposes that the latter mapping in the shift, [L, h] → [L, hl], is a result of the initial mapping, [U, lh] → [L, h], and the need to maintain contrast between tones of the two registers, [U] vs. [L]. In other words, due to the chain shift effect, the input tones [U, lh] and [L, h] map onto [L, h] and [L, hl], respectively. Otherwise, if there were no shift, both input tones would map onto the same output, [L, h]. The remaining mappings in the shift are analyzed in a similar manner (for a complete analysis see Hsieh 2005 and Barrie 2006).

Push shifts are problematic for rule-based approaches to phonology. In rule-based approaches, chain shifts are accounted for by rule ordering (see §3.1). But in a push shift mapping, there is no separate rule that accounts for the latter mapping in the shift. If there were a separate rule for the latter mapping in the shift, it would be predicted that the process would occur outside of the chain-shifting context. But in a push shift, the latter process is always linked with the prior process, and does not apply independently. In consequence, synchronic push shifts cannot be described under a rule-based analysis.

Similarly, push shifts are problematic for OT without contrast (see §3.2). This approach does not admit push shifts, because in OT a phonological process can only apply if it is forced by a high-ranked markedness constraint (Moreton 1996). But in a push shift, /B/ → [C] is not due to markedness. In Xiamen (see (14)), both the initial mapping in the shift, [U, lh] → [L, h], and the subsequent mapping, [L, h] → [L, hl], would need to be forced by markedness constraints. The problem here is that [L, h] → [L, hl] cannot be accounted for by markedness improvement. Thus, OT without contrast does not account for push shifts, because in a push shift mapping there is no high-ranked markedness constraint that triggers the latter mapping in the shift.16

OT with contrast (see §3.3), on the other hand, proposes a solution to push shift mappings which makes use of contrast as an independent principle in a phonological system. The key observation is that the initial mapping in the shift, /A/ → [B], takes place to improve on markedness, but the latter mapping in the shift, /B/ → [C], is due to contrast preservation. For the tone sandhi shown in (14), OT with contrast proposes that the latter mapping, [L, h] → [L, hl], takes place to preserve the contrast between tones of the two registers, [U] vs. [L] (Barrie 2006). Thus, OT with contrast, unlike the other approaches, admits push shifts. Pending further evidence on push shift mappings, OT with contrast is promising in this respect.

4.2 Pull shifts

Another type of chain shift found in the literature is a pull shift mapping. In a pull shift mapping, also known as a drag shift, the prior mapping in the shift, /A/ → [B], takes place because the latter mapping, /B/ → [C], occurs (King 1969). In OT terms, the prior mapping in the pull shift does not occur due to markedness. Unlike push shifts, pull shift mappings are ruled out in all approaches to chain shifts discussed so far. Rule-based approaches do not allow pull shifts, because they require that there be a separate rule that accounts for each mapping in the shift (see §3.1). In a pull shift mapping, however, there is no separate rule that accounts for the initial mapping in the shift. Rather, the initial mapping in the shift is always linked with the latter mapping, and does not occur independently. Thus, pull shifts are predicted not to occur by rule ordering.17

16 In response to a reviewer's comments, it is important to point out that additional mechanisms in OT such as strata, candidate chains, sympathy, etc. do not admit push shifts. All these models require markedness to force a phonological process, and in a push shift markedness is not enough. As discussed here, the introduction of contrast constraints into Con provides a possible account of push shifts.
17 A reviewer points out that a rule-based approach to phonology is inherently non-teleological, unlike OT, and thus would not differentiate chain shifts based on the cause of a particular process, such as pull shifts vs. push shifts.

In OT without contrast (see §3.2), similarly, pull shifts cannot be described. The only way to obtain a phonological mapping in standard OT is by markedness improvement (Moreton 1996). But in a pull shift mapping, it has been proposed that the initial mapping in the shift cannot be explained by markedness improvement, and it is thus not predicted to occur. OT with contrast (see §3.3) also rules out pull shifts. In OT with contrast, a phonological mapping can take place to improve on markedness or to preserve contrast. However, as will be shown below, in a pull shift mapping there is no improvement on either markedness or contrast. Consider A → B → C as an example of a pull shift. The mapping /B/ → [C] is forced by markedness, but the initial mapping /A/ → [B] is not. This is illustrated in the following tableau, which compares a no shift scenario, scenario (15a), to a pull shift scenario, scenario (15b), under the constraint ranking PC(A/B), *B >> PC(B/C). The crucial point is that there is no markedness constraint that compels the /A/ → [B] mapping.

(15) Pull shifts cannot be described in OT with contrast

     Scenarios                                              PC(A/B)   *B    PC(B/C)
     ☞  a. No shift      /A/ → [A], /B/ → [C], /C/ → [C]                    *
        b. Pull shift    /A/ → [B], /B/ → [C], /C/ → [C]              *!    *

Both scenarios preserve the contrast between /A/ and /B/, because underlying /A/ and /B/ map onto different outputs in both scenarios. Both scenarios map /B/ onto [C] in accordance with the ranking of the markedness constraint *B above PC(B/C). But the pull shift scenario, scenario (15b), incurs a fatal violation of *B, and is thus harmonically bounded by the no shift scenario, (15a).18 Thus, in OT with contrast, pull shifts are predicted not to occur.19 In summary, synchronic pull shifts cannot be described in any of the approaches described above. Diachronic pull shifts are different, as they can be seen as different processes that apply at different stages in the development of the language (Holt 2003).

18 A harmonically bounded scenario incurs a superset of violation marks in comparison to a competing scenario. This is under the assumption that there is no other constraint in Con that favors scenario (15b) over (15a).
19 Though not allowing for pull shifts, OT with contrast admits a sequence of changes that resembles a pull shift effect. Take a situation where /A/ → [B] "wants to happen" (due to *A) but is blocked by /B/ → [B] (to avoid neutralization). Then /B/ → [C] comes along. Now the /A/ → [B] mapping can emerge. This scenario is what I refer to as a regular shift. It is not a pull shift, because there is a markedness constraint *A that drives /A/ → [B].

4.3 Circular shifts

Another type of chain shift found in the literature is a circular shift, also known as an exchange process (Anderson and Browne 1973). Examples of circular shifts were given in §2.2.3. There is a debate in the literature on whether circular shifts exist and to what extent they are conditioned by morphology. Many of the circular shifts have been reanalyzed (see Moreton 1996) or, as we saw in Xiamen (see (14)), do not constitute a complete circle, as they contain a termination (neutralization) point. Rule-based approaches can describe circular shifts with no termination point (see McCarthy 1999).20 Unlike rule-based approaches, both standard OT (see §3.2) and the contrast approach (see §3.3) rule out circular shifts with no termination point, for example /A/ → [B] and /B/ → [A]. As shown by Moreton (1996), a circular shift is not admitted in non-contrast OT, since it does not improve on markedness. As will be shown below, a further feature that makes circular shifts with no termination point incompatible with OT with contrast is the fact that they do not improve on markedness or contrast, and involve an unmotivated violation of generalized faithfulness. The following tableau compares a circular shift scenario, scenario (16a), to an identity scenario, scenario (16b).

(16) No circular shifts: Unmotivated violations of faithfulness

     Scenarios                                                         PC   Markedness        Faith
        a. Circular shift   /A/ → [D], /B/ → [C], /C/ → [B], /D/ → [A]      [A] [B] [C] [D]   ****!
     ☞  b. Identity         /A/ → [A], /B/ → [B], /C/ → [C], /D/ → [D]      [A] [B] [C] [D]

The circular shift is harmonically bounded by the competing identity scenario. Both scenarios satisfy PC constraints and incur the same violations of markedness (they have the same outputs). But the circular shift scenario, scenario (16a), is ruled out by generalized faithfulness. There are unmotivated violations of faithfulness in this scenario.21 Circular shifts with a termination point, for example /A/ → [B], /B/ → [C], /C/ → [B], are predicted to occur in OT with contrast, since the initial step in the shift improves on markedness (see §4.1). A circular shift with a termination point refers to a shift where one of the original inputs is never used as the output. In the example cited here, the input /A/ is never used as the output in the shift. The tone sandhi analyzed in Barrie (2006) and Hsieh (2005) is of that form. It is important to note that such a scenario would not be harmonically bounded by a no shift scenario, /A/ → [B], /B/ → [B], /C/ → [C], because the two scenarios differ as to whether they preserve contrast between underlying /A/ and /B/. 20 21
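Harmonic bounding, as used in this argument, is a ranking-independent comparison and is easy to state directly. The schematic violation counts below are read off (16), with the shared markedness marks collapsed into a single arbitrary number; the function name is my own.

# Schematic profiles from (16): PC and markedness tie; only Faith differs.
circular = {"PC": 0, "Markedness": 4, "Faith": 4}
identity = {"PC": 0, "Markedness": 4, "Faith": 0}

def harmonically_bounds(better, worse):
    # `worse` is bounded if it is at least as bad on every constraint and
    # strictly worse on at least one: no reranking can then rescue it.
    return (all(worse[c] >= better[c] for c in better)
            and any(worse[c] > better[c] for c in better))

print(harmonically_bounds(identity, circular))  # True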

McCarthy (1999) characterizes these shifts as involving the “multi-process Duke of York gambit.” Some instances of circular shifts are also ruled out by relational PC (see Ìubowicz 2003).

14

Anna Ìubowicz

To sum up, circular shifts are not admitted to OT, with or without contrast. But they are predicted to occur in rule-based approaches.

4.4

Regular shifts

Finally, there are regular shifts where each mapping occurs independently, but one of the mappings is blocked when they co-occur (see §2.2.4). All approaches to chain shifts predict regular shifts. Rule-based approaches predict regular shifts by rule ordering (see §3.1). Standard OT describes regular shifts by blocking an otherwise regular phonological process, for example by special faithfulness constraints (see §3.2). Similarly, OT with contrast (see §3.3) accounts for regular shifts by blocking an otherwise regular phonological process by PC constraints, as shown below. Consider a regular chain shift A → B → C, where each step in the shift is forced by markedness constraints. But the process /B/ → [C] is blocked for underlying /A/ to maintain the contrast between underlying /A/ and /B/. The ranking and the relevant tableau are given below. A no shift scenario, scenario (18a), where both /A/ and /B/ map onto [C], is compared with a regular shift, scenario (18b). (17)

The role of contrast PC(A/B) >> *B >> PC(B/C)

(18)

(also *A >> *B)

Regular shifts exist

a. No shift A

B

C

Scenarios

PC(A/B)

/A/ → [C] /B/ → [C] /C/ → [C]

*!

☞ b. Regular shift /A/ → [B] /B/ → [C] /C/ → [C] A B C

*B

PC(B/C) *

*

*

Scenario (18a) loses since it merges the contrast between underlying /A/ and /B/, thus violating PC(A/B). Scenario (18b) wins because in this scenario /A/ and /B/ map onto distinct outputs. This is at the expense of violating the markedness constraint *B. Thus, OT with contrast predicts regular shifts to occur. To summarize, this section has contributed to the debate on what chain shift mappings are possible. It has provided an analysis on what types of chain shifts are predicted to occur under different theoretical proposals. OT with contrast has been shown to predict push shift mappings which are attested, but cannot be described under any of the other analyses.

5

Summary

This chapter has provided a typology of chain shift mappings in the context of a number of theoretical approaches to chain shifts. Chain shifts have puzzled

Chain Shifts

15

researchers for a long time. Many types of chain shifts have been described in the literature (see §2) and numerous theoretical approaches have been put forward to account for them (see §3). The goal of this chapter has been to evaluate the various approaches to chain shifts and compare their implications for the typology of chain shift mappings (see §4). Four types of chain shifts have been examined: push shifts, pull shifts, circular shifts, and regular shifts. Out of those, regular shifts have been the easiest to account for (see §4.4). Circular shifts are only allowed in rule-based phonology (see §4.3). Pull shifts are left with no straightforward phonological explanation (see §4.2). Push shifts are only admitted by OT with contrast (see §4.1). Depending on further evidence on push shifts, the contrast approach seems to be promising in this respect and differs crucially from other existing accounts of chain shifts in the predictions which it makes.

ACKNOWLEDGMENTS I would like to thank Marc van Oostendorp, Beth Hume, and two anonymous reviewers for their comments and valuable suggestions.

REFERENCES Ahn, Sang-Cheol. 2004. Towards the optimal account of diachronic chain shifts. Studies in Phonetics, Phonology and Morphology 10. 43–67. Alderete, John. 2001a. Morphologically governed accent in Optimality Theory. New York & London: Routledge. Alderete, John. 2001b. Dominance effects as transderivational anti-faithfulness. Phonology 18. 201–253. Anderson, Stephen R. & Wayles Browne. 1973. On keeping exchange rules in Czech. Papers in Linguistics 6. 445–482. Anttila, Arto. 1995. Deriving variation from grammar: A study of Finnish genitives. Unpublished ms., Stanford University (ROA-63). Anttila, Arto. 2002a. Morphologically conditioned phonological alternations. Natural Language and Linguistic Theory 20. 1–42. Anttila, Arto. 2002b. Variation and phonological theory. In J. K. Chambers, Peter Trudgill & Natalie Schilling-Estes (eds.) The handbook of language variation and change, 206–243. Oxford: Blackwell. Barrie, Michael. 2006. Tone circles and contrast preservation. Linguistic Inquiry 37. 131–141. Bauer, Laurie. 1979. The second Great Vowel Shift? Journal of the International Phonetic Association 9. 57–66. Bauer, Laurie. 1992. The second Great Vowel Shift revisited. English World-Wide 18. 253–268. Benediktsson, Hreinn (ed.) 1970. The Nordic languages and modern linguistics. Reykjavik: Societas Scientarium Islandica. Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst. Published 2000 as Phonological relations between words. New York: Garland. Bermúdez-Otero, Ricardo. 2007. Morphological structure and phonological domains in Spanish denominal derivation. In Fernando Martínez-Gil & Sonia Colina (eds.) Optimality-theoretic studies in Spanish phonology, 278–311. Amsterdam & Philadelphia: John Benjamins.

16

Anna Ìubowicz

Bradley, Travis G. 2001. The phonetics and phonology of rhotic duration contrast and neutralization. Ph.D. dissertation, Pennsylvania State University (ROA-473). Bradshaw, Mary M. 1996. One-step raising in Gbanu. Ohio State Working Papers in Linguistics 48. 1–11. Braine, Martin D. S. 1976. Review of Smith (1973). Language 52. 489–498. Burzio, Luigi. 1998. Multiple correspondence. Lingua 104. 79–109. Chacha, Chacha Nyaigotti & David Odden. 1998. The phonology of vocalic height in Kikuria. Studies in African Linguistics 27. 129–158. Chen, Matthew Y. 1987. The syntax of Xiamen tone sandhi. Phonology Yearbook 4. 109–149. Cheng, Richard L. 1968. Tone sandhi in Taiwanese. Linguistics 41. 19–42. Cheng, Richard L. 1973. Some notes on tone sandhi in Taiwanese. Linguistics 100. 5–25. Clements, G. N. 1991. Vowel height assimilation in Bantu languages. Proceedings of the Annual Meeting, Berkeley Linguistics Society 17. 25–63. Crowhurst, Megan J. 2000. A flip-flop in Sirionó (Tupian): The mutual exchange of /i q/. International Journal of American Linguistics 66. 57–75. Dinnsen, Daniel A. & Jessica Barlow. 1998. On the characterization of a chain shift in normal and delayed phonological acquisition. Journal of Child Language 25. 61–94. Dinnsen, Daniel A., Kathleen O’Connor & Judith Gierut. 2001. The puzzle-puddle-pickle problem and the Duke-of-York gambit in acquisition. Journal of Linguistics 37. 503–525. Endzelin, J. 1922. Lettische Grammatik. Heidelberg: Carl Winter. Flack, Kathryn. 2007. Ambiguity avoidance as contrast preservation: Case and word order freezing in Japanese. University of Massachusetts Occasional Papers in Linguistics 32. 57–88. Flemming, Edward. 1995. Auditory representations in phonology. Ph.D. dissertation, University of California, Los Angeles. Flemming, Edward. 2004. Contrast and perceptual distinctness. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 232–276. Cambridge: Cambridge University Press. Gnanadesikan, Amalia E. 1997. Phonology with ternary scales. Ph.D. dissertation, University of Massachusetts (ROA-195). Goldrick, Matthew & Paul Smolensky. 1999. Opacity and turbid representations in Optimality Theory. Paper presented at the 35th Annual Meeting of the Chicago Linguistic Society. Gordon, Elizabeth, Lyle Campbell, Jennifer Hay, Margaret Maclagan, Andrea Sudbury & Peter Trudgill. 2004. New Zealand English: Its origins and evolution. Cambridge: Cambridge University Press. Gregersen, Edgar A. 1972. Consonant polarity in Nilotic. In Erhard Voeltz (ed.) 3rd Annual Conference on African Linguistics, 105–109. Bloomington: Indiana University. Gussmann, Edmund. 1976. Recoverable derivations and phonological change. Lingua 40. 281–303. Harrikari, Heli. 2000. Segmental length in Finnish: Studies within a constraint-based approach. Ph.D. dissertation, University of Helsinki. Hayes, Bruce. 1986. Assimilation as spreading in Toba Batak. Linguistic Inquiry 17. 467–499. Hendon, Rufus S. 1966. The phonology and morphology of Ulu Muar Malay. Harvard: Yale University. Holt, D. Eric (ed.) 2003. Optimality Theory and language change. Dordrecht: Kluwer. Horwood, Graham. 2001. Anti-faithfulness and subtractive morphology. Unpublished ms., Rutgers University (ROA-466). Hsieh, Feng-fan. 2005. Tonal chain-shifts as anti-neutralization-induced tone sandhi. In Sudha Arunachalam, Tatjana Scheffler, Sandhya Sundaresan & Joshua Tauberer (eds.) Proceedings of the 28th Annual Penn Linguistics Colloquium, 99–112. 
Philadelphia: Department of Linguistics, University of Pennsylvania.

Chain Shifts

17

Hualde, José Ignacio. 1989. Autosegmental and metrical spreading in the vowel-harmony systems of northwestern Spain. Linguistics 27. 773–805. Itô, Junko & Armin Mester. 2004. Morphological contrast and merger: Ranuki in Japanese. Journal of Japanese Linguistics 20. 1–18. Jesney, Karen. 2007. Child chain shifts as faithfulness to input prominence. In Alyona Belikova, Luisa Meroni & Mari Umeda (eds.) Proceedings of the 2nd Conference on Generative Approaches to Language Acquisition, North America, 188–199. Somerville, MA: Cascadilla Press. Jespersen, Otto. 1949. A Modern English grammar on historical principles. Part I: Sounds and spellings. London: George Allen & Unwin. Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press. Karlsson, Fred. 1999. Finnish: An essential grammar. 2nd edn. London & New York: Routledge. Kaye, Jonathan. 1974. Opacity and recoverability in phonology. Canadian Journal of Linguistics 19. 134–149. Kaye, Jonathan. 1975. A functional explanation for rule ordering in phonology. Papers from the Annual Regional Meeting, Chicago Linguistic Society 11. 244–252. Kaze, Jeffrey W. 1989. Metaphony in Italian and Spanish dialects revisited. Ph.D. thesis, University of Illinois. Kenstowicz, Michael & Charles W. Kisseberth. 1979. Generative phonology: Description and theory. New York: Academic Press. Keyser, Samuel J. & Paul Kiparsky. 1984. Syllable structure in Finnish phonology. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 7–31. Cambridge, MA: MIT Press. King, Robert D. 1969. Push chains and drag chains. Glossa 3. 3–21. Kiparsky, Paul. 1973. Phonological representations: Abstractness, opacity, and global rules. In Osamu Fujimura (ed.) Three dimensions of linguistic theory, 57–86. Tokyo: Taikusha. Kiparsky, Paul. 1993. Blocking in nonderived environments. In Sharon Hargus & Ellen M. Kaisse (eds.) Studies in Lexical Phonology, 277–313. San Diego: Academic Press. Kiparsky, Paul. 2000. Opacity and cyclicity. The Linguistic Review 17. 351–365. Kirchner, Robert. 1996. Synchronic chain shifts in Optimality Theory. Linguistic Inquiry 27. 341–350. Kisseberth, Charles W. 1976. The interaction of phonological rules and the polarity of language. In Andreas Koutsoudas (ed.) The application and ordering of phonological rules, 41–54. The Hague: Mouton. Labov, William. 1994. Principles of linguistic change, vol. 1: Internal factors. Oxford: Blackwell. Labov, William, Sharon Ash & Charles Boberg. 2006. The atlas of North American English: Phonetics, phonology and sound change. Berlin & New York: Mouton de Gruyter. Labov, William, Malcah Yaeger & Richard Steiner. 1972. A quantitative study of sound change in progress. Philadelphia: U.S. Regional Survey. Langdon, Margaret. 1970. A grammar of Diegueño: The Mesa Grande dialect. Berkeley: University of California Press. Lass, Roger. 1999. Phonology and morphology. In Richard M. Hogg (ed.) The Cambridge history of the English language, vol. 3: 1476–1776, 56–186. Cambridge: Cambridge University Press. Lehtinen, Meri. 1967. Basic course in Finnish. Bloomington: Indiana University Publications. Lin, Hua. Forthcoming. Apparent deletion in a grammar of gestural coordination. Ph.D. dissertation, University of Southern California. Ìubowicz, Anna. 2003. Contrast preservation in phonological mappings. Ph.D. dissertation, University of Massachusetts, Amherst (ROA-554). Ìubowicz, Anna. 2004. Counter-feeding opacity as a chain shift effect. Proceedings of the West Coast Conference on Formal Linguistics 22. 315–327.

18

Anna Ìubowicz

Ìubowicz, Anna. 2007. Paradigmatic contrast in Polish. Journal of Slavic Linguistics 15. 229–262. Ìubowicz, Anna. Forthcoming. The phonology of contrast. London: Equinox. Luick, K. 1914. Historische Grammatik der englischen Sprache. 2 vols. Stuttgart: Tauchnitz. Macken, Marlys A. 1980. The child’s lexical representation: The “puzzle-puddle-pickle” evidence. Journal of Linguistics 16. 1–17. Maclagan, Margaret & Jennifer Hay. 2004. The rise and rise of New Zealand English DRESS. In S. Cassidy, F. Cox, R. Mannell & S. Palethorpe (eds.) Proceedings of the 10th Australian International Conference on Speech Science and Technology, 183–188. Sydney: Australian Speech Science and Technology Association. Martinet, André. 1952. Function, structure, and sound change. Word 8. 1–32. Martinet, André. 1955. Économie des changements phonétiques. Berne: Francke. McCarthy, John J. 1999. Sympathy and phonological opacity. Phonology 16. 331–399. McCarthy, John J. 2002. A thematic guide to Optimality Theory. Cambridge: Cambridge University Press. McCarthy, John J. 2003a. Sympathy, cumulativity, and the Duke-of-York gambit. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 23–76. Cambridge: Cambridge University Press. McCarthy, John J. 2003b. Comparative markedness. Theoretical Linguistics 29. 1–51. McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox. McCawley, James D. 1964. The morphophonemics of the Finnish noun. Unpublished ms., MIT. McLaughlin, John E. 1984. A revised approach to Southern Paiute phonology. Kansas Working Papers in Linguistics 9. 47–79. Miglio, Viola & Bruce Morén. 2003. Merger avoidance and lexical reconstruction. In Holt (2003), 191–228. Minkova, Donka & Robert Stockwell. 2003. English vowel shifts and “optimal” diphthongs: Is there a logical link? In Holt (2003), 169–190. Moreton, Elliott. 1996. Non-computable functions in Optimality Theory. Unpublished ms., University of Massachusetts, Amherst (ROA-364). Moreton, Elliott. 2004. A compendium of synchronic chain shifts. Unpublished ms., University of North Carolina, Chapel Hill. Available (June 2010) at www.unc.edu/ ~moreton/Materials/Chainshifts.pdf. Moreton, Elliott & Paul Smolensky. 2002. Typological consequences of local constraint conjunction. Proceedings of the West Coast Conference on Formal Linguistics 21. 306–319. Ní Chiosáin, Máire. 1991. Topics in the phonology of Irish. Ph.D. dissertation, University of Massachusetts, Amherst. Okoth-Okombo, Duncan. 1982. Dholuo morphophonemics in a generative framework. Berlin: Dietrich Reimer. Oostendorp, Marc van. 2004. Phonological recoverability in dialects of Dutch. Unpublished ms., Meertens Instituut. Available (June 2010) at www.vanoostendorp.nl/pdf/ recoverable.pdf. Padgett, Jaye. 1997. Candidates as systems: Saussure lives! Paper presented at the Hopkins Optimality Theory Workshop/University of Maryland Mayfest 1997. Padgett, Jaye. 2003. Contrast and post-velar fronting in Russian. Natural Language and Linguistic Theory 21. 39–87. Padgett, Jaye & Marzena Zygis. 2007. The evolution of sibilants in Polish and Russian. Journal of Slavic Linguistics 15. 291–324. Parkinson, Frederick B. 1996. The representation of vowel height in phonology. Ph.D. dissertation, Ohio State University. Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.

Chain Shifts

19

Riggs, Daylen. 2008. Contrast preservation in the Yupik languages. Proceedings of the Western Conference on Linguistics 18. 217–234. Rubach, Jerzy. 1984. Cyclic and Lexical Phonology: The structure of Polish. Dordrecht: Foris. Rubach, Jerzy. 2003. Polish palatalization in Derivational Optimality Theory. Lingua 113. 197–237. Sapir, Edward. 1930. Southern Paiute, a Shoshonean language. Proceedings of the American Academy of Arts and Sciences 65. 1–296. Schendl, Herbert & Nikolaus Ritt. 2002. Of vowel shifts great, small, long and short. Language Sciences 24. 409–421. Schmidt, Deborah. 1996. Vowel raising in Basaa: A synchronic chain shift. Phonology 13. 239–267. Smith, Neil V. 1973. The acquisition of phonology: A case study. Cambridge: Cambridge University Press. Smolensky, Paul. 1993. Harmony, markedness, and phonological activity. Paper presented at the Rutgers Optimality Workshop 1, Rutgers University (ROA-87). Smolensky, Paul. 1997. Constraint interaction in generative grammar II: Local conjunction or random rules in Universal Grammar. Paper presented at the Hopkins Optimality Theory Workshop/University of Maryland Mayfest 1997. Tessier, Anne-Michelle. 2004. Input “clusters” and contrast preservation in OT. Proceedings of the West Coast Conference on Formal Linguistics 23. 759–772. Trudgill, Peter, Elizabeth Gordon & Gillian Lewis. 1998. New dialect formation and Southern Hemisphere English: The New Zealand short front vowels. Journal of Sociolinguistics 2. 35–51. Ultan, Russell. 1970. Some sources of consonant gradation. Working Papers on Language Universals 2. 1–30. Walker, Douglas C. 1970. Diegueño plural formation. Linguistic Notes from La Jolla 4. Wilson, Colin. 2001. Consonant cluster neutralisation and targeted constraints. Phonology 18. 147–197. Yip, Moira. 1980. The tonal phonology of Chinese. Ph.D. dissertation, MIT. Yip, Moira. 2004. Phonological markedness and allomorph selection in Zahao. Language and Linguistics 5. 969–1001. Zonneveld, Wim. 1976. An exchange rule in Flemish Brussels Dutch. Linguistic Analysis 2. 109–114.

74

Rule Ordering Joan Mascaró

1

The bases of rule ordering

The distributional properties of sound in natural languages are explained by appeal to a level of underlying structure in addition to the level of observed phonetic or surface representation (chapter 1: underlying representations), and to a function that maps underlying representations into surface representations. This function has been conceived since the beginning of generative grammar as an ordered set of rules. In this chapter I will first introduce the main properties of rule ordering and the arguments for ordering rules (§1), and I will review various proposals to modify rule ordering in early generative phonology (§2), including cyclic ordering (§3). In §4 I discuss feeding, bleeding, and similar interactions in more detail, §5 discusses serial ordering and parallel approaches, and §6 draws some conclusions. A rule expresses a significant generalization about the sound structure of a given natural language. The rules of generative phonology, as formalized in Chomsky and Halle (1968; SPE) and subsequent work, were formalized adaptations of descriptive statements about phonology of earlier frameworks, even though their function was not the same. Both the relationship of generative rules to statements of descriptive grammars and the reasons for imposing ordering on them can be gathered from the following example, taken from Halle (1962: 57–58). (1a)–(1d) are taken from the description of Sanskrit vowel sandhi in Whitney (1889). The rules in (1e)–(1h) are a formalization of the corresponding generative rules. For simplification, in (1e)–(1h) I have included only the rules that apply to front vowels. (1)

a.

Two similar simple vowels, short or long, coalesce and form the corresponding long vowel. (§126) b. An a-vowel combines with a following i-vowel to e; with a u-vowel, to o. (§127) c. The i-vowels, the u-vowels, and the r before a dissimilar vowel or a diphthong are each converted into its own corresponding semi-vowel, j or v or r. (§129) d. Of a diphthong, the final i- or u-element is changed into its corresponding semi-vowel j or v before any vowel or diphthong: thus e (really ai . . .) becomes aj, and o (that is au . . .) becomes ay. (§131)

The Blackwell Companion to Phonology. Edited by Marc van Oostendorp, Colin J. Ewen, Elizabeth Hume, and Keren Rice. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd. DOI: 10.1002/9781444335262.wbctp0074

Rule Ordering e. f. g. h.

ViVj → Vi( ai → e i → j / __ Vi i → j / Vi __ Vj

2

Vi = Vj Vi ≠ i Vj ≠ i

The similarity of the rules to the descriptive statements is obvious. But, as Halle notices, if the ordering (e)–(g)–(f) is imposed on the rules in (1), “significant simplifications can be achieved.” A similar comment is made by Chomsky and Halle (1968: 18) with respect to ordering: “it [is] possible to formulate grammatical processes that would otherwise not be expressible with comparable generality.” Indeed, the condition on dissimilarity of (1g) can be eliminated, since when (1g) applies, all similar VV sequences will have coalesced by the application of (1e). Moreover, (1h) can be dispensed with, because ViV sequences will not be turned into eV by (1f), since (1g) will have changed the vowel into a glide. We find here one of the main reasons for imposing ordered rules: ordering allows for simplification of grammars and for a better expression of linguistically significant generalizations. Another typical argument in favor of rule ordering is language variation. Since SPE relates underlying and surface representations via a set of ordered rules, it follows that language variation must be due to differences in underlying representations, in the set of rules and in their ordering. A famous example of difference in grammars stemming from different orderings of the same rules is Canadian Raising, an example introduced in Halle (1962: 63–64), based on data from Joos (1942), which is also discussed in Chomsky and Halle (1968: 342).1 In certain Canadian and US dialects the first elements in the diphthongs /a> aÁ/ are raised to [Z> ZÁ] before voiceless consonants.2 At the same time there is regular change of /t/ to [7] in the American English flapping environment. The interaction of these phenomena gives different results in two dialects, A and B. This causes, according to Joos, a dilemma: in a word like writer, which is pronounced [rZ>7Ì] in dialect A, Joos’s generalization that “/a/ is a lower-mid vowel . . . [only] in diphthongs followed by fortis [≈ voiceless] consonants” is not true – and in Joos’s view, descriptive statements are about surface representations, hence true of surface representations. Halle’s solution to the dilemma stems from the recognition that statements of regularities (“rules”) should be true of steps in the derivation, but need not be true of surface representations. This is the case if rules are ordered, and hence the application of a later rule can change the context that conditioned an earlier rule, as in this case, or the result of the rule itself. In other words, rule ordering solves Joos’s dilemma. (2) shows the derivation of typewriter with the diphthong /a>/ both before non-flapping (/p/) and flapping (/t/) environments in both dialects. The statement in (2c) (also in (2f)) is true of surface representations (2d) and (2h), but the rule in (2b) (also in (2g)) is true of (2h), but not of (2d), which contains the sequence [Z>d], if we

1

The opaque case (dialect A) had already been discussed by Harris (1951: 70–71) and Chomsky (1962: 156–157). 2 I transcribe the first vowel of the diphthong as [Z], and the voiced t as [7], following Chambers (1973, 2006); Joos’s phonetic description is slightly different (basically [ô] and [d], respectively). Canadian Raising has generated a great deal of discussion. Kaye (1990) casts some doubts on the existence of dialect B, which are not clearly formulated. Mielke et al. (2003) claim that the difference has been phonemicized, e.g. as /nZjf/ vs. /najv/, but Idsardi (2006) argues convincingly that there are actual alternations.

Joan Mascaró

3

interpret the rule in the sense that “/Z> ZÁ/ appear [phonetically] only before voiceless consonants.” In (2) I simplify the flapping context to V__V. (2)

Dialect A a. b. a → Z / __ [C, −voice] c. t → 7 / V __ V d. output

/ta>pra>tÌ/ tZ>prZ>tÌ tZ>prZ>7Ì [tZ>prZ>7Ì]

Dialect B e. f. t → 7 / V __ V g. a → Z / __ [C, –voice] h. output

/ta>pra>tÌ/ ta>pra>7Ì tZ>pra>7Ì [tZ>pra>7Ì]

Another example of the same argument for imposing ordering on rules, grammars differing only in rule ordering, is examined in Kiparsky (1982b: 65–66). German devoices obstruents in coda position (3a) (chapter 69: final devoicing and final laryngeal neutralization) and simplifies /Ig/ clusters to [I] (3b). Two of the inflectional forms of the adjective meaning ‘long’, lang and lange, contrast in two dialect groups, one showing [laIk], [laIH], the other [laI], [laIH], respectively. Application of any of the two rules renders the other rule inapplicable (a case of mutual bleeding, see §2), therefore only the first rule applies in each ordering in every instance: (3)

a.

Devoicing

!+C # [+obstr] → [−voice] / __ @ $ #

b.

g-deletion

g → Ø / [+nasal] __

c.

Dialect group I Devoicing g-deletion

/laIg/ /laIg+H/ laIk — — laIH

d. Dialect group II g-deletion Devoicing

/laIg/ laI —

/laIg+H/ laIH —

Rule ordering is closely connected to rule application. As shown by Whitney’s example, descriptive grammars and many versions of structuralist phonology implicitly assume simultaneous rule application (see Postal 1968: 140–152). This follows from the assumption that rules (or descriptive statements) are true of surface representation, i.e. they are generalizations about surface representation. In simultaneous rule application, the string is scanned for the structural description of each rule and all the rules whose structural description is met apply simultaneously. Chomsky and Halle (1968: 19) provide an interesting abstract example of simultaneous application, which is compared to rule ordering.3 I adapt 3

Chomsky and Halle’s (1968) example consists of the rules B → X / __ Y and A → Y / __ X and the input representations /ABY/ and /BAX/.

Rule Ordering

4

it with a hypothetical example. Consider rules (4a), (4b), the underlying representations (4c) and (4d), and the results of simultaneous application (4e) and of ordered rules (4f), (4g): (4)

a. b.

t → Œ / __ i e → i / __ Œ Underlying e.

c. /eti/ d. /teŒ/

Surface Simultaneous f. application eŒi tiŒ

Rules ordered g. Rules ordered (a)–(b) (b)–(a) iŒi eŒi tiŒ ŒiŒ

The problem is now empirical, i.e. the question to ask is whether natural languages have input–output relations like (4c), (4d) to (4e), or rather like (4c), (4d) to (4f) or (4c), (4d) to (4g). In the case of ordered rules, the first rule creates a representation that allows the application of the second rule (feeding). With the ordering (4a) < (4b) (“> S and G is active on i (i.e. it distinguishes the set of candidates Gen(i) when it applies) then S is not active on i. For a clear and interesting discussion of the relation between the Elsewhere Condition and PTC, see Bakovio (2006); also Prince (1997).

Rule Ordering

12

After stress has applied to individual words, the compound stress rule locates stressed vowels and maintains primary stress on the leftmost stressed vowel and weakens other stresses by one degree. After assigning vacuously primary stress in the first cycle to [bla1ck], to [boa1rd] and to [era1ser], it applies in the second cycle in the domain B to [black board], yielding [bla1ck boa2rd], and in the last cycle in the domain A to [bla1ck boa2rd era1ser] to give the final [bla1ck boa3rd era2ser]. Cyclicity was later applied in syntax as a result of the elimination of generalized transformations and the generation of embedded sentences by base rules (Chomsky 1965). Later on, Chomsky (1973) proposed a limitation on cyclic application in syntax, the Strict Cycle Condition (SCC, or “strict cyclicity”), by which no rule can apply to a constituent I in such a way as to affect solely a subconstituent of I. Kean (1974) presented two cases that argued for the application of this version of the SCC also in phonology. In (17), for instance, in the second cycle, cycle B, a rule cannot apply to the domain of B if it affects just c′. An actual example is the interaction of Glide Formation and Destressing in Catalan (Mascaró 1976: 24–36). Glide Formation applies to post-vocalic unstressed high vowels. In produirà ‘it will produce’ it cannot apply to [[p7udu’i]1’7a]2 at cycle 1, because post-vocalic /i/ is stressed. At cycle 2, a following stress causes destressing of /i/. Therefore at cycle 3, and at later cycles, the sequence /ui/ meets the structural description of the rule; but /ui/ is entirely within cycle 1 and the SCC blocks application, resulting in [p7uÏui’7a] *[p7uÏuj’7a]. The SCC was further refined in Mascaró (1976: 1–40), as in (19). Case (19b.i) corresponds roughly to the SCC as formulated in Chomsky (1973) and used by Kean (1974). (19)

Given a bracketed expression [. . . [. . . , [. . .]1, . . . ]n−1 . . . ]n, and a (partially ordered) set of cyclic rules C: a.

b.

C applies to the domain [. . .]j after having applied to the domain [. . .]j−1, each rule in C applying in the given order whenever it applies properly in j. Proper application of rules. For a cyclic rule R to apply properly in any given cycle j, it must make specific use of information proper to (i.e. introduced by virtue of) cycle j. This situation obtains if (i), (ii), or (iii) is met: i. R makes specific use of information uniquely in cycle j. That is, it refers specifically to some A in [XAY [. . .]j−1 Z]j or [Z [. . .]j−1 XAY]j. ii. R makes specific use of information within different constituents of the previous cycle which cannot be referred to simultaneously until cycle j. R refers thus to some A, B in [X [. . . A . . .]j−1 Y [. . . B . . .]j−1 Z]j. iii. R makes use of information assigned on cycle j by a rule applying before R.

A states the general procedure for cyclic application; B gives the conditions for proper application: morphologically derived environments in inflection (19b.i), derived environments by compounding or syntax (19b.ii), and rule-derived environments (19b.iii). Effects of derived environments on application of processes, irrespective of the theoretical mechanism they derive from, are usually referred

Joan Mascaró

13

to as derived environment effects (DEE). We just saw a case, the interaction of glide formation and destressing in Catalan, which falls under (19b.i). The rule of /t/ → [s] assibilation in Finnish illustrates both (19b.i) and (19b.ii). Assibilation (20a) applies in morphologically derived environments like (20c): the structural description [ti] is met by material in the root cycle and in the inflected word cycle. It also applies in rule-derived environments (20d): here the structural description [ti] is met because at its cycle of application, the rule of raising (20b) has created it. But it fails to apply in the non-derived environments (20e), because none of the conditions for proper application in (19b) is met: (20)

a. b. c. d. e.

t → s / __ i e → i / __## /halut-i/ → halus-i /vete/ → veti → vesi tila ‘place, room’ æiti ‘mother’ itikka ‘mosquito’

‘wanted’ ‘water (nom sg)’

cf. halut-a cf. vete-næ

‘to want’ ‘water (ess sg)’

An instance of (19b.ii) is the application of glide formation in Central Catalan to vowels of different words. As we have just seen, glide formation applying to post-vocalic high vowels is blocked in produirà [[p7uÏu’i]1’7a]2. Consider now produirà oxidació ‘it will produce oxidation’: (21) Cycle 2

[[[p7uÏu’i]1 ’7a]2 [[p7uÏu’i]1’7a]2 [[p7uÏui]1 ’7a]2

Cycle 3 Cycle 4

[[[p7uÏui]1 ’7a]2 [[[p7uÏui]1 ’7a]2

[[[’DksiÏ]1 ’a]2 ’sjo]3]4 [[’DksiÏ]1 ’a]2 [[uksiÏ]1 ’a]2 [[[uksiÏ]1 H]2 ’sjo]3 [[[uksiÏ]1 H]2 ’sjo]3 [[[uksiÏ]1 H]2 ’sjo]3]4 [[[wksiÏ]1 H]2 ’sjo]3]4

In the second word, at cycle 2, the initial /’D/ is destressed by a following stress and becomes [u] by a rule of vowel reduction. At cycle 4, the sequence /au/ meets the structural description of glide formation and the SCC does not block Glide Formation, the application being proper by (19b.ii), because /au/ is not within the domain of a single previous cycle: /a/ is in cycle 2; /u/ is in cycle 3. Hence the rule applies, yielding [’aw]. It was assumed that the SCC applied to cyclic, obligatory neutralization rules, and dealt with DEEs. These were previously accounted for by the Alternation Condition proposed by Kiparsky (1973b: 65), according to which “neutralization processes apply only to derived forms . . . [i.e.] if the input involves crucially a sequence that arises in morpheme combinations or through the application of phonological processes.” Cyclic application and derived environment effects were reformulated within Lexical Phonology through lexical strata and post-lexical phonology, which correspond to cycles and to the effect of the Elsewhere Condition from which DEEs are derived. In Stratal Optimality Theory (see chapter 85: cyclicity), cycles correspond to strata to which Gen and Eval apply successively. Within OT, output–output faithfulness constraints (Benua 1997) ensure similarity of larger constituents to its inner components. Strict cycle effects (DEEs) are

Rule Ordering

14

also obtained by local conjunction of markedness and faithfulness constraints (Ìubowicz 2002). To see how DEEs are derived from local conjunction, consider the interaction of Velar Palatalization and Spirantization in Polish (Ìubowicz 2002: §3). We find the following descriptive generalizations. Spirantization applies to rule-derived [–] (22a), but not to underlying /–/ (22b); similarly, in (22c) Velar Palatalization applies only to morphologically derived velar + [e i] sequences. (22)

a.

/rog-ek/ b. /ban–-o/ ‘horn’ ‘banjo’

Velar Palatalization

ro–-ek



Spirantization of /–/ output

roÚ-ek roÚ-ek

blocked ban–-o *banÚ-o

c. /xemik-ek/ ‘chemist-dim’ xemiŒ-ek blocked in xe — xemiŒ-ek *œemiŒ-ek

Let us examine rule-derived environments first. Given the ranking *– >> Ident[cont], we will normally have the mapping /–/ → [Ú]. The difference from derived and non-derived environments stems from the fact that in the first case the mapping is /g/ → – → [Ú], whereas in the second case it would be /–/ → [Ú]. The candidate with [–] deriving from /g/ will violate both *– and Ident[cor], hence also the constraint conjunction *– & Ident(cor). But if /–/ is underlying, *– will be violated, but not Ident[cor], therefore the conjunction *– & Ident[cor] will be satisfied. For morphologically derived environments, as in the example in (22c), Ìubowicz uses conjunction of markedness and Anchor. Velar palatalization applies to the morphologically derived sequence [k–e], but not to the non-derived sequence /xe/ in /xemik-ek/. Since the velar /k/ is stem-final, but not syllable-final, in [xe.mi.k]Stem-ek, the sequence k]Stem-e will violate R-Anchor(Stem, s), and it will also violate Pal, the constraint against velar + {e i} sequences that forces palatalization. It will therefore also violate [Pal & R-Anchor(Stem, s)]D. But since morphologically underived /xe/ satisfies R-Anchor(Stem, s), the conjunction will be satisfied in this case and palatalization will not take place.

4

Rule interaction, ordering, and applicability: Feeding and bleeding

In a system in which rules are ordered, rules can interact: both the applicability and the result of application of a rule can depend on the application of previous rules. The notions of feeding and bleeding that I made reference to in §2 were introduced by Kiparsky (1968) in order to explain the direction of linguistic change. These concepts have been widely used since. In this section I examine them in some detail. Since it is not uncommon to detect terminological inadequacies in the literature, in order to avoid confusion I will start with some terminological observations. In Kiparsky’s original terminology, feeding and bleeding relations between rules are distinguished from feeding order and bleeding order. Feeding and bleeding relations (or the terms “X feeds/bleeds Y”) are defined as functional relations

Joan Mascaró

15

between two rules, with no actual ordering between them presupposed. A feeds B if A “creates representations to which B is applicable”; A bleeds B if A “removes representations to which B would otherwise be applicable,” where “representations” means possible representations (Kiparsky 1968: 37, 39). The terms feeding order and bleeding order are relations between rules that are in a specific order. Since feeding and bleeding relations are functional relations between rules, whether two rules are in a feeding or bleeding relation can be determined by mere inspection of the rules.7 I will keep this distinction (feeding/bleeding relation vs. feeding/ bleeding order), but I will reserve the use of the predicates feed and bleed applied to arguments A and B for feeding/bleeding order, and I will make use of the predicates p-feed and p-bleed (“p” for “potentially”) in the case of feeding/bleeding relations. (23) provides an illustration using our previous German example (3): (23)

German, group II (g-deletion < Devoicing) a.

b.

Feeding/bleeding relation

A p-feeds/ p-bleeds B

devoicing p-bleeds g-deletion g-deletion p-bleeds devoicing

Feeding/bleeding order

A feeds/ bleeds B

g-deletion bleeds devoicing

Devoicing g-deletion

c.

+C [obstr] → [–voice] / __ ! # @ # $ g → Ø / [+nasal] __

Dialect group II g-deletion devoicing

/laIg/ laI —

Devoicing p-bleeds g-deletion by devoicing g in [+nasal] + g, and g-deletion p-bleeds devoicing by deleting g in the same context. Given the ordering g-deletion < devoicing, g-deletion bleeds devoicing, as shown in the derivation in (23). Feeding and bleeding relations can be formally defined as follows: (24)

Feeding and bleeding relations a.

b.

7

Rule A is in feeding relation with respect to B (or A p-feeds B) iff there is a possible input I such that B cannot apply to I, A can apply to I, and B can apply to the result of applying A to I. Rule A is in bleeding relation with respect to B (or A p-bleeds B) iff there is a possible input I such that B can apply to I, A can apply to I, and B cannot apply to the result of applying A to I.

Of course one might want to to relativize these notions to a given set of representations, e.g. the lexicon. For instance, a rule A that centralizes the place of articulation of all consonants in word-final position feeds a rule B that vocalizes /l/ to [w] in coda position, because it can create the representation . . . Vl]Coda## from /. . . VO]Coda##/, to which B is applicable. But in a language with a single lateral l, the feeding interaction will never take place. In such cases, in order to avoid terminological ambiguities we can say that A feeds B, but A doesn’t feed B for lexicon L, or that A doesn’t l-feed B. Similarly, if we relativize feeding and bleeding to specific derivations, we can say that a rule A does/does not d-feed or d-bleed a rule B, meaning that the feeding or bleeding relation is/is not actually instantiated in that particular derivation.

Rule Ordering

16

It is important to notice that in the definitions in (24) “apply” is usually interpreted as “apply non-vacuously.” In the German example in (3), in dialect group I, Devoicing bleeds g-deletion, (/laIg/ → /laIk/ → (n/a)). But for the word Bank ‘bank’, whose derivation is /baIk/ → (vacuous devoicing) baIk → (n/a), we don’t want to say that Devoicing bleeds g-deletion, because the input to Devoicing didn’t meet its structural description. Kiparsky’s (1968) terms “creates” and “removes,” cited above, already indicate that vacuous application doesn’t count. On the other hand, feeding order and bleeding order (or the terms A feeds B and A bleeds B) refer to relations between two rules A and B which presuppose both feeding/bleeding relations and the specific ordering A < B (i.e. A precedes B) in the grammar. Most definitions are formulated for cases in which A immediately precedes B, or cases in which intervening rules don’t interact with A and B. In such a situation the definitions become simpler: A is in feeding/bleeding order with respect to B iff A < B and A p-feeds/bleeds B. For the general case the definitions have to be refined as follows: (25)

Feeding order and bleeding order Let G be a grammar, A, B rules, and D a derivation of G. a. A is in feeding order with respect to B (or A feeds B) in grammar G iff i. A < B ii. There is a derivation D by G such that B would not apply to the input to A, and B applies to the output of A and would apply to all intermediate stages up to its own input. b. A is in bleeding order with respect to B (or A bleeds B) in grammar G iff i. A < B ii. There is a derivation D by G such that B would apply to the input to A, and B does not apply to the output of A and would not apply to all intermediate stages up to its own input.

When A immediately precedes B or in cases where intermediate rules don’t interact we get derivations like those in (26): (26a.i) is in feeding order with respect to (26a.ii) because the second rule (26a.ii) wouldn’t apply to AQ, but applies to BQ, the output of the first rule (26a.i); (26b.i) is in bleeding order with respect to (26b.ii) because the second rule (26b.ii) would apply to AQ, but doesn’t apply to BQ, the output of the first rule (26b.ii). (26)

a. Feeding order (No intervening interacting rules) i. A → B / __ Q ii. Q → R / B __

AQ BQ BR

b.

Bleeding order

i. ii.

A → B / __ Q Q → R / A __

AQ BQ —

The case of feeding order for two adjacent rules can be illustrated with the interaction of /æ£/ → [a(] and Umlaut in a group of Swiss German dialects (Kiparsky 1982b: 190). Bleeding order, also for adjacent rules, can be illustrated with our earlier example (2e)–(2h), Canadian Raising, in the word writer in dialect B:

Joan Mascaró

17 (27)

a.

Feeding: Swiss German (dialect group I) /æ£-li/ ‘egg-dim’ i. æ£ → a( / __ ! C # a(li @## $ ii. Umlaut (fronting) æli (does not apply non-vacuously to /æ£-li/)

b.

Bleeding: Canadian Raising (dialect B) /ra>tÌ/ i. t → 7 / V __ V ra>7Ì ii. a → Z / __ [C, −voice] — (applies non-vacuously to /ra>tÌ/)

Consider now the cases with interacting rules intervening between (i) and (ii) that motivate the definitions in (25). (28a) exemplifies feeding cases and (28b) bleeding cases. The rules (i) and (ii) are the rules in feeding/bleeding relation; (iii) is the intervening rule. (28)

a. Feeding order (Intervening interacting rules) QA i. Q → R / __ A RA iii. A → B / R __ RB ii. A → C / R __ —

b.

Bleeding order

i. iii. ii.

QA Q → R / __ A RA R → Q / __ A QA A → B / Q __ QB

In the feeding example, rule (i) p-feeds rule (ii) and precedes (ii), but given conditions (25a.ii) and (25b.ii), it does not feed rule (ii), because some rule ordered between them, namely (iii), undoes the change that caused the feeding (it bleeds rule (ii)). In terms of the definitions in (25), there are representations between the two rules, in particular the input to rule (ii), to which the second rule cannot apply. Similarly, in the bleeding example, rule (i) p-bleeds rule ii. and would indeed bleed rule (ii), if it were not for (iii), which feeds rule (ii). Of course a pair of rules can show non-feeding or non-bleeding interactions like those in (28) in some derivations, but feeding or bleeding interactions in other derivations. As already indicated in note 7, I will use the terms d-feed and d-bleed when feeding and bleeding is relativized to a specific derivation. English stress provides an actual example for bleeding. Stress is assigned twice in words like context [’kAn‘tekst] or Ahab [’e>‘hæb]. But after a light syllable the second stress is removed (the “Arab rule”; Ross 1972), as in Arab [’ærHb], and the destressed vowel reduces to [H]. Stress bleeds vowel reduction, but in the derivation of Arab destressing undoes the bleeding (/ærHb/ → ’æ‘ræb → ’æræb → [’ærHb]). Here we must say that stress bleeds reduction, because there are derivations that show actual bleeding, as in [’e>‘hæb], but if we relativize bleeding to specific derivations, some of them do not show a bleeding interaction: in these derivations stress doesn’t d-bleed reduction.8 There is yet another interesting case of p-feeding/bleeding with no actual feeding/bleeding. If a rule A p-feeds a rule B and precedes it, and there is a 8

Notice that the example is adequate only if we assume that the underlying /ærHb/ has no stress structure and the unstressed character of the second vowel is introduced by the stress rule.

Rule Ordering

18

representation to which A applies and B would not apply, it is possible, according to (25), to have no feeding order even if, contrary to what happens in the previous examples, B actually applies. The same is true, mutatis mutandis, of bleeding order. This happens in Duke of York derivations (Pullum 1976; McCarthy 2003), which are derivations in which a rule reverses the action of a previous rule, e.g. . . . A . . . → . . . B . . . → . . . A . . . Consider the derivations in (29), which contain a Duke of York subderivation, highlighted in bold (notice that (28b) above is also an instance of a Duke of York derivation): (29)

a. Feeding order b. Bleeding order (Intervening interacting rules; Duke of York derivations) i. iii. iv. ii.

Q → R / __ A A → B / R __ B → A / R __ A → C / R __

QA RA RB RA BC

i. iii. iv. ii.

Q → R / __ A R → Q / __ A A → B / Q __ A → C / Q __

QA RA QA QB __

In (29a) (i) p-feeds and precedes (ii), and (ii) does apply, but (i) does not feed (ii), because there is an intermediate representation to which the second rule would not apply, namely RB, created by (iii). In fact it is the other intervening rule, (iv), that now feeds (i). Similarly, in (29b) (i) p-bleeds the last rule (ii), but it does not bleed it, even if the rule does not apply, because of the intermediate representation QA created by (iii), to which the rule would apply. Here it is rule (iv) which actually bleeds (ii). Feeding and bleeding interactions have been used in different contexts and for different purposes, so it is conceivable to have slightly different changes in the definitions. One such change is desirable in cases in which usual definitions do not yield a feeding/bleeding relation, and yet this relation is intuitively correct. Consider a case like (30), in which glide formation, vowel reduction, and destressing interact in Central Catalan. (30a)–(30c) show that glide formation affects postvocalic high unstressed vowels (30a), but not non-high vowels (30b), where Osiris is a lexical exception to vowel reduction, or stressed vowels (30c). It also affects high unstressed vowels that are not underlying, as in (30d). a. ser[’a] [u]mid ‘it will be wet’

(30) Destressing Vowel Reduction Glide Formation output

’a w ser[’a] [w]mid c.

Destressing Vowel Reduction Glide Formation output

ser[’a] [’u]til ‘it will be useful’

ser[’a] [’u]til

b.

ser[’a] [o]siris ‘it will be Osiris’

ser[’a] [o]siris d. ser[’a] [’D]ciós ‘it will be idle’ ’a D ’a u ’a w ser[’a] [w]ciós

Joan Mascaró

19

Notice now that the structural description V V[+high, −stress] is met in (30d), because vowel reduction has turned [D] into a high vowel, but also because destressing has created the other condition for gliding. In such a case we want to say that these two rules jointly feed glide formation. The definitions in (25) can be changed accordingly, to meet such situations. Notice also that a rule can stand in both feeding and bleeding order with respect to another rule. In Majorcan Catalan stops assimilate in place to a following consonant (place assimilation), and the second consonant in a two-consonant coda cluster deletes before another consonant (cluster simplification). As shown in (31), deletion of the medial C causes bleeding when the medial C is the target of assimilation, and feeding when it intervenes between the trigger and the target of assimilation. (31)

a. Place assimilation Place X

Place Y

Place Y →

[−cont]

C

[−cont]

C

b. Cluster simplification Coda C→Ø/C — C c. input Cluster simplification Place assimilation output

Bleeding /’bujd ’t7ens/ ‘I empty trains’ ’buj ’t7ens — ’buj ’t7ens

Feeding /’t7ens ’bujds/ ‘empty trains’ ’t7en ’bujds ’t7em ’bujds ’t7em ’bujts

It should be observed that the fact that two rules A, B do not have a feeding or bleeding interaction does not mean that they don’t interact. In (32) rule (a) deletes final consonants, while (b) stresses the final syllable if it is heavy, otherwise the penult. Different orderings give different results, but the interaction isn’t either a feeding or a bleeding relation. (32)

a. b.

C → Ø / __ ## V → [+stress] / __ C(V)##

a b /satopek/ → satope → sa’tope b a /satopek/ → sato’pek → sato’pe

I will now examine counterfeeding and counterbleeding. These notions refer only to rules that are in a specific order (potential situations don’t make sense in this context). Basically, a counterfeeding/bleeding order is an order that would be feeding/bleeding if the order of the rules were reversed. Since there is some confusion in the use of the predicates, I will follow the practice in Koutsoudas et al. (1974) and use as subject of “counterfeed/counterbleed” the second rule in the ordering, e.g. B counterfeeds A means that A < B, and B would feed A if B < A.

Rule Ordering (33)

20

a.

A and B are in counterfeeding order (B counterfeeds A) in grammar G iff i. A < B ii. B p-feeds A

b.

A and B are in counterbleeding order (B counterbleeds A) in grammar G iff i. A < B ii. B p-bleeds A

Counterfeeding order can be illustrated with the same processes of Swiss German presented in (27) in another dialect group, group II, which shows the opposite ordering. Counterbleeding is illustrated with Canadian Raising in dialect A (2a): (34)

a.

Counterfeeding: Swiss German (dialect group II) /æ£-li/ ‘egg-dim’ i. Umlaut (fronting) — ii. !C# æ£ → a( / __ @ $ a(li ## The opposite, feeding ordering would yield /æ£-li/ → a(-li → æ(-li

b.

Counterbleeding: Canadian Raising (dialect A) /ra>tÌ/ i. a → Z / __ [C, −voice] rZ>tÌ ii. t → 7 / V __ V rZ>7Ì The opposite, bleeding ordering would yield /ra>tÌ/→ ra>7Ì→ (n/a)

“Counter” orderings have important properties. Assume the simple case where rules A and B are adjacent, and A < B. Since in feeding order (B < A) there must be at least one input I such that B is applicable to I, A is not applicable to I, and A is applicable to the output of B (35a), it follows that in the corresponding counterfeeding order where A < B there must be an input (namely I) to which the first rule, now A, does not apply and to which the second rule, now B, applies (35b). Hence the generalization expressed by A does not appear in the output: we can say, using McCarthy’s (1999) terms, that it is not surface-true, it is not true of the output of B, usually the surface representation. In the bleeding order B < A there must be by definition at least one input I such that both A and B are applicable to I (giving different results, I′ and I″, respectively), and A is not applicable to the output of B (35c). It follows that in the corresponding counterbleeding order A < B there can be an input to which the first rule, now A, applies and to which the second rule, now B, might apply and change the context of application of the first rule (35d). Hence the generalization expressed by A about the input I does not appear in the output: following McCarthy we can say that it is not surface-apparent, because the generalization A about I is not apparent in the output of A, usually the surface representation. (35)

a. Feeding I B. I′ A. I″

b.

Counterfeeding I A. — B. I″′

c.

Bleeding I B. I′ A. —

d.

Counterbleeding I A. I″ B. — /I″′

Joan Mascaró

21

In our previous example in (34) for counterfeeding in Swiss German, the fronting dictated by Umlaut is not true in the surface form [a(li]; for the counterbleeding in Canadian Raising, dialect A, the fact that [Z] derived from /a/ appears before voiceless consonants is not apparent in the surface form [rZ>7Ì]. It is also important to notice that the existential quantification in the definitions in (25) of feeding and bleeding orders (hence also of counterfeeding and counterbleeding orders) allows for the existence of multiple feeding and bleeding relations between two rules. For feeding, and given two ordered rules A < B, the requirement (25a.ii) that there be an input I whose derivation D meets the conditions required in (25a.ii) does not prevent the existence of another input I′ that meets the condition (25b.ii) for bleeding. Hence A can both feed and bleed B (and B can both counterfeed and counterbleed A).

5

Serial and parallel approaches

Rule interactions of the sort just discussed have become important in the theoretical comparative analysis of serial and parallel approaches, in particular in relation to opaque rule interactions. If we compare a standard serial theory like SPE with a parallel theory based on constraints like Optimality Theory (OT), pure feeding and pure bleeding order effects (i.e. those that are not also counterfeeding or counterbleeding) are transparent interactions and can be derived from both. Consider the well-known case of e-raising and /t/ → [s] interaction in Finnish (Kiparsky 1973b: 166–172), partially repeated from (20): (36) a. b.

e → i / __## t → s / __ i

vete ‘water-nom sg’

halut-i ‘wanted’

veti vesi

— halus-i

Because both (36a) and (36b) are statements that are true of surface forms, constraints of the form *e##, *ti, dominating conflicting faithfulness constraints, together with other constraints determining the choice of [i] and [s], will derive the output of /vesi/, /halus-i/. But counterfeeding and counterbleeding are opaque interactions and cause problems for a parallel approach. A process (37a) is opaque (Kiparsky 1973b: 79) to the extent that there are phonetic forms in (37b) or (37c); otherwise it is transparent. The derivations (37d) and (37e) illustrate (37b) and (37c), respectively. (37)

a.

Rule: A → B / C __ D

Opaque surface forms b. A in the environment C __ D c. B derived by (a) in an environment different from C __ D d. /EAD/ A → B / C __ D — E → C / __ A CAD e. /CAD/ A → B / C __ D CBD C → E / __ A EBD

Rule Ordering

22

In (37d) the generalization “A does not appear in C __ D; B appears instead” expressed by (37a) is not surface-true; the rule underapplies with respect to surface representations. In (37e) the generalization “underlying (or intermediate) A is represented by B in C __ D” is not true of the derivation, it is not surfaceapparent; the rule overapplies with respect to surface representations, since it applies outside its environment. To illustrate with a real example, consider counterfeeding in Madurese (Austronesian, Indonesia) (McCarthy 2002: 174–175). Nasality spreads rightwards onto following vowels, but is blocked by oral consonants, and voiced stops delete after a nasal (chapter 78: nasal harmony): (38)

a. b. c.

V → [+nas] / N __ /b d g/ → Ø / N __ Surface

/naIga?/ nãIga? nãIa? [nãIa?]

/naIa?/ nãIã? — [nãIã?]

In the first derivation, rule (38c) has deleted an oral consonant and has thus partially changed the context of application of rule (38b); rule (38b) underapplies, because if it did apply to the surface representation it would nasalize the second vowel, *[nãIã?]. The generalization that a nasal vowel nasalizes following vowels across non-oral consonants is not surface-true. Such an opaque interaction is derivable in an ordered rule system, but not in a system in which markedness generalizations are about surface forms. Consider now a model like OT. For an input /nãIa?/ (cf. the second derivation in (38)), the constraint hierarchy must favor candidate [nãIã?] over candidate *[nãIa?] (nasalization spreads across non-oral consonants). Therefore it will also favor the non-opaque candidate *[nãIã?] over candidate [nãIa?] if the input is /nãIga?/. Similar considerations apply to counterbleeding opacity. Consider our earlier example, Canadian Raising in dialect A. The change /a>t/ → [Z>t] does not appear as such in the phonetic representation of writer, because the second rule has modified the result of the change, turning the triggering voiceless /t/ into [7]: (39)

(39)  a.                            /rajtər/     /tajp/
      b. aj → ʌj / __ [C, −voice]   rʌjtər       tʌjp
      c. t → ɾ / V __ V             rʌjɾər       tʌjp
                                    (ʌjɾ, not ʌjt)

Here, in order to obtain the transparent [tʌjp] in type, both *aj[C, −voice] and *VtV must be active. But for writer the input /rajtər/, where both constraints are relevant, cannot have as output [rʌjɾər], because the candidate [rajɾər] also satisfies both markedness constraints and is, in addition, more faithful to the input:

(40)  /rajtər/          *aj[C, −voice]   *VtV   Faith[aj]   Faith[t]
      a.   rajtər            *!            *
      b.   rʌjtər                          *!       *
      c. ☞ rajɾər                                                *
      d.   rʌjɾər                                   *!           *
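Parallel evaluation over (40) can be mimicked by comparing violation vectors lexicographically, which is equivalent to strict constraint domination. The violation counts below are transcribed from the tableau; the encoding itself is only an illustrative sketch.

```python
# Columns: *aj[C,-voice], *VtV, Faith[aj], Faith[t], ranked as in (40).
CANDIDATES = {
    "rajtər": (1, 1, 0, 0),   # a.
    "rʌjtər": (0, 1, 1, 0),   # b.
    "rajɾər": (0, 0, 0, 1),   # c.
    "rʌjɾər": (0, 0, 1, 1),   # d.
}

def optimal(candidates):
    # Tuples compare lexicographically, so the highest-ranked constraint
    # on which candidates differ decides -- exactly OT's evaluation.
    return min(candidates, key=candidates.get)

print(optimal(CANDIDATES))  # rajɾər: the transparent candidate wins,
                            # not the opaque [rʌjɾər] of dialect A
```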

6 Conclusion

Rules are generalizations about the distribution of sounds in natural languages. Rule ordering is a specific theory about how these generalizations interact to derive a surface representation. The intensive study of many phonological systems using rule ordering has not only produced a rich body of descriptive work, but has also unveiled many deep properties of phonological systems and many theoretical problems that go beyond the model that generated them. When the problem of the theoretical status of phonology was first addressed seriously, it was immediately realized that phonological generalizations could not have two properties at the same time: they could not be absolute generalizations and generalizations about the surface representation. In other words, they could not map a lexical representation to a surface representation in one step (simultaneous rule application), as illustrated by Joos’s paradox discussed at the beginning of §1. The response to this fact was that the requirement that generalizations be true of surface representations should be abandoned, and hence that phonological processes had to be ordered. The conviction of many present-day phonologists that the right response is to abandon the other requirement, i.e. that generalizations be absolute, and keep the idea that they apply to surface representations, has been made possible by many decades of work in a framework based on rule ordering. Even if many things have changed since the days in which a phonological description could be based on a system with a depth of ordering of 20 or 30 (i.e. 20 or 30 rules that had to be linearly ordered),9 serial approaches have not achieved a total elimination of ordering through mechanisms like the ones described in §2.3. At the same time, many of the properties of phonological systems that have been discovered as the result of work on rule ordering – the existence of opacity, disjunctivity as predicted by the Elsewhere Condition, derived environment effects, and many morphology–phonology interactions – are still important problems that will stimulate further research, for both serial and parallel approaches.

9 “In the segment of the phonological component for Modern Hebrew presented in Chomsky (1951), a depth of ordering that reaches the range of twenty to thirty is demonstrated and this is surely an underestimate” (Chomsky 1964: 71).

REFERENCES

Anderson, Stephen R. 1969. West Scandinavian vowel systems and the ordering of phonological rules. Ph.D. dissertation, MIT.
Anderson, Stephen R. 1974. The organization of phonology. New York: Academic Press.
Anderson, Stephen R. & Paul Kiparsky (eds.) 1973. A Festschrift for Morris Halle. New York: Holt, Rinehart & Winston.
Baković, Eric. 2006. Elsewhere effects in Optimality Theory. In Eric Baković, Junko Itô & John J. McCarthy (eds.) Wondering at the natural fecundity of things: Essays in honor of Alan Prince, 23–70. Santa Cruz: Linguistics Research Center, University of California, Santa Cruz. Available at http://repositories.cdlib.org/lrc/prince/4.
Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst. Published 2000, New York: Garland.


Chafe, Wallace L. 1968. The ordering of phonological rules. International Journal of American Linguistics 34. 115–136.
Chambers, J. K. 1973. Canadian Raising. Canadian Journal of Linguistics 18. 131–135.
Chambers, J. K. (ed.) 1975. Canadian English: Origins and structures. Toronto: Methuen.
Chambers, J. K. 2006. Canadian Raising retrospect and prospect. Canadian Journal of Linguistics 51. 105–118.
Chomsky, Noam. 1951. Morphophonemics of modern Hebrew. M.A. thesis, University of Pennsylvania. Published 1979, New York: Garland.
Chomsky, Noam. 1962. A transformational approach to syntax. In Archibald A. Hill (ed.) Proceedings of the 3rd Texas Conference on Problems of Linguistic Analysis in English, 124–158. Austin: University of Texas Press.
Chomsky, Noam. 1964. Current issues in linguistic theory. The Hague & Paris: Mouton.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1973. Conditions on transformations. In Anderson & Kiparsky (1973), 232–286.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Chomsky, Noam, Morris Halle & Fred Lukoff. 1956. On accent and juncture in English. In Morris Halle, Horace Lunt, Hugh MacLean & Cornelis van Schooneveld (eds.) For Roman Jakobson: Essays on the occasion of his sixtieth birthday, 65–80. The Hague: Mouton.
Halle, Morris. 1959. The sound pattern of Russian: A linguistic and acoustical investigation. The Hague: Mouton.
Halle, Morris. 1962. Phonology in generative grammar. Word 18. 54–72.
Halle, Morris. 1995. Comments on Luigi Burzio’s “The rise of Optimality Theory.” Glot International 1. 9–10.
Harris, Zellig S. 1951. Methods in structural linguistics. Chicago: University of Chicago Press.
Hooper, Joan B. 1976. An introduction to natural generative phonology. New York: Academic Press.
Hyman, Larry M. 1993. Problems for rule ordering in Bantu: Two Bantu test cases. In John A. Goldsmith (ed.) The last phonological rule: Reflections on constraints and derivations, 195–222. Chicago & London: University of Chicago Press.
Idsardi, William J. 2006. Canadian Raising, opacity, and rephonemicization. Canadian Journal of Linguistics 51. 119–126.
Joos, Martin. 1942. A phonological dilemma in Canadian English. Language 18. 141–144. Reprinted in Chambers (1975), 79–82.
Kaye, Jonathan. 1990. What ever happened to dialect B? In Joan Mascaró & Marina Nespor (eds.) Grammar in progress: Glow essays for Henk van Riemsdijk, 259–263. Dordrecht: Foris.
Kean, Marie-Louise. 1974. The strict cycle in phonology. Linguistic Inquiry 5. 179–203.
Kenstowicz, Michael & Charles W. Kisseberth. 1977. Topics in phonological theory. New York: Academic Press.
Kiparsky, Paul. 1968. Linguistic universals and linguistic change. In Emmon Bach & Robert T. Harms (eds.) Universals in linguistic theory, 171–202. New York: Holt, Rinehart & Winston. Reprinted in Kiparsky (1982b), 13–43.
Kiparsky, Paul. 1973a. “Elsewhere” in phonology. In Anderson & Kiparsky (1973), 93–106.
Kiparsky, Paul. 1973b. Phonological representations: Abstractness, opacity, and global rules. In Osamu Fujimura (ed.) Three dimensions of linguistic theory, 57–86. Tokyo: Taikusha.
Kiparsky, Paul. 1982a. Lexical morphology and phonology. In Linguistic Society of Korea (ed.) Linguistics in the morning calm, 3–91. Seoul: Hanshin.
Kiparsky, Paul. 1982b. Explanation in phonology. Dordrecht: Foris.
Kisseberth, Charles W. & Mohammad Imam Abasheikh. 1975. The perfective stem in Chimwi:ni and global rules. Studies in African Linguistics 6. 249–266.
Koutsoudas, Andreas, Gerald Sanders & Craig Noll. 1974. The application of phonological rules. Language 50. 1–28.


Łubowicz, Anna. 2002. Derived environment effects in Optimality Theory. Lingua 112. 243–280.
Mascaró, Joan. 1976. Catalan phonology and the phonological cycle. Ph.D. dissertation, MIT. Published 1978, Indiana University Linguistics Club.
McCarthy, John J. 1999. Sympathy and phonological opacity. Phonology 16. 331–399.
McCarthy, John J. 2002. A thematic guide to Optimality Theory. Cambridge: Cambridge University Press.
McCarthy, John J. 2003. Sympathy, cumulativity, and the Duke-of-York gambit. In Caroline Féry & Ruben van de Vijver (eds.) The syllable in Optimality Theory, 23–76. Cambridge: Cambridge University Press.
Mielke, Jeff, Mike Armstrong & Elizabeth Hume. 2003. Looking through opacity. Theoretical Linguistics 29. 123–139.
Myers, Scott. 1987. Vowel shortening in English. Natural Language and Linguistic Theory 5. 485–518.
Myers, Scott. 1991. Persistent rules. Linguistic Inquiry 22. 315–344.
Postal, Paul. 1968. Aspects of phonological theory. New York: Harper & Row.
Prince, Alan. 1997. Elsewhere and otherwise. Glot International 2. 23–24 (ROA-217).
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Pullum, Geoffrey K. 1976. The Duke of York gambit. Journal of Linguistics 12. 83–102.
Ross, John R. 1972. A reanalysis of English word stress. In Michael K. Brame (ed.) Contributions to generative phonology, 229–323. Austin: University of Texas Press.
Saporta, Sol. 1965. Ordered rules, dialect differences, and historical processes. Language 41. 218–224.
Whitney, William Dwight. 1889. Sanskrit grammar, including both the classical language, and the older dialects of Veda and Brahmana. 2nd edn. Cambridge, MA: Harvard University Press. Reprinted 1975, Cambridge, MA: Harvard University Press.

75 Consonant–Vowel Place Feature Interactions

Jaye Padgett

1 Introduction

Both consonants and vowels are formed with constrictions in the oral cavity, made by the lips, the tongue blade, the tongue body, and/or the tongue root. Since they make demands on the same organs, it should not be surprising that the place features of consonants can influence those of vowels and vice versa. Indeed, such interactions are common: consonants and vowels frequently assimilate in place to one another, or dissimilate. But the empirical territory is not simple, and attempts to understand consonant–vowel place interactions (henceforth “C–V interactions”) have led to much unresolved debate in phonological theory. The questions most debated have had to do with the nature of the phonological features we assume, with questions of feature structure and with claims about the locality of phonological processes. However, as the field of phonology gravitated toward questions of constraint interaction under the influence of Optimality Theory (Prince and Smolensky 1993), attention toward these representational questions faded without having been resolved. Whatever the theoretical framework though, the empirical puzzles underlying the debate about C–V interactions remain, and remain interesting. The discussion in this chapter will necessarily reflect the open-endedness of the historical discussion, as well as the framework in which that discussion was held – autosegmental phonology and feature geometry. §2 begins by presenting a typology of C–V interactions. §3 puts forward an influential model of feature geometry as a point of departure and reviews the challenges raised for that model by C–V interactions. §4 discusses a prominent approach to these challenges, a “unified feature” approach to consonants and vowels advocated by Herzallah (1990), Clements (1991), Hume (1994, 1996), Clements and Hume (1995), and others. In §5 we pause to consider issues of locality and transparency in C–V interactions. §6 covers an alternative to the unified feature approach, due to Ní Chiosáin and Padgett (1993) and Flemming (1995, 2003), called the “inherent vowel place” approach here. §7 concludes.

2 A typology of C–V interactions

The typology given here is not meant as an exhaustive survey of the kinds of C–V interaction known. Instead the goal is to classify processes according to the challenges they have presented for phonological theory. In particular, a key distinction will be made between “within-category” C–V interactions and “cross-category” C–V interactions.1 Also, for space reasons the main focus will be on assimilations (chapter 81: local assimilation), with only occasional reference made to dissimilatory cases (see chapter 60: dissimilation).

1 These terms are borrowed from Clements (1991). Compare the “Type I” vs. “Type II” distinction of Ní Chiosáin and Padgett (1993).

2.1 Within-category interactions

It may seem incoherent to posit “within-category” interactions between the distinct categories of consonant and vowel. However, it is well known that consonants can have secondary articulations that are essentially vocalic in nature: vowel- or glide-like gestures, produced along with a consonant’s primary place of articulation. Some representative examples are illustrated in (1) (see also chapter 29: secondary and double articulation; chapter 71: palatalization; chapter 121: slavic palatalization).2

(1)  (Semi-)vocalic secondary articulations
     labialization        tʷ
     palatalization       tʲ
     velarization         tˠ
     pharyngealization    tˤ

2 This presentation simplifies reality in some ways. For example, sounds transcribed Cʷ or Cˠ might be labial-velarized and not just labialized or velarized. In addition, “pharyngealized” sounds are more accurately described as “uvularized” in at least some cases (McCarthy 1994).

Indeed, glides themselves are consonants with vocalic properties (chapter 15: glides). “Within-category” interactions are those between a vowel and another (semi-)vocalic element, whether the latter is a secondary articulation or a primary one (a glide). Let us begin with interactions between vowels and glides. The examples in (2) are from Kabardian (Colarusso 1992: 32–33). Kabardian has a “vertical” vowel system arguably consisting of only the two phonemes /ə a/. These vowels assimilate in backness and roundness to a following coda glide. According to Colarusso, the triggering glide is elided in all but careful speech, with some compensatory lengthening (not shown). Effects like this of glides on vowels, affecting either vowel color (backness and/or roundness) or height (chapter 19: vowel place; chapter 21: vowel height), seem common in languages.

(2)  /q’əw/   [q’uw]   ‘swan’
     /psaw/   [psow]   ‘alive’
     /bəj/    [bij]    ‘enemy’
     /tsaj/   [tsej]   ‘one of wool (kind of coat)’

Turning to vocalic secondary articulations, non-low short vowels in Irish are front before palatalized consonants and back before non-palatalized consonants; the latter are velarized. The symbols “I/E” denote underlying high and mid vowels (respectively) of indeterminate backness.

(3)  /mˠIdʲ/   [mˠɪdʲ]   ‘we/us’       /pˠIntˠ/   [pˠʊntˠ]   ‘pound’
     /sʲIvʲ/   [ʃɪvʲ]    ‘you (pl)’    /skʲIbˠ/   [skʲʊbˠ]   ‘snatch’
     /tˠEtʲ/   [tˠetʲ]   ‘smoke’       /bˠEsˠ/    [bˠʌsˠ]    ‘palm (of hand)’
     /tʲEpʲ/   [tʲepʲ]   ‘fail’        /lʲEmˠ/    [lʲʌmˠ]    ‘with me’

Similarly, labialized consonants can cause a neighboring vowel to be round, as in Kabardian /dəúw/ → [doúw] ‘thief’ (Colarusso 1992: 30). In a case involving pharyngealization (or uvularization; see note 2), emphatic consonants in Palestinian Arabic cause /a/ to ablaut to [u] instead of [i] in first measure imperfect verbs (Herzallah 1990): the imperfect form of [naÏˤam] ‘compose’ is [ji-nÏˤum] rather than the expected *[ji-nÏˤim] (cf. [katab], [ji-ktib] ‘write’). Herzallah argues that secondary pharyngealization involves a component of backness that spreads to the vowel in these cases. In a more typical case of emphasis spread, vowels in Ayt Seghrouchen Tamazight Berber are backed and lowered next to emphatic consonants (Rose 1996), as shown in (4). Rose argues that emphasis spread is the spreading of the feature [RTR] ([Retracted Tongue Root]) (see also chapter 25: pharyngeals; chapter 77: long-distance assimilation of consonants).

(4)  a. [izi]    ‘fly’            b. /izˤi/     [ezˤe]     ‘bladder’
        [llef]   ‘to divorce’        /tˤtˤef/   [tˤtˤef]   ‘to hold’
        [nÏu]    ‘to be shaken’      /nÏˤu/     [nÏˤo]     ‘to cross’

Consonants commonly acquire vocalic secondary articulations by assimilating to adjacent vowels. For example, Russian consonants are palatalized before certain suffixes beginning in [i] or [e] (Padgett, forthcoming; see also chapter 121: slavic palatalization):

(5)  Nom sg   Nom sg (dim)   Loc sg
     stol     stolʲik        stolʲe    ‘table’
     dom      domʲik         domʲe     ‘house’
     ʃar      ʃarʲik         ʃarʲe     ‘ball’
     zont     zontʲik        zontʲe    ‘umbrella’

A similar palatalization occurs in Nupe (Hyman 1970). Also in Nupe, consonants are rounded (or labial-velarized) before rounded vowels, e.g. [egʷu] and [egʷo] for /egu/ ‘mud’ and /ego/ ‘grass’ (tones not shown). It is worth noting that the examples of within-category assimilation presented above never involve a vowel changing the features of a glide or of a consonant’s secondary articulation. Such cases seem at best rare, but it is not clear why that should be. Again, what all within-category interactions have in common is interaction among overtly (semi-)vocalic elements. Glides are [−consonantal] in the feature theory of Chomsky and Halle (1968; SPE). Vocalic secondary articulations are likewise basically vocalic in constriction degree, even if they accompany primary constrictions that are [+consonantal]. For reasons that will become clear below,


C–V interactions of this sort have created little controversy in phonological theory. This is in contrast to C–V interactions in which the primary articulation of a [+consonantal] segment (chapter 22: consonantal place of articulation) appears to interact with the place of a vowel, cases called “cross-category” here.

2.2 Cross-category interactions

Numerous cases are known in which plain (not rounded) labial consonants cause vowels to be round. This happens, for example, in a dialect of Mapila Malayalam described by Bright (1972).3 In this dialect, a vowel is inserted for apparently phonotactic reasons. The vowel is generally something like [ɨ] (not a phoneme of the language), as shown in (6a); but it surfaces as [u] after [o] or [u] (6b) or after a labial consonant (6c). The rule is productive, applying even in borrowings like [trippu].

3 Bright relies on Upadhyaya (1968) for data. The Mapila Malayalam data resemble the more often-cited Tulu facts also discussed by Bright; in fact, Bright suggests that the Mapila Malayalam facts are due to contact with Tulu.

(6)  a. paːlɨ      ‘milk’     b. onnu    ‘one’       c. tʃaːvu    ‘death’
        pandɨ      ‘shake’       nuːru   ‘hundred’      dʒappu    ‘pound’
        kurvaːnɨ   ‘Koran’       unnu    ‘dine!’        islaːmu   ‘Islam’
        dressɨ     ‘dress’       oːÕu    ‘run!’         trippu    ‘trip’

Another well-known case occurs in Turkish (Lees 1961; Lightner 1972). Within historically native Turkish roots, a high vowel following [a] and any intervening consonants is normally [ɯ] (sometimes transcribed [ɨ]). But it is [u] when a labial consonant intervenes, e.g. [javru] ‘cub, chick’, [armud] ‘pear’. Cross-category dissimilations also occur. For example, in Cantonese a syllable rhyme cannot have both a rounded vowel and a labial coda, e.g. *[up] (Cheng 1991). Other languages showing C–V interactions involving vowel rounding and plain labial consonants are discussed by Hyman (1973), Campbell (1974), Sagey (1986), Clements (1991), Selkirk (1993), Flemming (1995), and Anttila (2002). There seems to be a similar connection between coronal place of articulation (chapter 12: coronals) and front vowels. A frequently cited example comes from Maltese Arabic (Brame 1972; Hume 1994, 1996). In imperfective Measure I verbs, the prefix vowel is normally identical to the vowel of the stem, as shown in (7a). However, when the stem begins with a coronal obstruent, the prefix vowel is [i], (7b). Note that some of the verbs in (7b) undergo an independently existing ablaut by which the imperfective stem vowel becomes [o]; this occurs in verbs without initial coronal obstruents too, e.g. [barad] vs. [jo-brod] ‘to file’. In these verbs the prefix vowel is normally [o].

(7)  a.  perfective   imperfective
         kotor        jo-ktor        ‘to increase’
         ʔasam        ja-ʔsam        ‘to break’
         peles        je-ples        ‘to set free’
         nizel        ji-nzel        ‘to descend’   (UR = /nizil/)

     b.  dapal        ji-dpol        ‘to enter’
         talab        ji-tlob        ‘to pray’
         sepet        ji-spet        ‘to curse’
         dʒabar       ji-dʒbor       ‘to collect’

Hume (1994, 1996) treats the general vowel copy as a case of feature stability: when the first vowel of the imperfective stem is deleted (by a normal syncope rule), its vowel place features surface on the underlyingly featureless prefix vowel. In (7b), however, a rule applies by which the prefix vowel acquires its frontness from a coronal obstruent; this rule takes precedence over the feature stability rule. In another widely cited case, non-low vowels in Cantonese must be front when between coronal consonants (Cheng 1991); next to [tit] ‘iron’ and [tøn] ‘a shield’ there are no forms like *[tut] or *[ton]. Similarly, in Kabardian the vowels /a/ and /ə/ are allophonically fronted before coronal consonants, e.g. /ʃəd/ ‘donkey’ and /zaz/ ‘bile’ become [ʃed] and [zæz], respectively (Colarusso 1992: 30). Discussion of other cases can be found in Clements (1976, 1991), Hume (1994), and Flemming (1995, 2003). Dorsal consonants can trigger backing of vowels. This is clearly true when the consonants in question are uvular, analyzed by many as having a [pharyngeal] (Herzallah 1990; McCarthy 1994) or [RTR] (Rose 1996) component in their place of articulation, as well as a [dorsal] one. In fact, uvulars, which are [+back, −high] in the SPE framework, can trigger backing and/or lowering. The data in (8) from Inuktitut are taken from Buckley (2000), who cites Schultz-Lorentzen (1945) and Fortescue (1984). The high vowels seen in (8b) are lowered to mid before either of the language’s uvular segments, [q] and [ʁ], (8a). This is an allophonic change, since the vowel phonemes of Inuktitut are /i u a/. According to Rischel (1974), this rule involves retraction as well as (or even more than) lowering, though this is not obvious from the transcriptions. As Elorrieta (1991) notes, this is consistent with the notion that uvulars are pharyngealized dorsal consonants.

(8)  a. seʁme-q    ‘glacier’      b. seʁmi-t   ‘glaciers’
        ike-ʁput   ‘our wound’       iki-t     ‘your wound’
        uvdlɔ-q    ‘day’             uvdlu-t   ‘days’

In Kabardian, the phonemes /ə a/ are backed before uvulars, e.g. /baq/ → [bɑqʰ] ‘cow shed’ (Colarusso 1992: 30).4 Velar consonants, which are also [dorsal], can also cause backing and/or raising. One example is from Maxakalí (Gudschinsky et al. 1970; Clements 1991). Tautosyllabic VC sequences tend to display an excrescent vowel which either replaces the consonant or forms a transition from vowel to consonant, depending on aspects of the environment. The place of this excrescent vowel depends on the place of the consonant. As shown in (9a), that vowel is /ə/ before alveolars. But it is a high back vowel before velars, (9b). (The vowel is also [i] before “alveo-palatals,” and something like [ʌ] before labials.)5

4 Colarusso states that backing affects /ə/ also, and his rule predicts [-], but he transcribes [ə].
5 The relevant excrescent vowel is underlined. Along with the excrescent vowel, a preceding glide can appear. A breve indicates the vowel is non-syllabic. Gudschinsky et al. actually indicate a good deal of variation in these excrescent vowel qualities.

(9)  a. /mit/        [mbijə̆t̚]      ‘sound of a jaguar’s footsteps’
        /kot nak/    [kowə̆ daɯ̆x]   ‘dry manioc’
     b. /noʔok/      [ndoʔoɯ̆x]     ‘to wave (something)’
        /kɯcakkɯk/   [kɯʃaɯ̆kɯx]    ‘capybara (species of rodent)’

If Clements (1991) is right that [ə] represents the basic quality of the excrescent vowel, then [k] (and also [ŋ]) seem to cause it to raise and back. Similarly, in Yoruba, certain i-initial nouns show the /i/ backing to [u] when a velar precedes. This occurs in a reduplicative context, e.g. /ki + isɔ/ → [isɔkusɔ] ‘saying, foolish/loose talk’ (Pulleyblank 1988: 245–246; tones omitted). Other cross-category cases involving velars are discussed in Ní Chiosáin and Padgett (1993), Clements and Hume (1995), and references therein.6 Finally, pharyngeal consonants often cause vowels to lower and back, particularly to [a]. In fact, this can be a property of all “guttural” consonants – uvulars, pharyngeals, and (for some languages) laryngeals – which can all be analyzed as having a [pharyngeal] component to their place of articulation (McCarthy 1994) (see also chapter 25: pharyngeals). The examples in (10), taken from Rose (1996), who cites Cowell (1964), are from Syrian Arabic. The feminine suffix /-e/, seen in (10a), is realized as [a] after gutturals, (10b).

(10)  a. daraʒ-e    ‘step’       b. waːʒh-a     ‘display’
         ʃerk-e     ‘society’       mniːħ-a     ‘good’
         madras-e   ‘school’        daggaːʁ-a   ‘tanning’

The examples of cross-category assimilation discussed so far involve consonants affecting vowels. A striking fact is that consonant-to-vowel cross-category assimilations are notably missing (Ní Chiosáin and Padgett 1993). The one clear exception to this claim is the case of palatalizing mutations. As many have noted (e.g. Clements 1976; Mester and Itô 1989), front vowels, especially higher ones, often trigger mutations of velars or dentals/alveolars to palato-alveolar (or a similar) place of articulation. Hume (1996) cites a case of velar mutation in Slovak (Rubach 1993) by which /k g x ɣ/ become [tʃ dʒ ʃ ʒ] respectively before any of [j i e æ] (see chapter 65: consonant mutation for more on mutations):

(11)  a. vnuk     ‘grandson’   vnutʃik      (dim)
      b. tsveng   ‘sound’      tsvendʒatʲ   ‘to sound’7
      c. strax    ‘fright’     straʃitʲ     ‘frighten’
      d. boɣ      ‘god’        boʒe         (voc)

Such cases are common, and clearly involve assimilation of a velar consonant to a front vowel. The existence of these cases might lead us to expect equally frequent assimilations to round vowels, such as /ku/ → [pu], or assimilations to [a], such as /fa/ → [ħa]. But assimilations like these, or the many others that can be imagined if vowels can cause place assimilation of a consonant, are glaringly absent.8

6 Some researchers have suggested that cross-category assimilations of vowels to velars like these are unexpectedly rare, compared to cases involving labial or coronal consonants (Ihiunu and Kenstowicz 1994; Flemming 1995). This seems possible, but no comprehensive comparative survey has been done. Flemming’s claim that they do not exist at all seems too strong.
7 The triggering vowel is assumed to be /æ/, which backs later in the derivation.

Apart from this asymmetry, another interesting fact about cross-category assimilations should be noted. Compared to within-category assimilations, they seem “weak” in several respects. First, they appear to be much less frequent. This seems especially clear if we compare within-category effects in which a consonant’s secondary articulation affects a vowel to cross-category effects in which a consonant’s primary articulation affects a vowel, e.g. /kʷɨ/ → [kʷu] vs. /pɨ/ → [pu]. If we keep in mind that consonants with secondary articulations occur in a minority of languages while all languages have plain consonants (see e.g. Maddieson 1984), the difference is very striking. The seeming exception to this generalization involves gutturals, which, when present in a language, seem likely to trigger assimilation of a vowel (McCarthy 1994; Rose 1996). Second, cases in which a consonant’s primary place takes precedence over its secondary place in determining a vowel’s place features, e.g. /pʲɨ/ → [pʲu], seem non-existent (Ní Chiosáin and Padgett 1993; Flemming 1995, 2003). Third, cross-category effects often seem to involve vowels that are “underspecified” (chapter 7: feature specification and underspecification) in the sense of being either epenthetic (chapter 67: vowel epenthesis), reduplicative (chapter 100: reduplication), or central (see also chapter 58: the emergence of the unmarked). The case of Mapila Malayalam is typical: the vowel [ɨ] is not one of the language’s phonemes /i e a o u/, it is predictably inserted, and it is central. Central vowels (whether predictable or not) are often hypothesized to lack specification to some degree, whether phonetically (Browman and Goldstein 1992) or phonologically (Kaye et al. 1985; Clements 1991; Lombardi 2003) (see also chapter 26: schwa). That cross-category effects often seem to be linked to central and epenthetic vowels suggests that they often can be feature-filling, but not feature-changing. Fourth, cross-category effects generally seem to be highly local: the consonant and vowel must be immediately adjacent or nearly so (chapter 81: local assimilation). Though some within-category effects seem to have this property too (e.g. rounding or palatalization of consonants by vowels), some clearly do not. For example, consonants can dissimilate across vowels, and yet cross-category dissimilations are local. Finally, some cross-category effects seem to need to “gang up” in order to apply. In the case of Cantonese vowel fronting mentioned above, the vowel must be surrounded by coronals in order to undergo the rule. In Feʔfeʔ-Bamileke, a labial consonant causes an adjacent reduplicating vowel to be round, but only when a round vowel is also present; likewise, a coronal consonant causes it to be front only when a front vowel is also present (Hyman 1972).9 These facts about cross-category effects, suggesting that they are in some sense “weak,” should arguably follow from any account of them.

8 Some apparent counterexamples are discussed below and in Ní Chiosáin and Padgett (1993). (See also chapter 72: consonant harmony in child language for a discussion of this possibility in child language.)
9 If the neighboring vowel is high, it is sufficient to cause the change. Otherwise the vowel and consonant together cause it.

3 C–V interactions and feature theory

Research on C–V interactions became particularly active within the context of the development of feature geometry theory. This chapter assumes a basic familiarity with the workings of autosegmental phonology and feature geometry (see chapter 14: autosegments and chapter 27: the organization of features). A good starting point for our discussion is a feature geometry representation based on the influential work of Sagey (1986), with some modifications suggested by McCarthy (1988), shown in (12) (some details omitted).

(12)  Feature geometry

      Root
      ├─ Laryngeal
      │    ├─ [voice]
      │    └─ [spread glottis]
      ├─ [nasal]
      ├─ [continuant]
      └─ Place
           ├─ [labial]
           │    └─ [round]
           ├─ [coronal]
           │    ├─ [anterior]
           │    └─ [distributed]
           └─ [dorsal]
                ├─ [high]
                ├─ [low]
                └─ [back]

Focusing on the place of articulation features, one notable property of Sagey’s model is its basis in active articulators of the vocal tract: [labial], involving the lips, [coronal], involving the tongue tip and/or blade, and [dorsal], involving the tongue body. In its grounding in articulation the model is in the tradition of SPE. However, Sagey’s model departs from SPE in various respects, including in holding that the articulator nodes [labial], [coronal], and [dorsal] are privative, and that they are organizational nodes in feature geometry, as shown in (12).10 As Sagey argues, an advantage of an articulator-based model like this is that it easily represents complex segments – segments that have more than one place of articulation, such as [k͡p] (chapter 29: secondary and double articulation). In SPE, by comparison, velars are [−anterior, −coronal] and labials are [+anterior, −coronal]. In such a system it is unclear how to specify a segment that is both labial and velar. This point is relevant to us, since consonants bearing vocalic secondary articulations are complex segments. For example, in Sagey’s terms the segments [tʷ] and [tʲ] are represented as in (13a) and (13b), focusing only on place features.11 Vowels are also specified in terms of the features in (12) and are often complex segments themselves. The vowels [u] and [i], for instance, are specified as in (13c) and (13d). The representation in (13d) assumes that [i] has an active lip-spreading gesture, which requires involvement of the lips. Since all vowels are specified for tongue body features, all vowels have a [dorsal] specification.

10 For Sagey they are privative because they are class nodes rather than features. Like Clements (1991) and others, I interpret them as features and assume features can be dependent on other features.
11 Some features are omitted for simplicity, including [distributed], [high], and [low]. [tʷ] is understood as labialized, not labial-velarized.

(13)  a. [tʷ]                       b. [tʲ]
         Place                         Place
         ├─ [labial]                   ├─ [coronal]
         │    └─ [+round]              │    └─ [+ant]
         └─ [coronal]                  └─ [dorsal]
              └─ [+ant]                     └─ [−back]

      c. [u]                        d. [i]
         Place                         Place
         ├─ [labial]                   ├─ [labial]
         │    └─ [+round]              │    └─ [−round]
         └─ [dorsal]                   └─ [dorsal]
              └─ [+back]                    └─ [−back]
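Representations like (13) are, in effect, small trees, and can be rendered as nested mappings. The following sketch (an illustrative encoding, not a standard one) shows how complex segments simply bear more than one articulator node under Place.

```python
# Sagey-style place specifications from (13) as nested dicts:
# articulator nodes are keys; dependent features are subdicts.
t_w = {"Place": {"labial": {"round": "+"},  "coronal": {"ant": "+"}}}
t_j = {"Place": {"coronal": {"ant": "+"},   "dorsal": {"back": "-"}}}
u   = {"Place": {"labial": {"round": "+"},  "dorsal": {"back": "+"}}}
i   = {"Place": {"labial": {"round": "-"},  "dorsal": {"back": "-"}}}

def articulators(segment):
    # The (privative) articulator nodes the segment bears.
    return set(segment["Place"])

print(articulators(t_w))  # {'labial', 'coronal'}: a complex segment
print(articulators(u))    # {'labial', 'dorsal'}
```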

To give a more complete understanding of this feature system, Table 75.1 shows place specifications for plain, palatalized, and rounded consonants of all three major places of articulation and for five representative vowels. The symbol “✓” indicates specification of a privative (major) place feature. In this theory, specification of a feature such as [round] or [back] is possible only if the relevant major place – here [labial] or [dorsal] respectively – is specified. Otherwise, full feature specification is assumed for the sake of discussion. For our purposes, what is particularly worth noticing is the disjointedness of the consonantal vs. vocalic place specifications. Unless they are labialized or palatalized, coronal consonants have nothing in common with vowels. Plain labials are like rounded vowels in having a [labial] specification – but not in being rounded. The exception to this disjointedness is with [dorsal] consonants, which, following SPE, are specified for vocalic tongue body features. This is in fact how velars are distinguished from uvulars ([−high, −low]) and pharyngeals ([−high, +low]).

Table 75.1  Feature specifications (Sagey 1986)

                 p    pʲ   pʷ   t̪    t̪ʲ   t̪ʷ   k    kʲ   kʷ   i    y    a    o    u
[labial]         ✓    ✓    ✓              ✓              ✓    ✓    ✓         ✓    ✓
[round]          −    −    +              +              +    −    +         +    +
[coronal]                       ✓    ✓    ✓
[anterior]                      +    +    +
[distributed]                   +    +    +
[dorsal]              ✓              ✓         ✓    ✓    ✓    ✓    ✓    ✓    ✓    ✓
[high]                +              +         +    +    +    +    +    −    −    +
[low]                 −              −         −    −    −    −    −    +    −    −
[back]                −              −         +    −    +    −    −    +    +    +

Within such a model, many within-tier interactions are straightforward to represent. For example, the dependence in Irish of a short vowel’s backness on the following consonant (see (3)) is an assimilation, as shown in (14a).12 The assimilation of a plain consonant to a round vowel, as in Nupe [egʷo] ‘grass’, is as in (14b). (The rule spreads [+round], and [labial] is inserted on the consonant by Node Interpolation (Sagey 1986). Alternatively, [labial] spreads to the consonant.) Apart from the advantages of an articulator-based representation, it is not the particular geometry assumed here that makes within-category effects easy to represent. The point is that vowels, glides, and (semi-)vocalic secondary articulations on consonants are assumed to employ the very same set of place features. If that is true, then whatever the geometry, there is no (at least general) difficulty in representing within-category processes.

(14)  a. /mˠIdʲ/ → [mˠɪdʲ]
         I:  Place — [dorsal]
         dʲ: Place — [coronal]; [dorsal] — [+high], [−back]
         (the consonant’s [dorsal] node spreads to the vowel’s Place)

      b. /ego/ → [egʷo]
         g: Place — ([labial])
         o: Place — [labial] — [+round]; [dorsal]
         (the vowel’s [+round] spreads to the consonant)

Compare the situation with cross-category assimilations. Recall that in Mapila Malayalam the vowel /ɨ/ is rounded after a plain labial consonant (see (6)). Since it is a plain (not labialized) labial consonant that is in question, it cannot be specified [+round]. Whether the consonant is specified [−round] or unspecified for [round], spreading [labial] as in (15a) will not cause the vowel to round. In the case of Maltese Arabic (see (7)), the problem seems even worse. The feature [−back] needed to achieve [i] is not part of the representation of a coronal consonant.13 Spreading [coronal], the only seeming option, is of no help. The issue once again is not about the feature geometry pursued. The problem is that the place features that are assumed to define consonantal place are largely disjoint from those assumed to define vocalic place. There is therefore no straightforward way to explain why plain consonants can affect vowels in this way.

(15)  a. ?? /dʒappɨ/ → [dʒappu]
         p: Place — [labial] — ([−round])
         ɨ: Place — [dorsal]
         (spreading [labial] cannot supply [+round])

      b. ?? /jV-tlob/ → [ji-tlob]
         t: Place — [coronal]
         V: Place — [dorsal]
         (no [−back] is available to spread)

This problem of disjoint place features for consonants and vowels has been noted for some time. For example, before the advent of autosegmental phonology or feature geometry, Campbell (1974) and Clements (1976) had pointed out the problem for the SPE feature theory of processes like those of Mapila Malayalam and Maltese, respectively. Campbell noted the lack of connection between the features [labial] and [round], and Clements did the same for [coronal] and [back].

12 Irrelevant detail will often be omitted in representations shown.
13 Hume (1994) argues that the vowel’s height is a default value and so doesn’t need to be spread.

By comparison, cross-category effects involving [dorsal] consonants make sense in the feature theory of SPE/Sagey (1986), because such consonants are specified for [high], [low], and [back]. For example, the vowel lowering of Inuktitut (see (8)) can be represented as in (16). The point holds equally if Inuktitut is better interpreted as [RTR] spread from a consonant. To put it differently, according to this theory assimilations by vowels to [dorsal] consonants are, in a sense, within-category effects: the influence of [dorsal] consonants is accomplished through vowel place features.

(16)  /seʁmi-q/ → [seʁme-q]
      i: Place — [dorsal]
      q: Place — [dorsal] — [−high]
      (the uvular’s [−high] spreads to the vowel)

Unified feature theory

The idea that consonants and vowels should be specified by the same set of place features has been motivated by researchers in diverse frameworks, including Schane (1984, 1987), Kaye et al. (1985), Anderson and Ewen (1987), Selkirk (1988, 1993), van der Hulst (1989), and Clements (1991). Selkirk and Clements cast the idea roughly in terms of the features of Sagey (1986), as shown in Table 75.2.14 Following McCarthy (1994), a feature [pharyngeal] is now included. McCarthy argues that uvular, pharyngeal and, for at least some languages, laryngeal consonants have in common a [pharyngeal] specification. Table 75.2 Unified place features for consonants and vowels (Clements 1991)

C-Place

[labial]

p

p j pw





}

}j

}w







k

kj

kw







[dorsal] [pharyngeal]

14

y

a

o

u











[labial] [coronal]

i



[coronal]

V-Place

p

✓ ✓

✓ ✓

✓ ✓

✓ ✓



[dorsal]



[pharyngeal]



14 However, Clements treats these features as binary valued. The features [anterior] and [distributed] are not shown here.

The unified feature approach capitalizes on the apparent articulatory parallelism between consonants and vowels: both labial consonants and round vowels involve a constriction at the lips; both coronal consonants and front vowels involve a constriction at the tip/blade/front of the tongue; both dorsal consonants and back vowels involve a constriction at the tongue dorsum; and both pharyngeal consonants and low vowels involve a constriction between the tongue root and the pharynx wall. (The parallelism in the case of [coronal] is the most questionable, as we will see later.) For Clements (1991) and others working within this framework (including Herzallah 1990; Hume 1990, 1994), the consonant–vowel parallelism does not extend to vowel height (or stricture features in consonants). Distinct features are still needed for these properties of segments. This means that the unified feature approach obviates the vowel color features [back] and [round], but not [high] and [low]. Of course, vocalic rounding is not articulatorily identical to the labial constriction of a consonant; likewise for the other parallel features. For these unified features to be phonetically interpreted we require reference to a segment’s manner features. For example, if a sound is specified as [−consonantal] then [labial] is interpreted as lip rounding. Alternatively, the relevant information is read off feature-geometric structure. Thus Herzallah (1990), Clements (1991), Hume (1994, 1996), and Clements and Hume (1995) locate [labial], [coronal], [dorsal], and [pharyngeal] under separate C-Place and V-Place nodes, depending on whether a consonantal or vocalic constriction is intended. In these terms, the segments seen above in (13) are now rendered as in (17). Segment (17a) is interpreted as [tʷ] because [labial] is a V-Place feature while [coronal] is a C-Place feature; and so on for the other representations.15

(17)  a. [tʷ]                              b. [tʲ]
         C-Place — [coronal] — [+ant]         C-Place — [coronal] — [+ant]
         V-Place — [labial]                   V-Place — [coronal]

      c. [u]                               d. [i]
         C-Place                              C-Place
         V-Place — [labial], [dorsal]         V-Place — [coronal]

15 These representations simplify the full geometry assumed by the references cited, to focus on what is crucial here. There are reasons for assuming that V-Place is a dependent of C-Place (or “Place” according to some), instead of a sister, for example, but a consideration of these would take us too far afield. See Clements (1991), Odden (1991), and Ní Chiosáin (1994) for discussion of this issue and for motivation of V-Place as a feature-geometric constituent.

Naturally, it remains true in this theory that vowel place features and secondary vocalic articulation features on consonants are the same. Therefore it remains straightforward to characterize within-category assimilations as seen in (14) above, now understood as in (18). Cases such as Inuktitut (see (8)) are also arguably within-category, as noted above.

(18)  a. /mˠIdʲ/ → [mˠɪdʲ]
         I:  C-Place — V-Place — [dorsal]
         dʲ: C-Place — [coronal]; V-Place — [coronal]
         (the consonant’s V-Place [coronal] spreads to the vowel)

      b. /ego/ → [egʷo]
         g: C-Place — [dorsal]; (V-Place)
         o: C-Place — V-Place — [labial], [dorsal]
         (the vowel’s V-Place [labial] spreads to the consonant)

What is new with unified features is the possibility of directly capturing cross-category assimilations too. Compare the representations in (19) to the problematic (15) above. In (19a) (Mapila Malayalam), [labial] spreads from a consonant to a vowel. Notice that [labial] is linked to C-Place for the consonant and to V-Place for the vowel. It is therefore interpreted as consonantal lip constriction for the consonant and as rounding for the vowel. Similar reasoning holds for (19b) (Maltese; see Hume 1994, 1996). Cases of backing around dorsal consonants, as in Maxakalí (see (9)), similarly involve the spreading of [dorsal] from consonant to vowel.

(19)  a. /dʒappɨ/ → [dʒappu]
         p: C-Place — [labial]
         ɨ: C-Place — V-Place
         (the consonant’s [labial] is linked under the vowel’s V-Place)

      b. /jV-tlob/ → [ji-tlob]
         t: C-Place — [coronal]
         V: C-Place — V-Place
         (the consonant’s [coronal] is linked under the vowel’s V-Place)
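The unified analysis in (19) can be sketched in the same toy style: a single [labial] token links under C-Place on the consonant and under V-Place on the vowel, where it is read as rounding. Again the encoding is only illustrative.

```python
# Unified features with C-Place/V-Place nodes, as in (19a).
p_cons  = {"C-Place": ["labial"], "V-Place": []}
v_schwa = {"C-Place": [], "V-Place": []}   # a placeless epenthetic vowel

def link_to_vplace(consonant, vowel, feature):
    # Spread the consonant's C-Place feature under the vowel's V-Place,
    # where the same [labial] is interpreted as vocalic rounding.
    if feature in consonant["C-Place"] and feature not in vowel["V-Place"]:
        return {**vowel, "V-Place": vowel["V-Place"] + [feature]}
    return vowel

print(link_to_vplace(p_cons, v_schwa, "labial"))
# {'C-Place': [], 'V-Place': ['labial']} -- the vowel surfaces rounded: [u]
```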

Finally, cases in which a vowel becomes [a] around a guttural consonant, as in Syrian Arabic (see (10)), are analyzed as the spreading of [pharyngeal] (Herzallah 1990; McCarthy 1994; Rose 1996), as shown below.

(20)  /mniːħ-e/ → [mniːħ-a]
      e: C-Place — V-Place
      ħ: C-Place — [pharyngeal]
      (the guttural’s [pharyngeal] is linked under the vowel’s V-Place)

In short, unified feature theory solves the problem of cross-category interactions by eliminating the disjointedness of consonantal and vowel place features.

If features can spread from C-Place to V-Place, as in (19) and (20), then we might expect that the reverse can happen. This is just what is proposed for palatalizing mutations of the sort seen in Slovak, where /k g x ɣ/ become /tʃ dʒ ʃ ʒ/ respectively before any of /j i e æ/ (see (11)). Since front vowels are characterized as [coronal] instead of [−back] in the unified theory, this kind of mutation can be viewed as assimilation, specifically “coronalization” (Broselow and Niyondagara 1989; Mester and Itô 1989; Pulleyblank 1989; Lahiri and Evers 1991; Hume 1996):

(21)  /vnuk/ → [vnutʃik]
      k: C-Place
      i: C-Place — V-Place — [coronal]
      (the vowel’s [coronal] is linked under the consonant’s C-Place)

The outputs of the Slovak rule are not simply [coronal]; they are fricated or affricated palato-alveolars. Hume (1994, 1996) reasonably assumes that front vowels are [−anterior] coronals. This entails that “coronalization” will output [−anterior] coronals too. The rest must follow from redundancy rules like [−anterior] → [+delayed release]. As noted above, this approach to unified features does not attempt to unify features for vowel height and consonantal stricture. This means that, when such features are affected by cross-category assimilation, it must be for independent reasons. For example, Hume (1994, 1996) argues that the front vowel derived in Maltese is [+high] [i] because this is the default height for vowels in the language. On the other hand, since [+low] is not a likely default height in Arabic, Herzallah (1990: 185) assumes that [pharyngeal] spreading as in (20) leads to a low vowel because of a redundancy rule [pharyngeal] → [+low].16 Since gutturals seem to cause assimilation to [a] typically, this redundancy rule will be needed for other cases too. This might be seen to somewhat undermine the argument of unified feature theory. The point of unified features is to capture the assimilatory nature of cross-category effects. The lowering that occurs around gutturals seems just as assimilatory as the spreading of the pharyngeal constriction, so why treat it differently? The case of Maxakalí (see (9)) also supports the view that consonants can affect vowel height as well as vowel color. Recall that the inserted vowel in that language is [ə] before alveolars; Clements (1991) suggests that this is the default inserted vowel. But before velars the inserted vowel is both back and high, [ɯ]. A redundancy rule [dorsal] → [+high] might work, but, side-by-side with [pharyngeal] → [+low], it begs the question why we do not allow that consonants directly affect height as well as color. This question aside, unified features have a clear appeal. They explain cross-category effects, because they assume that coronal consonants and front vowels form a natural class, as do labial consonants and round vowels, etc. Apart from assimilations and dissimilations like those already seen, more evidence for such natural classes comes from instances of vowel strengthening or consonantal weakening. For example, when the vowel [i] and glide [j] are strengthened to consonants, they are strengthened to coronal consonants, or at least consonants with a coronal component – palatals or palato-alveolars. This occurs in Porteño Spanish when the (semi-)vowel is in onset position, as in /jelo/ (or /ielo/) → [ʒelo] ‘ice’ (see Harris and Kaisse 1999 and references therein). A relevant example of weakening comes from Irish lenition (chapter 66: lenition), whereby /b m/ are reduced to [w] (see Ní Chiosáin 1991). If front vowels and palato-alveolars are both [coronal], and if labial consonants and [w] are both [labial], then these processes can be understood as the “promotion” or “demotion” of those features in the C-Place/V-Place representation. As with the assimilations, however, such accounts will often need the help of redundancy rules.

16 Herzallah discusses Palestinian Arabic in this context. She also employs the vowel aperture features of Clements (1991), rather than [high] and [low].

5 Non-interaction and locality

Our discussion of C–V interactions has so far ignored an important issue. There are ways in which consonants and vowels apparently fail to interact, and our theory needs to explain these too. Perhaps the most basic question arises from the simple observation that consonants typically seem to be transparent to vowel harmonies and other kinds of vowel-to-vowel place assimilations (chapter 91: vowel harmony: opaque and transparent vowels; chapter 118: turkish vowel harmony; chapter 123: hungarian vowel harmony). In Turkish, for example, vowels harmonize for roundness as well as backness (Lees 1961; Clements and Sezer 1982). Most consonants are transparent to the harmony.17 Particularly relevant to the discussion here, labial consonants are transparent to round harmony, as in [somun] ‘loaf’, and coronal consonants are transparent to backness harmony, as in [økyz] ‘ox’.18 The issue raised by such cases is schematized in (22).

(22)  a.  o          m          I
          [labial]   [labial]

      b.  y          z          I
          [coronal]  [coronal]

Unification of place features for consonants and vowels is motivated by the cross-category interactions we have seen so far. However, the ability to block spreading is also a kind of interaction (if a passive one), and the principles of autosegmental phonology imply that spreading as in (22) should be blocked. A similar implication arises for vocalic [dorsal] spreading through dorsal consonants (not shown). These representations cross lines, a maneuver ruled out within autosegmental phonology for features on the same tier.19 17

17 The exceptions are palatalized consonants in certain limited circumstances. Since palatalization is a V-Place specification, this blocking is within-category. Blocking in such cases is the rule across languages, in contrast to the situation with plain consonants.
18 These examples involve harmony in the root, but the observation about transparency holds equally for harmony between a stem and a suffix.
19 See Hammond (1988), Sagey (1988), Bird and Klein (1990), Coleman (1991), Scobbie (1991), and Archangeli and Pulleyblank (1994) on deducing the ill-formedness of line crossing within the theory.

As we have seen, Clements (1991), Clements and Hume (1995), and others working within this unified features framework locate vocalic and consonantal place features under distinct nodes in feature geometry, V-Place and C-Place respectively. The full representations for the scenarios in (22) (modulo some irrelevant simplifications) are shown in (23) below. In feature geometry, a plane on which association lines spread is defined by adjacent tiers. Therefore, the plane defined by the [labial] tier and the C-Place tier in (23a) is different from that defined by the [labial] tier and the V-Place tier. Clements (1991) and Clements and Hume (1995) suggest that, even when the same feature such as [labial] is involved in the spreading, line crossing is prohibited only within a plane. Therefore spreading, as in (23a) and (23b), is allowed. (Put differently, apparent line-crossing is only a problem when the crossed lines link to the same mother node in the geometry.)

(23)  a. o: C-Place — V-Place — [labial]
         m: C-Place — [labial]
         I: C-Place — V-Place
         (the o’s V-Place [labial] spreads to I’s V-Place without crossing a line on its own plane)

      b. y: C-Place — V-Place — [coronal]
         z: C-Place — [coronal]
         I: C-Place — V-Place

This suggestion raises questions about the formal understanding of tiers and planes that have not been fully explored. In any case, the worry about non-interaction of C-Place and V-Place features goes beyond this kind of spreading. For example, many languages place restrictions on homorganic consonants occurring within forms (chapter 86: morpheme structure constraints). In autosegmental phonology these have been explained by means of the Obligatory Contour Principle, which prohibits tier-adjacent identical feature specifications (see for example McCarthy 1986; Mester 1986; Yip 1989; Frisch et al. 2004 and references therein) (see also chapter 14: autosegments). Such restrictions can apply to consonants separated by vowels, and, crucially, do not seem to be blocked even by vowels of the “same” place of articulation; that is, forms such as [bom] are as ill-formed as [bam].20 But given unified features, the consonants’ [labial] features in (24a) are not tier-adjacent, since the vowel’s [labial] intervenes. Why should this form be dispreferred? (The answer is not because sequences such as [bo] or [om] themselves are ruled out; they are not in most languages having this kind of dissimilation.) Analogous issues arise with front vowels and coronal consonants, etc., and the same general issue arises in reverse when vowels dissimilate across all consonants, as in Ainu (Itô 1984).21 To deal with this question, Clements (1991) suggests that instances of a feature are not on the same tier when they are dominated by different mother nodes, and in fact to highlight this point he draws representations as in (24b).22 Hume (1994, 1996) and Clements and Hume (1995) frame a similar idea differently: there is one [labial] tier, as in (24a), but two instances of a feature can fail to interact when they are dominated by different mother nodes. Therefore the [labial] features dominated by C-Place may interact with each other and may each fail to interact with the intervening [labial].

20 One exception is Akkadian (McCarthy 1979; Yip 1988; Hume 1994; Odden 1994). A prefix /m/ dissimilates to [n], given another labial consonant in the stem. The sounds [u w] do not trigger dissimilation, but they do block it.
21 There are other theoretical approaches to dissimilation that do not appeal to tier-adjacency, including the idea of local self-conjunction advocated by Itô and Mester (1996) and Alderete (1997). Such approaches might avoid the question raised by (24).

(24)  a. b: C-Place — [labial]
         o: C-Place — V-Place — [labial]
         m: C-Place — [labial]
         (a single [labial] tier)

      b. b: C-Place — [labial]
         o: C-Place — V-Place — [labial]
         m: C-Place — [labial]
         (the C-Place [labial]s and the V-Place [labial] drawn on separate tiers)

Obviously there should be concern at this point about losing the gains made with unified features. If C-Place and V-Place features are on different tiers, or if they can fail to interact because C-Place and V-Place are different mother nodes, then why do C-Place and V-Place features ever interact? Selkirk (1988, 1993) considers many of the same issues, and makes the important observation that cross-category dissimilations and co-occurrence restrictions seem to hold only under segmental adjacency.23 Both Selkirk and Clements suggest that this observation be elevated to a principle. A similar, though more general, observation was made at the end of §2 above: cross-category dissimilations and assimilations, unlike within-category cases, are always highly local. To summarize: according to a unified features view of C–V interactions, consonant and vowel place features are unified, and so can interact. But cross-category interaction seems limited to (near-)segmental adjacency, and a unified feature theory must address this limitation by separate stipulation (e.g. interaction can happen across tiers/with different mother nodes only under (near-)segmental adjacency). If the empirical observations here are on the right track, one might still raise questions about the account. In particular, the very motivation for unified features seems weakened by the need to stipulate non-interaction except under close adjacency. In addition, the latter stipulation does not follow from anything else in the theory; cross-category effects are limited in a way not really explained.
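The mother-node proposal can be phrased as a small adjacency check: for the OCP, two [labial] tokens count as tier-adjacent only if they hang from the same kind of node, so a vowel’s V-Place [labial] in [bom] neither triggers nor blocks the constraint. This is a hedged sketch of the idea, with an assumed encoding.

```python
# Each token: (segment, mother node, feature).
bom = [("b", "C-Place", "labial"),
       ("o", "V-Place", "labial"),
       ("m", "C-Place", "labial")]

def ocp_violates(tokens, feature, mother="C-Place"):
    # Project the tier of same-mother tokens; intervening tokens with a
    # different mother node are invisible, so b...m remain tier-adjacent.
    tier = [t for t in tokens if t[2] == feature and t[1] == mother]
    return len(tier) >= 2

print(ocp_violates(bom, "labial"))  # True: *[bom], just like *[bam]
```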

6 Inherent vowel place specifications

An alternative approach to cross-category C–V interactions, also couched within feature geometry theory, is proposed by Ní Chiosáin and Padgett (1993) and Flemming (1995, 2003): perhaps “plain” consonants are not as plain as they are assumed to be. 22

22 This redefinition of the notion “tier” may render unnecessary the reference to planes in (23). If consonantal and vocalic [labial] are on different tiers, then there is by definition no line crossing (in the relevant sense) in such cases.
23 Selkirk’s notion of “cross-category” is actually more abstract than that employed here. Her notion makes some different empirical predictions.

To begin with, many consonants have been claimed to be complex segments, specified for both consonantal and vowel place features, even though they are “plain” in the sense of lacking secondary articulations that are transcribed. For example, there is a long-standing view that palatals, alveolo-palatals, and at least certain palato-alveolars are inherently specified for features indicating a high and front tongue body, and studies of their articulatory properties support this view (see Keating 1988, 1991; Keating and Lahiri 1993). This fact also explains why these segments are the most common outputs of palatalizing mutations of velars and coronals, assuming these mutations involve assimilation. Similarly, uvulars and pharyngeals may involve inherent specifications for vocalic tongue body and/or root position. In this view, they cause lowering or retraction of vowels because they are themselves specified for a feature like [−high] or [RTR] (see Chomsky and Halle 1968; and more recently Elorrieta 1991; Halle 1995; Rose 1996). If these consonants involve inherent vowel place specifications – in addition to independent consonantal specifications – then effects such as the lowering or retraction of vowels before uvulars in Inuktitut, or the raising and fronting of vowels before palatals and palato-alveolars in Kabardian, e.g. /ÚHù / → [Ú>ù] ‘tree’ and /Úaœ/ → [Úeœ] ‘(to) be bored, tired’ (Colarusso 1992: 30), are not cross-category assimilations at all. For example, the raising and fronting seen in Kabardian might be understood as in (25). For the sake of discussion, we revert to the familiar vowel place features of SPE, but following the literature on V-Place constituency (see note 15) continue to assume this aspect of the geometry. The palatal fricative [ ù] is assumed to have (at least) a primary [dorsal] specification. The point is that what spreads in this case are features of tongue body height and frontness that are uncontroversially relevant to vowels; in effect, palatals (and palato-alveolars in Kabardian) are understood as inherently palatalized segments. (25)

(25)  /ʒəù/ → [ʒɪù]

      [Feature-geometry diagram: the vowel and the fricative each project a C-Place node dominating a V-Place node. The fricative’s C-Place bears [dorsal]; its V-Place features [+high] and [−back] spread leftward to the vowel’s V-Place, delinking the vowel’s own [−high] and [+back].]
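The representational claim in (25) can be made concrete with a small sketch. This is our own illustration, not Padgett’s formalism: segments carry a C-Place node that may dominate a V-Place node, and spreading is modeled as node sharing rather than feature copying, which is what autosegmental linking amounts to.

```python
# A minimal sketch (ours, not from the chapter) of the geometry in (25).
from dataclasses import dataclass, field

@dataclass
class Segment:
    symbol: str
    c_place: set = field(default_factory=set)  # primary (consonantal) place
    v_place: set = field(default_factory=set)  # vocalic place features

def spread_v_place(trigger: Segment, target: Segment) -> None:
    """Spreading as node sharing: the target is relinked to the trigger's
    V-Place node, delinking (and thereby discarding) its own features."""
    target.v_place = trigger.v_place  # one node, two anchors - not a copy

# The Kabardian-type case: the fricative's inherent V-Place ([+high],
# [-back]) spreads leftward to the vowel, whose [-high], [+back] delink.
fricative = Segment("S", c_place={"dorsal"}, v_place={"+high", "-back"})
vowel = Segment("V", v_place={"-high", "+back"})
spread_v_place(fricative, vowel)
print(vowel.v_place)                       # {'+high', '-back'}
print(vowel.v_place is fricative.v_place)  # True: a genuinely shared node
```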

The question raised by work such as Ní Chiosáin and Padgett (1993) and Flemming (1995, 2003) is to what extent features relevant to vowel place might inhere within other “plain” consonants. If all cross-category C–V interactions were caused by such inherent features, then “cross-category,” though a useful classificatory term, would lose any theoretical import: all C–V interactions would be within-category. Ní Chiosáin and Padgett (1993) approach the phonetic claim from an articulatory point of view, following the general articulatory approach to features in the SPE tradition. They note that consonantal constrictions involve offsets and onsets, i.e. movement from a previous position into the consonant, and movement from the consonant into a following position. The idea is diagrammed as in (26),


for a labial consonant in an intervocalic context such as [qbq]. During the onset transition there is a short period of time when the constriction is not yet consonantal, and yet this vocalic period is shaped by the impending consonant. Likewise, early in the release the constriction becomes vocalic while still shaped by the preceding consonant. As the diagram implies, this must have effects on the quality of the vowel near the closure.

(26)  Stages of consonant production

      [Diagram: for [qbq], a schematic timeline – vowel, onset, closure (of [b]), offset, vowel – with the onset and offset transitions overlapping the flanking [q] vowels.]

Flemming (1995) emphasizes these acoustic or auditory effects of the transitions into and out of consonants, and in fact argues for the incorporation of auditory-based features into phonological theory. It is well known that consonantal offsets and onsets influence vowel formants; in fact, the resulting dynamic formant transitions are an important cue to both vocalic and consonantal place. Flemming notes that labial consonants, for example, lower vocalic F2 (second formant) values, while dental and alveolar coronals raise F2.24 Since vowel place (particularly vowel color) is cued by the location of F2, consonants therefore have the inherent ability to affect perception of vowel place. Flemming (2003) discusses the articulatory basis of the acoustic effects in more detail, focusing on coronal consonants. According to his survey of relevant studies, anterior coronals and palato-alveolars tend to cause tongue body fronting, due to coupling between the tongue blade/tip and the tongue body, and this is the reason for the rise in F2 around such consonants.

With this in mind, consider once again the fronting of vowels by coronal consonants, as in Maltese Arabic (see (7)). The idea is that fronting happens not because the consonant spreads its primary [coronal] articulation as in (27a), but because the consonant has some inherent tongue fronting, with a concomitant effect on F2, and this is what spreads to the vowel, (27b). For the sake of discussion this is formalized in terms of the traditional feature [back].

(27)  a. /jV-tlob/ → [ji-tlob]
         [Diagram: primary [coronal] under the C-Place of /t/ spreads to the V-Place of the preceding vowel.]
      b. /jV-tlob/ → [ji-tlob]
         [Diagram: [−back] under the V-Place of /t/ spreads to the V-Place of the preceding vowel.]

Similarly, Mapila Malayalam (see (6)) involves not spreading of primary [labial] as in (28a) but spreading of the inherent vocalic labial constriction, and lowering of F2, formalized here by means of [+round], (28b).

24 Discussion of vowel formants and of formant transitions can be found in e.g. Stevens (1998) and Johnson (2003). Another advocate of auditory features in phonology is Boersma (e.g. 1998).

(28)  a. /–appq/ → [–appu]
         [Diagram: primary [labial] under the C-Place of /p/ spreads to the V-Place of the following vowel.]
      b. /–appq/ → [–appu]
         [Diagram: [+round] under the V-Place of /p/ spreads to the V-Place of the following vowel.]

In an analogous fashion, velars can spread inherent [+back] and/or [+high] in cases like Maxakalí (see (9)), and gutturals can spread inherent [+low] in cases like Syrian Arabic (see (10)).

It is uncontroversial that consonants phonetically affect vowels both articulatorily and acoustically, as this approach to “cross-category” interactions assumes. However, incorporating these relatively small phonetic effects into phonology raises questions. One question is prompted by our use of the conventional vowel place features [back], [round], etc. in the account above. How are we justified in specifying the Maltese [t] as [−back] if it isn’t palatalized [tʲ], or the Malayalam [p] as [+round] if it isn’t rounded [pʷ]? The answer suggested by Ní Chiosáin and Padgett (1993) is that such specifications are realized as bona fide secondary articulations when they are contrastive, but not when they are redundant. In a language such as Russian, where palatalized coronals contrast with non-palatalized ones, [−back] on [t] is realized as [tʲ]; in a language without this contrast, redundant [−back] is realized as [tⁱ] (notation borrowed from Ní Chiosáin and Padgett 2001; Flemming 2003), a coronal with only the inherent tongue body fronting described above. This answer to the question predicts that non-palatalized coronals in Russian could not be [−back] even inherently, since this specification is reserved for palatalization. This prediction is correct; in fact, non-palatalized sounds in Russian are velarized, i.e. [tˠ].

The inherent vowel place approach to “cross-category” effects arguably has some advantages over the unified feature approach. First, it helps explain an asymmetry in cross-category assimilations noted in §2: while vowel-to-consonant assimilation occurs, apparent consonant-to-vowel assimilations are strikingly underattested (Ní Chiosáin and Padgett 1993). The most robust example of the latter, as we saw, involves palatalizing mutations as in Slovak, where /k g x ɦ/ become [tʃ dʒ ʃ ʒ], respectively, before any of [j i e æ]. Within a unified features approach, some have argued that these are instances of “coronalization” (Broselow and Niyondagara 1989; Mester and Itô 1989; Pulleyblank 1989; Lahiri and Evers 1991; Hume 1996), as in (29a). A challenge for this claim is the absence of mutations resembling those in (29b) and (29c), or indeed resembling most of the logically possible consonant-to-vowel assimilations, if place features can link to both V-Place and C-Place.25

25 Ní Chiosáin and Padgett (1993) dismiss some claimed cases that are attested as sound changes but have no synchronic reflex. See that work for discussion of other apparent counterexamples.

(29)  a. /vnuk/ → [vnutʃik]
         [Diagram: [coronal] under the V-Place of /i/ links to the C-Place of /k/.]
      b. /vnup/ → [vnukQk]
         [Diagram: [dorsal] under the V-Place of the vowel links to the C-Place of /p/.]

      c. /vnut/ → [vnupuk]
         [Diagram: [labial] under the V-Place of /u/ links to the C-Place of /t/.]

Ní Chiosáin and Padgett (1993) argue that consonants simply do not assimilate to vowels in this way, having their primary place displaced, and that the theory should not allow them to. There is a natural explanation within inherent vowel place theory for this asymmetry in the direction of “cross-category” assimilations: consonants can affect vowels because they can have secondary vocalic articulations (whether distinctive or redundant). But vowels by definition do not have C-Place features with which to affect a consonant; they only have V-Place features, features that can only impose secondary vocalic articulations on consonants. Put differently, all C–V interactions are interactions between vowel place features.

To handle palatalizing mutations, Ní Chiosáin and Padgett, following many others, assume that assimilation only partially derives the output, as in (30a). Further changes from /kʲ/ to e.g. [tʃ] must be due to language-particular segmental well-formedness conditions, leading to something like (30b).26

(30)  a. /vnuk/ → [vnukʲik]
         [Diagram: [−back] under the V-Place of /i/ spreads to a (V-Place) node on /k/, whose C-Place bears [dorsal].]
      b. Restructuring → [vnutʃik]
         [Diagram: the affricate bears C-Place [coronal], [−ant], with the V-Place [−back] shared with /i/.]

As Clements and Hume (1995: 295–296; see also references therein) point out, however, such restructuring fails to explain why [coronal] in particular results in the context of a front vowel – just the fact that unified feature theory explains.

26 Though expressed as a rule here, in line with all of this discussion, the idea is commonly expressed in Optimality Theory by means of constraint rankings such as *kʲ >> *tʃ. This ranking, along with a high-ranking constraint driving [−back] assimilation, will lead to the output [tʃ].
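The effect of the ranking in note 26 can be simulated with a toy evaluator. The constraint implementations below are our own illustrative assumptions (ASCII “tS” stands for the affricate, and an Agree constraint stands in for whatever drives [−back] assimilation); only the ranking *kʲ >> *tʃ comes from the note.

```python
# Toy OT evaluation of note 26's ranking: Agree >> *kj >> *tS.
def violations(cand: str) -> list[int]:
    agree = 0 if cand in ("vnukjik", "vnutSik") else 1  # velar assimilated?
    star_kj = cand.count("kj")        # *kj: no palatalized velars
    star_ts = cand.count("tS")        # *tS: no palato-alveolar affricates
    return [agree, star_kj, star_ts]  # ordered highest- to lowest-ranked

candidates = ["vnukik", "vnukjik", "vnutSik"]
# Lexicographic comparison of violation profiles picks the candidate that
# fares best on the highest-ranked constraint where candidates differ.
print(min(candidates, key=violations))  # -> 'vnutSik'
```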


On the other hand, one might seek perceptual explanations for such arbitrary articulatory connections, by means of auditory-based features (see Flemming 1995 for this case in particular), or perhaps via the “p-map” (Steriade 2001).

Another advantage to the inherent vowel place approach is that it allows for more nuance in the ways that consonants can affect vowels (Flemming 1995, 2003). For example, Flemming (2003) (citing Emeneau 1970; Ebert 1996) notes that vowels are backed before retroflex coronals in the Dravidian language Koḍagu. Therefore there are words having central or back rounded vowels before retroflex consonants, as in (31a) and (31b), respectively, but there are no forms with front vowels before them, such as (31c).

(31)  a.  QÕi      ‘the whole’
          kQ(ÎQ    ‘lower, below’
          -Ke      ‘double’
          k-(ÕQ    ‘ruin’
      b.  uÕQku(ÎQ oKakko(Õ-
          ‘to put on (sari)’, ‘cooked rice’, ‘to dry’, ‘monkey’
      c.  *iÕ   *i(Î   *eK   *e(Õ

Flemming cites studies showing that retroflexes can be articulated with a retracted tongue body. Unlike other coronals, they tend not to be articulated with a fronted tongue body, because this leads to articulatory difficulty. As Flemming points out, unified feature theory predicts that even retroflex coronals should cause vowel fronting, if fronting is [coronal] spreading. But this does not occur.

Third, because it posits that all C–V interactions are within-category, by means of vowel place features, inherent vowel place theory does not require any adjustment of our understanding of tiers or interaction. Vowels do not block consonantal place dissimilations, because they do not have consonant place features. Consonants block vowel dissimilations or assimilations only when they do have vowel place specifications.

What about the “weakness” of “cross-category” effects? As we noted at the end of §2, they are weaker than “within-category” effects by a range of diagnostics. Why are they comparatively infrequent? Why are they confined to roughly segmental adjacency? Why do “within-category” effects always win out over “cross-category” ones when both are in theory possible (*/pʲq/ → [pʲu])? Why do “cross-category” effects often seem to target only “underspecified” vowel types? Why do they sometimes need to “gang up” to cause effects? We might attribute these signs of weakness to the intrinsic weakness of inherent vowel place features. As discussed at the outset of this section, the effects that “plain” consonants have on neighboring vowels are rather brief and slight. The hypothesis here has been that such effects can play a direct role in the phonology. However, feature theory, at least as traditionally conceived, provides no means of encoding this hypothesized difference between, e.g. “strong” (contrastive) and “weak” (inherent, redundant) [+round]. Until this idea is fleshed out, it is only a promissory note of the inherent vowel place approach.

Another weakness of the inherent vowel place approach to C–V interactions is that it has no immediate explanation for the natural classes of consonants and vowels evidenced by vowel strengthenings, as in Porteño Spanish /jelo/ (or /ielo/) → [ʒelo] ‘ice’, and consonant lenitions, as in Irish /b m/ → [w] (see §4). For processes like these, the unified feature approach has a clear advantage.

7 Conclusion

Consonant–vowel interactions are a rich source of data for phonological theory. This chapter has focused on the ways in which they have influenced the theory of feature make-up and structure, and for reasons of space it has approached even this circumscribed area selectively. Though much of the field has shifted its focus away from these representational questions in recent years, with a concomitant rise in the focus on questions of constraint interaction, this shift has not tended to shed new light on the questions raised in this chapter. It is to be hoped that new trends in the field will eventually allow us to address these questions in a newly productive way.

REFERENCES

Alderete, John. 1997. Dissimilation as local conjunction. Papers from the Annual Meeting of the North East Linguistic Society 27. 17–32.
Anderson, John M. & Colin J. Ewen. 1987. Principles of dependency phonology. Cambridge: Cambridge University Press.
Anttila, Arto. 2002. Morphologically conditioned phonological alternations. Natural Language and Linguistic Theory 20. 1–42.
Archangeli, Diana & Douglas Pulleyblank. 1994. Grounded phonology. Cambridge, MA: MIT Press.
Bird, Steven & Ewan Klein. 1990. Phonological events. Journal of Linguistics 26. 33–56.
Boersma, Paul. 1998. Functional phonology: Formalizing the interactions between articulatory and perceptual drives. The Hague: Holland Academic Graphics.
Brame, Michael K. 1972. On the abstractness of phonology: Maltese ʕ. In Michael K. Brame (ed.) Contributions to generative phonology, 22–61. Austin: University of Texas Press.
Bright, William. 1972. The enunciative vowel. International Journal of Dravidian Linguistics 1. 26–55.
Broselow, Ellen & Alice Niyondagara. 1989. Feature geometry and Kirundi palatalization. Studies in the Linguistic Sciences 20. 71–88.
Browman, Catherine P. & Louis Goldstein. 1992. “Targetless” schwa: An articulatory analysis. In Gerard J. Docherty & D. Robert Ladd (eds.) Papers in laboratory phonology II: Gesture, segment, prosody, 26–56. Cambridge: Cambridge University Press.
Buckley, Eugene. 2000. On the naturalness of unnatural rules. Proceedings from the 2nd Workshop on American Indigenous Languages, 1–14. Santa Barbara: Department of Linguistics, University of California, Santa Barbara.
Campbell, Lyle. 1974. Phonological features: Problems and proposals. Language 50. 52–65.
Cheng, Lisa L. 1991. Feature geometry of vowels and co-occurrence restrictions in Cantonese. Proceedings of the West Coast Conference on Formal Linguistics 9. 107–124.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clements, G. N. 1976. Palatalization: Linking or assimilation? Papers from the Annual Regional Meeting, Chicago Linguistic Society 12. 96–109.
Clements, G. N. 1991. Place of articulation in consonants and vowels: A unified theory. Working Papers of the Cornell Phonetics Laboratory 5. 77–123.
Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In John A. Goldsmith (ed.) The handbook of phonological theory, 245–306. Cambridge, MA & Oxford: Blackwell.


Clements, G. N. & Engin Sezer. 1982. Vowel and consonant disharmony in Turkish. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 213–255. Dordrecht: Foris.
Colarusso, John. 1992. A grammar of the Kabardian language. Calgary: University of Calgary Press.
Coleman, John & John Local. 1991. The “No Crossing Constraint” in autosegmental phonology. Linguistics and Philosophy 14. 295–338.
Cowell, Mark W. 1964. A reference grammar of Syrian Arabic (based on the dialect of Damascus). Washington, DC: Georgetown University Press.
Ebert, Karen. 1996. Koḍava. Munich: Lincom Europa.
Elorrieta, Jabier. 1991. The feature specification of uvulars. Proceedings of the West Coast Conference on Formal Linguistics 10. 139–149.
Emeneau, M. B. 1970. Koḍagu vowels. Journal of the American Oriental Society 90. 145–158.
Flemming, Edward. 1995. Auditory representations in phonology. Ph.D. dissertation, University of California, Los Angeles. Published 2002, London & New York: Routledge.
Flemming, Edward. 2003. The relationship between coronal place and vowel backness. Phonology 20. 335–373.
Fortescue, Michael. 1984. West Greenlandic. London: Croom Helm.
Frisch, Stefan A., Janet B. Pierrehumbert & Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22. 179–228.
Gudschinsky, Sarah, Harold Popovich & Frances Popovich. 1970. Native reaction and phonetic similarity in Maxakalí phonology. Language 46. 77–88.
Halle, Morris. 1995. Feature geometry and feature spreading. Linguistic Inquiry 26. 1–46.
Hammond, Michael. 1988. On deriving the well-formedness condition. Linguistic Inquiry 19. 319–325.
Harris, James W. & Ellen M. Kaisse. 1999. Palatal vowels, glides and obstruents in Argentinian Spanish. Phonology 16. 117–190.
Herzallah, Rukayyah. 1990. Aspects of Palestinian Arabic phonology: A non-linear approach. Ph.D. dissertation, Cornell University.
Hulst, Harry van der. 1989. Atoms of segmental structure: Components, gestures and dependency. Phonology 6. 253–284.
Hume, Elizabeth. 1990. Front vowels, palatal consonants and the rule of umlaut in Korean. Papers from the Annual Meeting of the North East Linguistic Society 20. 230–243.
Hume, Elizabeth. 1994. Front vowels, coronal consonants and their interaction in nonlinear phonology. New York: Garland.
Hume, Elizabeth. 1996. Coronal consonant, front vowel parallels in Maltese. Natural Language and Linguistic Theory 14. 163–203.
Hyman, Larry M. 1970. How concrete is phonology? Language 46. 58–76.
Hyman, Larry M. 1972. A phonological study of Feʔfeʔ-Bamileke. Studies in African Linguistics, Supplement 4.
Hyman, Larry M. 1973. The feature [grave] in phonological theory. Journal of Phonetics 1. 329–337.
Ihiunu, Peter & Michael Kenstowicz. 1994. Two notes on Igbo vowels. Unpublished ms., MIT.
Itô, Junko. 1984. Melodic dissimilation in Ainu. Linguistic Inquiry 15. 505–513.
Itô, Junko & Armin Mester. 1996. Rendaku 1: Constraint conjunction and the OCP. Paper presented at the Kobe Phonology Forum.
Johnson, Keith. 2003. Acoustic and auditory phonetics. 2nd edn. Malden, MA: Blackwell.
Kaye, Jonathan, Jean Lowenstamm & Jean-Roger Vergnaud. 1985. The internal structure of phonological elements: A theory of charm and government. Phonology Yearbook 2. 305–328.
Keating, Patricia. 1988. Palatals as complex segments: X-ray evidence. UCLA Working Papers in Phonetics 69. 77–91.


Keating, Patricia. 1991. Coronal places of articulation. In Paradis & Prunet (1991), 29–48.
Keating, Patricia & Aditi Lahiri. 1993. Fronted velars, palatalized velars, and palatals. Phonetica 50. 73–101.
Lahiri, Aditi & Vincent Evers. 1991. Palatalization and coronality. In Paradis & Prunet (1991), 79–100.
Lees, Robert B. 1961. The phonology of Modern Standard Turkish. Bloomington: Indiana University.
Lightner, Theodore M. 1972. Problems in the theory of phonology: Russian phonology and Turkish phonology. Edmonton: Linguistic Research Inc.
Lombardi, Linda. 2003. Markedness and the typology of epenthetic vowels. Unpublished ms., University of Maryland at College Park (ROA-578).
Maddieson, Ian. 1984. Patterns of sounds. Cambridge: Cambridge University Press.
McCarthy, John J. 1979. Formal problems in Semitic phonology and morphology. Ph.D. dissertation, MIT.
McCarthy, John J. 1986. Features and tiers: The structure of Semitic roots. Paper presented at Brandeis University.
McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108.
McCarthy, John J. 1994. The phonetics and phonology of Semitic pharyngeals. In Patricia Keating (ed.) Phonological structure and phonetic form: Papers in laboratory phonology III, 191–233. Cambridge: Cambridge University Press.
Mester, Armin. 1986. Studies in tier structure. Ph.D. dissertation, University of Massachusetts, Amherst.
Mester, Armin & Junko Itô. 1989. Feature predictability and underspecification: Palatal prosody in Japanese mimetics. Language 65. 258–293.
Ní Chiosáin, Máire. 1991. Topics in the phonology of Irish. Ph.D. dissertation, University of Massachusetts, Amherst.
Ní Chiosáin, Máire. 1994. Irish palatalisation and the representation of place features. Phonology 11. 89–106.
Ní Chiosáin, Máire & Jaye Padgett. 1993. Inherent V-Place. Report LRC-93-09, Linguistics Research Center, University of California, Santa Cruz.
Ní Chiosáin, Máire & Jaye Padgett. 2001. Markedness, segment realization, and locality in spreading. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 118–156. Cambridge: Cambridge University Press.
Odden, David. 1991. Vowel geometry. Phonology 8. 261–289.
Odden, David. 1994. Adjacency parameters in phonology. Language 70. 289–330.
Padgett, Jaye. Forthcoming. Russian consonant–vowel interactions and derivational opacity. In Wayles Brown, Adam Cooper, Alison Fisher, Esra Kesici, Nicola Predolac & Draga Zec (eds.) Proceedings of the 18th Formal Approaches to Slavic Linguistics meeting. Ann Arbor: Michigan Slavic Publications.
Paradis, Carole & Jean-François Prunet (eds.) 1991. The special status of coronals: Internal and external evidence. San Diego: Academic Press.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Pulleyblank, Douglas. 1988. Vocalic underspecification in Yoruba. Linguistic Inquiry 19. 233–270.
Pulleyblank, Edwin G. 1989. The role of coronal in articulator based features. Papers from the Annual Regional Meeting, Chicago Linguistic Society 25. 379–393.
Rischel, Jørgen. 1974. Topics in West Greenlandic phonology: Regularities underlying the phonetic appearance of wordforms in a polysynthetic language. Copenhagen: Akademisk Forlag.
Rose, Sharon. 1996. Variable laryngeals and vowel lowering. Phonology 13. 73–117.
Rubach, Jerzy. 1993. The Lexical Phonology of Slovak. Oxford: Clarendon Press.


Sagey, Elizabeth. 1986. The representation of features and relations in nonlinear phonology. Ph.D. dissertation, MIT.
Sagey, Elizabeth. 1988. On the ill-formedness of crossing association lines. Linguistic Inquiry 19. 109–118.
Schane, Sanford A. 1984. The fundamentals of particle phonology. Phonology Yearbook 1. 129–155.
Schane, Sanford A. 1987. The resolution of hiatus. Papers from the Annual Regional Meeting, Chicago Linguistic Society 23(2). 279–290.
Schultz-Lorentzen, C. W. 1945. A grammar of the West Greenland language. Copenhagen: C. A. Reitzel.
Scobbie, James M. 1991. Attribute value phonology. Ph.D. dissertation, University of Edinburgh.
Selkirk, Elisabeth. 1988. Dependency, place and the notion “tier.” Paper presented at the 63rd Annual Meeting of the Linguistic Society of America, New Orleans.
Selkirk, Elisabeth. 1993. [Labial] relations. Unpublished ms., University of Massachusetts, Amherst.
Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual account. In Elizabeth Hume & Keith Johnson (eds.) The role of speech perception in phonology, 219–250. San Diego: Academic Press.
Stevens, Kenneth N. 1998. Acoustic phonetics. Cambridge, MA: MIT Press.
Upadhyaya, Suseela P. 1968. Mapila Malayalam (a descriptive and comparative study). Ph.D. dissertation, University of Poona.
Yip, Moira. 1988. The Obligatory Contour Principle and phonological rules: A loss of identity. Linguistic Inquiry 19. 65–100.
Yip, Moira. 1989. Feature geometry and cooccurrence restrictions. Phonology 6. 349–374.

76 Structure Preservation: The Resilience of Distinctive Information

Carole Paradis and Darlene LaCharité

1 Introduction

All languages have a phonemic inventory, including a set of distinctive vowels and consonants, i.e. linguistic sounds that contribute to the meaning of a word. For instance, chip [tʃɪp] contrasts with cheap [tʃip] in English, on the basis of the vowel quality; in the first case, the high front vowel is lax, whereas in the second one it is tense. We therefore say that /ɪ/ and /i/ are two distinct phonemes (segments) in English (chapter 11: the phoneme) and that [tense] is a distinctive feature (chapter 17: distinctive features) for high vowels in this language.1

While phonemic inventories are built in agreement with the principles of Universal Grammar (UG), the exact composition of a phonemic inventory varies from one language to another. Along with the suprasegmental inventory, the phonemic inventory is a good part of what allows a listener to identify a language at first glance and to distinguish it from other languages. We expect speakers to resist either dropping phonemes or phonemic contrasts from their language’s inventory, or introducing new phonemes and phonemic contrasts – although this constitutes the bread and butter of language change – since the automatic consequence of such moves is a different system. We believe that resistance to change cannot be due simply to inertia – it is not passive.

In this chapter we will try to show that resistance to change is, above all, a question of contrast/category pattern resilience in the mind of the speaker, which is expressed intralinguistically (i.e. resistance to change due to the passage of time, dialect contact, etc.) and also interlinguistically (between L2 and L1, as will be illustrated in §3 with respect to loanwords). We will link contrast resilience to the traditional notion of Structure Preservation, providing a history of this notion in generative grammar in §2, and considering in §3 the question of whether it is still pertinent now that phonological rules have given way to constraints. We will also address the relation between Structure Preservation and phoneme/structure resilience in loanword adaptation from the point of view of L1 and L2. We conclude in §4.

1 Even if /i/ and /ɪ/ were to be distinguished by vowel length instead of tenseness, as proposed by some authors, the point made here would stand.


2 The history of Structure Preservation

It has long been noted that, intralinguistically, languages (or, more properly, their speakers) resist phonemic change before succumbing to and accepting a new phonemic contrast. Changes to a given phonemic inventory follow defined steps, which are gradual, and characteristically occur over a long period of time (chapter 2: contrast). Although such sound changes can sometimes occur relatively rapidly, it is not unusual for them to take centuries to complete. Broadly speaking, a small phonetic detail becomes sufficiently large over time that what begins by distinguishing phonetic variants ends up being categorical, i.e. phonemic (see Harris 1990 and Bybee 2008 for a detailed description of these steps). Clearly, though, the forces of change are counterbalanced by resistance to change, or intralingual change would typically proceed at a much faster rate and produce much more dramatic results than it usually does (chapter 94: lexical phonology and the lexical syndrome).

The lexicon is the crucial place where the battle between the forces of change and resistance to change takes place. In Lexical Phonology, the resistance to using non-phonemic sounds or sound combinations at the lexical level was expressed through the notion of Structure Preservation (SP). In Kiparsky (1982, 1985), SP regulated the application of phonological rules, constituting a ban on the introduction of phonemes at the lexical level that are not part of the underlying inventory.

(1) Structure Preservation (Kiparsky 1985: 88)
    If a certain feature is non-distinctive in a language we shall say that it may not be specified in the lexicon. This means that it may not figure in non-derived lexical items, nor be introduced by any lexical rule, and therefore may not play any role at all in the lexical phonology.2

The model assumed by Kiparsky is basically that in Figure 76.1:

Restricted Dictionary: underlying phonological inventory; underived lexical items.
      ↓
LEXICON: word-formation rules, lexical phonology; domain of application of SP.
      ↓
SYNTAX: syntactic rules, post-lexical phonology; SP does not apply at this level.

Figure 76.1  Lexical Phonology (Kiparsky 1982)

2 Kiparsky does not present this constraint formally. The constraint given here is a description of SP as presented in the text by Kiparsky (1985: 88).
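The prediction packed into (1) and Figure 76.1 amounts to a simple check on rule outputs. The sketch below is ours, with an invented toy inventory; it is not a formalization the authors give.

```python
# SP as an inventory check (a sketch; the inventory is a toy example).
UNDERLYING = {"p", "t", "k", "s", "z", "m", "n", "i", "e", "a", "o", "u"}

def is_structure_preserving(rule_outputs: set[str]) -> bool:
    """(1): a rule may apply lexically only if every segment it
    introduces is already part of the underlying inventory."""
    return rule_outputs <= UNDERLYING

print(is_structure_preserving({"s"}))  # True: may be a lexical rule
print(is_structure_preserving({"ç"}))  # False: predicted post-lexical
```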


As Harris (1987: 255) puts it, “the lexical segment inventory of a language (the output of the lexical rules) must be isomorphic with the underlying inventory.” Bybee (2008: 111) adds: “. . . alternations that are restricted to the word level involve only contrastive features. Segments or feature combinations that are non-contrastive must be introduced by postlexical rules . . .”3

Mohanan (1986) considers the formulation of SP given in (1) to be too restrictive. According to Mohanan, the Malayalam and English facts cannot be explained if SP is interpreted as in (1), so he softens it, saying instead, “the alphabet used for syntactico-phonological representations is the lexical alphabet” (1986: 174). The lexical alphabet refers to the phoneme inventory at the lexical level, which is the result of lexical (as opposed to post-lexical) application of phonological rules.4 In Figure 76.1, Mohanan’s lexical alphabet would be generated in the lexicon module by phonological rules that apply at this level. Of particular present relevance, the lexical alphabet in the view of Mohanan and Mohanan (1984) and Mohanan (1986) can contain distinctions that are absent from the underlying inventory (found in the restricted dictionary in Figure 76.1).

3 Sproat (1985: 454) says that Structure Preservation could be interpreted as a restriction on contrasts that are the output of lexical rules, rather than a restriction on underlying representations. However, as Harris (1987: 259) points out, “given its inherent circularity [this interpretation] of Structure Preservation is hardly worthy of serious consideration.”
4 Mohanan and Mohanan (1984: 590) and Mohanan (1986: 12) consider that phonological rules are all part of a single independent phonological module and that they interact with either the lexical or post-lexical level or both, according to their domain specifications. Application in one domain or the other is subject to different restrictions. Notably, lexical application is subject to SP, provided that SP is interpreted in the less stringent manner indicated above.

The Malayalam case, detailed in Mohanan and Mohanan (1984), focuses primarily on contrasts in the system of nasals. The crux of the issue is that, to achieve an elegant analysis of apparently complicated surface distributional restrictions on stops and nasals in Malayalam, one needs to assume that at the underlying level there are three nasals (bilabial, alveolar, retroflex), but that at the lexical level there are seven (bilabial, dental, alveolar, palato-alveolar, retroflex, palatal, velar). At the heart of their analysis is the clearly lexical application of two phonological rules (one that changes post-nasal voiced stops to nasals, and another that changes intervocalic [−continuant] velars to palatals when preceded by front vowels), which produces nasals with places of articulation that are not underlying for that class of sounds.

Still, the most widespread interpretation of SP in Lexical Phonology remains essentially the same: phonological rules are not expected to generate new phonemes or phonemic contrasts at the lexical level, nor are phonemes expected to undergo absolute neutralization at this level (chapter 80: mergers and neutralization). Any operations that introduce features that are not distinctive underlyingly are predicted to be necessarily post-lexical.

For instance, French has a rich vocalic system that includes the mid back lax and tense vowels /ɔ/ and /o/ (e.g. hotte [ɔt] ‘hood’ vs. haute [ot] ‘high’). Although both vowels are frequent, /ɔ/ is prohibited word-finally in French (*/ɔ/#), at the lexical level. If a morphological operation produces a word-final /ɔ/ in the course of a derivation in French, it is systematically turned into [o]. Various morphological operations generate such a result; they include abbreviation (e.g. Caroline [kaʁɔlin] → Caro [kaʁo], condominium [kɔ̃dɔminjɔm] ‘condominium’ → condo [kɔ̃do]), gender inflection (e.g. sotte [sɔt] ‘silly (fem)’ vs. sot [so] ‘silly (masc)’), verbal and adjectival derivation (e.g. roter [ʁɔte] ‘to belch’ vs. rot [ʁo] ‘belch’ (n)), reduplication (e.g. dormir [dɔʁmiʁ] ‘to sleep’ > dodo [dodo] ‘sleep (n, child language)’), etc. The result is always the same: /ɔ/# → [o]#.

Nonetheless, the restriction */ɔ/# in French does not apply at the post-lexical level. For instance, in Quebec French, final /a/s are systematically pronounced either as [ɑ] or [ɔ] (e.g. chocolat [ʃɔkɔlɑ], [ʃɔkɔlɔ] ‘chocolate’; matelas [matlɑ], [matlɔ] ‘mattress’) or something in between, despite the fact that the lexical restriction */ɔ/# also applies in this variety of French. We know that [ɔ] and [ɑ] are variants of /a/ in such cases because derivatives such as chocolaté [ʃɔkɔlate] ‘with chocolate’ and matelassé [matlase] ‘padded’ indicate that the underlying vowel is /a/. Gradient and unstable rules such as /a/ → [ɑ] or [ɔ] in Quebec French are typically post-syntactic rules that are predicted not to occur at the lexical level (Mohanan 1986: 174). The existence of clearly necessary categorical constraints, such as */ɔ/#, alongside the existence of forms that clearly do not obey them is the kind of case that SP is intended to explain: a phonetic process can apply at the post-syntactic level in spite of the fact that its effect contradicts that of a phonotactic constraint at the lexical level.
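As a further illustration of the division of labor just described, here is a toy rendering (ours; the function names and the restriction to word-final position are simplifications): the lexical level categorically repairs word-final /ɔ/, while the post-lexical level freely outputs [ɔ] from /a/.

```python
# Toy split of the French facts above into lexical vs. post-lexical mappings.
import random

def lexical(form: str) -> str:
    """Categorical lexical repair enforcing the constraint */O/#."""
    return form[:-1] + "o" if form.endswith("ɔ") else form

def post_lexical_quebec(form: str) -> str:
    """Gradient Quebec French variation: final /a/ -> [ɑ] or [ɔ]."""
    return form[:-1] + random.choice("ɑɔ") if form.endswith("a") else form

print(lexical("kaʁɔ"))                # 'kaʁo'   : Caro, /O/# -> [o]#
print(post_lexical_quebec("ʃɔkɔla"))  # 'ʃɔkɔlɑ' or 'ʃɔkɔlɔ' : chocolat
```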

Another classic example of a lexical (hence structure-preserving) rule is velar softening in English (and also in French), where /k/ yields [s] before a high front vowel (e.g. electric [ilektrɪk] vs. electricity [ilektrɪsɪti]). As reiterated by Bybee (2008: 112), this rule does not apply between words, is unproductive, lexically restricted, and morphologically conditioned. In contrast to this lexical alternation, /k/ in English, as in French, has a palatal variant [c] before a front vowel, as in key /ki/ → [ci], kiss /kɪs/ → [cɪs], etc. (in French, qui /ki/ ‘who’ → [ci], quitter /kite/ ‘to leave’ → [cite], etc.). The emergence of the palatal variant in both languages is automatic, productive, and neither lexically nor morphologically restricted. SP embodies the claim that such an assimilation rule could not apply at the lexical level because, in both French and English, it would introduce at this level a sound, [c], that is not part of the phonemic inventory of either language (see Kiparsky 1985 for a discussion of many other assimilation and harmony processes that are non-structure-preserving and which he shows are post-lexical).

However, SP has been challenged on a variety of fronts. Its domain of application has been debated vigorously. Kiparsky (1982, 1985) proposes that lexical rule application is subject to SP but post-lexical application is not. However, this neat division of territory between structure-preserving and non-structure-preserving rule application has proven to be doubtful. For example, Kaisse (1990), Rice (1990), and Hyman (1993) agree that SP is not necessarily turned off in the post-lexical component. In other words, post-lexical rules can be subject to SP. On the other hand, Harris (1987: 256) argues, mainly on the basis of a certain type of vowel harmony in southern Bantu languages, that “failure to preserve structure cannot be reliably considered proof of a rule’s postlexical status.” That is, lexical rule application is not necessarily subject to SP.
Harris (1987, 1989, 1990) discusses other allophonic processes that must be lexical, but that are not structure-preserving (see also discussions in Mohanan 1995 and Steriade 1995), but one of the best-known problematic cases is the distribution of [ç] and [x] in modern German. According to Hall (1989), there is no underlying contrast between velar and palatal fricatives in German; the feature [back], though distinctive for vowels in German, is not distinctive for fricatives. The [ç] vs. [x] contrast results from a rule of fricative assimilation that spreads the backness feature from a vowel to a following voiceless high fricative. Crucially, fricative assimilation applies lexically, and it produces a phoneme/phoneme contrast that is not underlying. Hall (1989: 1) concludes that the rule of fricative assimilation is “a blatant counterexample to SP.” We will address this case more thoroughly later.

Macfarland and Pierrehumbert (1991) propose an alternative view, which is intended to salvage the integrity of SP. In their view, non-distinctive features introduced at the lexical level stem from spreading, resulting in doubly linked structures. In the German case just discussed, the [back] feature of the vowel spreads to the feature matrix of the following fricative /X/, which is unspecified for backness. This results in the feature [back] being simultaneously linked to the vowel and the following fricative consonant. By virtue of their double linking, such structures are technically exempt from SP (as well as from a condition they call the marking condition). Though the solution might work for this and some other problematic cases that challenge SP, it does so at the cost of seriously weakening the SP constraint.

Iverson (1993) proposes instead an approach that re-examines the relationship among some of the constellation of properties originally intended to distinguish between lexical and post-lexical rules or rule application, namely SP and the restriction of applying to derived environments. In classical Lexical Phonology, a lexical rule had certain properties, two of which were that it preserved structure and that it applied in derived environments. SP is a consequence of a rule’s lexical status in that view. Iverson turns this relationship on its head (1993: 265): if a given rule preserves structure, then it observes the derived environment constraint. This arguably explains the clustering of properties previously considered to be diagnostics of a rule’s lexical or post-lexical status, but it implies that SP is a property of some, but not necessarily all, lexical rule applications. Indeed, Iverson concludes (1993: 270) that “. . . structure-building applications of lexical rules need not (though may) be structure-preserving.”

As the previous discussion suggests, SP, as formulated by Kiparsky (1982, 1985), was inextricably linked to the overall architecture and other principles (e.g. the Strict Cycle Condition and the Derived Environment Constraint) and theoretical tools (e.g. underspecification) of Lexical Phonology. To reiterate, SP was part of a set of properties that distinguished lexical from post-lexical rule applications. Like many other notions of Lexical Phonology, SP was found to be problematic for a variety of reasons. For instance, even if we accept the view of Mohanan and Mohanan (1984: 589) that phonological rules whose domain of application is lexical yield the “lexical alphabet” – to be distinguished from the underlying one, found in the restricted dictionary in Figure 76.1 – it seems clear, as indicated by the work of a number of phonologists working on several different languages, that we cannot uphold the position that lexical rule application is necessarily structure-preserving while post-lexical rule application is not. For example, as mentioned previously, Iverson (1993) argues that lexical rules are not necessarily structure-preserving, while Rice (1990) argues that post-lexical rules may be. However, this is only one problem facing SP. Another is that SP has resisted formulation, interpretation, or application in any way that can be universally applied to yield felicitous results.
Different attempts, including reformulation (e.g. Mohanan 1986: 174; Borowsky 1989: 148), reinterpretation (e.g. Macfarland and Pierrehumbert 1991: 179; Iverson 1993: 265), or restriction of its application to some, but not all, lexical levels – often on a language-specific basis (e.g. Borowsky 1986, 1989) – yield no universally satisfactory outcome. Moreover, such attempts often have extremely damaging consequences for the theory and the SP principle. For example, the effect of Mohanan and Mohanan’s (1984) analysis of Malayalam and their distinction between an underlying and a lexical alphabet is to allow rules to introduce contrasts that are not underlying. This is clearly at odds with Kiparsky’s (1982) view of the role that SP plays. A closely related problem is that, under no formulation or interpretation, in any language through which the principle has been test-driven to any extent, has SP been found to be exceptionless (see the discussion of Bybee 2008 below).

What has the outcome of the challenges to SP been? Sproat (1985), who rejects Lexical Phonology’s approach to word formation altogether, considers SP completely dispensable, along with the rest of the theory. However, few phonologists would go this far. Although some, such as Mohanan (1989: 609) and Hall (1992: 233), have given up on reformulating SP, or tweaking the conditions of its application, and concluded that it is not a linguistic universal, they still consider it a cross-linguistic tendency. As Steriade (2007: 146) asserts, “. . . Structure Preservation cannot be abandoned altogether. . . .”

Bybee (2008) has picked up the idea of SP as a cross-linguistic tendency rather than a true synchronic generalization or principle of language. She proposes to interpret the constraint as a result of “paths of change,” saying “Three well-documented universal paths of change occur in parallel and lead to the synchronic situation that is described as Structure Preservation” (Bybee 2008: 114). She also says that it is some sort of restatement of the older structuralist principle of “separation of levels,” where phones are distributed by phonetic criteria and phonemes by lexical and morphological ones.5 In Bybee’s view, because SP is an emergent property of recurring mechanisms of language change (that feed and complement each other), counterexamples to this constraint are unavoidable and expected. The thinking behind this is that since the transition from phonetic to phonemic status is gradual, there will always be linguistic sounds that are introduced in a language lexicon with the initial status of phone, either native or foreign, which will later acquire the status of phoneme. However, while some instances make the transition from variant to phoneme, other instances do not, or at least not at the same time.

More concretely, Bybee explains that purely phonetic sounds can gradually be disassociated from their phonetic conditioning and become associated with particular lexical or morphological conditions. An example discussed extensively in the structuralist and generativist literature, and already pointed out in this section, is the case of German [x] and [ç] (see Bybee 2008: 112 for a synopsis and Hall 1992 for more detailed discussion). In brief, [ç] and [x] were originally variants, with [ç] occurring after a front vowel in German. When the German diminutive suffix -ichen [içən] lost its conditioning front vowel and the shortened suffix -chen [çən] started to appear after a back vowel, [x] and [ç] (arguably) became distinctive (e.g. Kuhchen [kuːçən] ‘little cow’ vs. Kuchen [kuːxən] ‘cake’).6 The distinctiveness of /ç/ vs. /x/ was reinforced by the fact that /ç/ could also occur at the beginning of loanwords in some German dialects, where the initial phonetic conditioning (the preceding front vowel) is obviously absent.7

To take another example, this time from English, the non-anterior voiced fricative /ʒ/, which initially occurred in the Early Modern English period as a result of stress-conditioned palatalization (/zj/ > [ʒ]; e.g. pleasure [pleʒər] from French plaisir), has begun to be allowed in word-final and even word-initial position under the influence of more recent French borrowings such as rouge [ruʒ], beige [beʒ], garage [gərɑʒ], massage [məsɑʒ], camouflage [kæməflɑʒ], luge [luʒ], genre [ʒɑ̃r], joie de vivre [ʒwɑdeviv], etc. (see Millward 1996: 252–253). Similar to the German situation, the appearance of /ʒ/ in these environments cannot be due to phonetic conditioning.

In short, the picture that emerges is that what originated as phonetic variants in the German and English examples might have become phonemes (albeit ones with sometimes restricted distribution) due to, among other things, the pressure of loanwords. This is one path of change; concurrent with that are two others. The second is that small phonetic changes tend to become larger ones over time, leading to a greater phonetic distance between the original sound and its variant. If we take the [ç] ~ [x] alternation, the variant [ç], which is unstable and phonetically close to /x/, is becoming more stable and more clearly distinct phonetically from /x/ over time (see Bybee 2008: 113). The third related path of change discussed by Bybee is loss of productivity as phonetic processes become lexicalized.

To sum up, variants come to occur at the lexical/morphological level because of the diachronic tendency of phonetic changes to become linked to particular lexical items or morphological processes, creating a shift from the purely phonetic to the lexical level. As a result of being linked to particular morphological or lexical conditions, the phonetic conditions that originally give rise to the variant can lose their automatic productive power. Once the link between a sound and its (phonetic) conditioning environment is broken, the sound is “liberated,” as it were, and free to enjoy wider phonotactic/syllabic distribution, giving it phonemic as opposed to purely phonetic status (see also Harris 1990: 93). SP has exceptions because such change does not affect the entire vocabulary at once, but rather proceeds via normal processes of lexical diffusion (see e.g. Phillips 2006 on lexical diffusion and its links to various sound-based phenomena).

Is that the end of the story? Kiparsky (2008) clearly disagrees with the diachronic view. He summarizes the situation as follows:

    An increasingly popular research program seeks the causes of typological generalizations in recurrent historical processes, or even claims that all principled explanations for universals reside in diachrony. Structural and generative grammar has more commonly pursued the reverse direction of explanation, which grounds the way language changes in its structural properties. (Kiparsky 2008: 52)

5 Except that SP avoids the duplication problem that classical phonemics (structuralists) faced. Indeed, in classical phonemics, a generalization had to be stated twice, once at the level of phonemes and once again at the level of phones, because of the separation of levels.
6 However, Macfarland and Pierrehumbert (1991: 171) do not recognize this as a minimal pair because “Kuchen is a monomorphemic noun [as opposed to Kuhchen ‘little cow’].” They maintain that there are no true minimal pairs distinguishable only by [ç] vs. [x] in German.
7 According to Marc van Oostendorp (personal communication), there might remain some sort of phonetic conditioning in loanwords, however, since [ç] can occur only before a front vowel word-initially.

Kiparsky points out (2008: 27) that, once spelled out, historical explanations like those proposed by Neogrammarians or, more recently, those working in the diachronic view (e.g. Bybee 2008) often turn out to appeal implicitly to tendencies that are themselves in need of explanation. In other words, there must be principles governing the nature and extent of change, which are ultimately responsible for the tendencies that are observed.

He proposes criteria to distinguish true universals, which constrain language change, from typological generalizations, which result from language change, and adds: “The issue goes well beyond the simple question how cross-linguistic generalizations originate. It is about the nature of those generalizations themselves” (Kiparsky 2008: 27). He also raises the possibility (2008: 25) that functional explanations for language change might have become biologized within UG itself, through language use, thereby constraining change also via acquisition.

Though the argument we present below does suggest that a guiding principle of grammars is the pressure to preserve structure, which cannot be simply a side-effect of sound change over time, our goal in this chapter is not to argue whether SP is a basic principle of UG, as opposed to an emergent property of converging processes of language change. Rather, we hope to show, using primarily the phonological treatment of loanwords, that distinctive information is, indeed, highly resistant to destruction or alteration at the lexical level, not only intralinguistically, but interlinguistically too. Whatever problems SP has faced, or continues to face, there is no doubt that distinctive phonological information is resistant to change, so some notion of Structure Preservation is still needed, even under current constraint-based approaches, both derivational and non-derivational. The essence of our argument is that if there were no notion of Structure Preservation synchronically active in grammars, we could not explain why a borrower works so hard to preserve distinctive information from a foreign system (L2) in his/her own language (L1). Why should he/she care in the first place?

3 A broader perspective of Structure Preservation

3.1 Structure Preservation in loanword adaptation

Languages . . . which have undergone striking changes in their lexicons through the additions of thousands of borrowed words can no doubt be expected to trouble phonologists for some time. (Kaisse 1990: 141)

Because borrowing normally takes words conforming to the sound patterns and restrictions of one language (the source language, L2) and makes them conform to those of another (the borrowing language, L1), loanwords routinely present the need to modify or destroy phonological information (cf. also chapter 95: loanword phonology). A priori, borrowing includes three phenomena that seem to challenge the notion of Structure Preservation. These are the modification of sounds, the deletion of sounds, and, apparently paradoxically, the importation of sounds at the lexical level. Of these three phenomena, deletion and importation are, on first impression, the most problematic. However, as we will see, phoneme deletion seldom occurs and importation is respectful of L2’s phonological integrity. We believe this, along with other facts to be discussed, makes loanwords especially relevant to the study of Structure Preservation, provided one accepts an enlargement of its scope. We will henceforth use the full form, “Structure Preservation,” to refer to this larger conception of the constraint. SP will refer to the notion of Structure Preservation as defined by, and linked to, Lexical Phonology.


If, instead of seeing the Structure Preservation constraint as just a ban on the introduction of non-phonemic sounds or distinctions at the lexical level, we interpret it as a form of pressure to preserve any contrastive information (features, phonemes, phonemic patterns, unpredictable syllabic information, etc.), then observing the way loanwords are adapted becomes extremely relevant. This is what we propose to discuss here. As we will show in the next sections, especially in §3.2, where the statistics from a large loanword database, that of the CoPho Project,8 are presented, L2 distinctive information is very seldom squarely destroyed in L1. Instead, L2 distinctive information, when it is not imported, is normally phonologically “adapted” in the borrowing language, with as few adjustments as possible, i.e. minimally. If there were no constraint on synchronic grammars to preserve structure, then there should be no reason for phoneme deletion to be so scarce in loanwords and for adaptations to be minimal.

The notion of minimal adaptation is closely tied to the generally agreed idea that in loanword adaptation, languages normally seek to replace unacceptable foreign sounds with those that are “closest.” There is disagreement over how closeness is defined: a matter of some contention in the field of loanword adaptation is whether closeness is determined primarily on phonological grounds, as we maintain, or whether it is determined mainly phonetically (chapter 98: speech perception and phonology). Our present purpose is not to debate this issue, but rather to present statistics from large corpora of loanwords in several languages that indicate that L2 distinctive information is routinely maintained to the maximum allowed by the L1 phonological constraints. In the parlance of the Theory of Constraints and Repair Strategies (TC),9 this is attributable mainly to two principles, the Preservation Principle and the Minimality Principle, which conspire, we believe, to produce this result. As we will endeavor to show, both principles could be instantiations of the pressure to preserve structure in the larger sense that we propose here.

3.2 Adaptation, deletion, and Structure Preservation

The data used to illustrate the effects of preservation in loanwords are taken from the CoPho Project’s loanword database, which includes general corpora of French borrowings in Canadian English, Moroccan Arabic, Kinyarwanda, and Lingala, and English borrowings in Calabrese Italian, Japanese, Mexican Spanish, Quebec French, Parisian French, etc. The main findings yielded by the analysis of the CoPho database are summarized in Table 76.1.

The first relevant point to note about the figures in Table 76.1 is that L2 distinctive phonological information is systematically adapted in L1 (34,070/50,092 cases, i.e. 68 percent), as opposed to being deleted (3.3 percent of cases). Phonological adaptation, which is simply called “adaptation” here, is the modification/replacement (i.e. repair) of an L2 sound or structure to comply with one or more L1 phonological constraints. Adaptation is linked to Structure Preservation insofar as it is geared to ensuring that the L1 contrastive system remains unchanged (see LaCharité and Paradis 2005, Paradis and Tremblay 2009, and Paradis and LaCharité, forthcoming, for discussion), and is the norm.

8 CoPho stands for constraints (Co) in phonology (Pho). The project is supervised by Carole Paradis at Laval University, Quebec City.
9 TC (previously TCRS) was originally proposed by Paradis (1988).

Table 76.1  The CoPho Project loanword database of phonemic and supraphonemic malformations (updated August 2009)

| Corpus | Loans | Forms | Malformations (total) | Phonological cases (total)^a | Adaptations^b | Importations^b | Deletions^b | Non-phonological cases^a |
|---|---|---|---|---|---|---|---|---|
| English borrowings in: | | | | | | | | |
| Old Quebec French | 485 | 597 | 489 | 398 (81.4%) | 298 (74.9%) | 78 (19.6%) | 22 (5.5%) | 91 (18.6%) |
| Parisian French | 901 | 2,576 | 3,153 | 2,749 (87.2%) | 1,570 (57.1%) | 987 (35.9%) | 192 (7%) | 404 (12.8%) |
| Quebec City French | 949 | 2,416 | 2,434 | 2,183 (89.7%) | 1,479 (67.7%) | 602 (27.6%) | 102 (4.7%) | 251 (10.3%) |
| Montreal French | 949 | 2,248 | 2,285 | 2,099 (91.9%) | 1,262 (60.1%) | 747 (35.6%) | 90 (4.3%) | 186 (8.1%) |
| Mexican Spanish I | 1,045 | 1,514 | 3,137 | 3,008 (95.9%) | 1,583 (52.6%) | 1,317 (43.8%) | 108 (3.6%) | 129 (4.1%) |
| Mexican Spanish II | 1,034 | 2,342 | 5,645 | 4,490 (79.5%) | 2,836 (63.2%) | 1,569 (34.9%) | 85 (1.9%) | 1,155 (20.5%) |
| Japanese | 1,167 | 2,991 | 7,760 | 7,373 (95%) | 6,778 (91.9%) | 492 (6.7%) | 103 (1.4%) | 387 (5%) |
| Calabrese Italian | 2,161 | 5,191 | 14,740 | 14,438 (98%) | 6,182 (42.8%) | 7,821 (54.2%) | 435 (3%) | 302 (2%) |
| French borrowings in: | | | | | | | | |
| Canadian English | 674 | 1,667 | 1,034 | 748 (72.3%) | 555 (74.2%) | 137 (18.3%) | 56 (7.5%) | 286 (27.7%) |
| Moroccan Arabic | 1,127 | 2,685 | 4,275 | 3,979 (93.1%) | 3,104 (78%) | 568 (14.3%) | 307 (7.7%) | 296 (6.9%) |
| Kinyarwanda | 756 | 2,130 | 4,639 | 4,207 (90.7%) | 4,119 (97.9%) | 26 (0.6%) | 62 (1.5%) | 432 (9.3%) |
| Lingala | 672 | 1,917 | 3,734 | 3,408 (91.3%) | 3,396 (99.6%) | 2 (0.1%) | 10 (0.3%) | 326 (8.7%) |
| Fula | 532 | 1,081 | 1,118 | 1,012 (90.5%) | 908 (89.7%) | 45 (4.5%) | 59 (5.8%) | 106 (9.5%) |
| Total for all corpora | 12,452 | 29,355 | 54,443 | 50,092 (92%) | 34,070 (68%) | 14,391 (28.7%) | 1,631 (3.3%) | 4,351 (8%) |

^a Percentages of phonological and non-phonological cases are calculated on the total number of malformations.
^b Percentages of adaptations, non-adaptations, and deletions are calculated on the total number of phonological cases.
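The headline figures in the totals row can be re-derived from the raw counts; the following is simply our arithmetic check of the published percentages.

```python
# Recompute the totals-row percentages of Table 76.1 from its counts.
adaptations, importations, deletions = 34_070, 14_391, 1_631
phonological = adaptations + importations + deletions   # 50,092
malformations = phonological + 4_351                    # 54,443
assert (phonological, malformations) == (50_092, 54_443)

print(round(100 * adaptations / phonological))      # 68   (adapted)
print(round(100 * importations / phonological, 1))  # 28.7 (imported)
print(round(100 * deletions / phonological, 1))     # 3.3  (deleted)
print(round(100 * phonological / malformations))    # 92   (phonological)
```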


Table 76.1 also shows that if foreign sounds are not adapted, they are normally imported, i.e. left unadapted, as opposed to being deleted. Importations/non-adaptations account for 14,391/50,092 (28.7 percent) of the phonological cases (importation is discussed in §3.5). Deletions, which might be seen as prima facie counterexamples to the idea that structure is preserved in loanword adaptation, are rare in the database overall. Deletions that have been classed as phonological are those that can be explained by the phonological principles of the theory; the others have been classed as non-phonological precisely because they cannot be predicted on phonological grounds. Among the phonological cases, the deletion rate is well below 10 percent in any individual corpus and is only 3.3 percent in the corpora overall (1,631/50,092 phonological cases). Non-phonological cases represent only 4,351/54,443 cases (8 percent). Therefore, whether we consider only phonological cases or include non-phonological cases as well, deletion of a phoneme is uncommon in the CoPho loanword database.

Moreover, not all non-phonological cases involve deletions, and those that do may not always be best explained as such. As shown in Paradis and LaCharité (2008), who address the treatment of non-phonological cases in three corpora of old and recent Quebec French, phoneme deletion is uncommon even in non-phonological cases. Very often, it results from analogy, real or false (e.g. QF [l>ps>J] instead of [l>ps>Ik] for English lip-sync; the absence of /k/ in the QF borrowing is not a case of phoneme deletion per se, but rather a case of false analogy to the English verb to sing). Lexical truncation, such as QF tan for English (sun) tan, is also sometimes responsible for the disappearance of L2 phonemes (here sun); cf. also lexical truncation in QF parking from English parking lot and French pull from English pullover. In our view, these lexical truncations should not be seen as phoneme deletion, since what is deleted is not a phoneme but a lexical item. Paradis and LaCharité (2008, forthcoming) suggest that these non-phonological processes, along with hypercorrection, phonetic approximations, etc., are responsible for many so-called “divergent repairs” and “unnecessary repairs” (see chapter 95: loanword phonology). We attribute the rarity of deletion to the Preservation Principle in (2) (see e.g. Paradis et al. 1994; Paradis and LaCharité 1997).

(2) Preservation Principle
    Phonemic information is maximally preserved, within the limits of constraint conflicts.

The Preservation Principle is a TC mechanism first proposed by Paradis et al. (1994) and used extensively to analyze the CoPho loanword database (see e.g. Paradis and LaCharité 1997).10 However, we are not the only ones working in loanword adaptation to have seen the need for such constraints; for example, Calabrese (2005) invokes comparable mechanisms, the Principles of Economy and Last Resort.11

10 TC is not restricted to loanwords; we do not maintain that the processes observed in loanwords are independent of general phonology. Nonetheless, it might be that the Preservation Principle is more evident in loanword adaptation than in native phonology, where the influence of morphology and residual historical processes play a greater role.


As previously stated, deletions are prima facie violations of (2). However, not only are L2 phoneme deletions rare in the CoPho database, but most are also highly predictable. As we will try to show, deletion is, for the most part, phonologically predictable, and most phonologically predictable deletion can be reconciled with the notion of Structure Preservation. There are two main scenarios in which phoneme deletion occurs.

The first scenario involves deletion of gutturals – sounds characterized by a Pharyngeal node – by languages that do not use this primitive in the representation of the sounds of their native inventories. A language that does not employ this primitive cannot adapt a guttural (see Paradis and LaCharité 2001 for a detailed discussion of the treatment of gutturals in loanwords). For instance, neither French nor Italian has a guttural in its phonemic inventory,12 so neither language is equipped to adapt a phonemic guttural such as English laryngeal /h/. Instead, they delete it (e.g. English hamburger [hæmbHPgHP] yields Quebec French (QF) [_ambHPgHP] and Italian [_amburgHr]). The systematicity of guttural deletion is indicated by the figures of the three contemporary QF corpora (see Paradis and LaCharité 2001: 264). There are, overall, 173 cases of /h/ in English loanwords in QF; deletion applies in 163 cases (94.2 percent). The remaining ten cases are importations in the Montreal French corpus. The figures for the Calabrese Italian corpus reinforce this point and further illustrate the fact that guttural deletion accounts for the preponderance of deletions in the CoPho database. In the Calabrese Italian corpus of English loanwords, there are 296 cases of /h/ in the English input. In only 23/296 cases (7.8 percent) is English /h/ imported; the rest of the time it is deleted, meaning that there are 273 /h/-deletions in the Calabrese Italian corpus. Since there are only 278 deletion cases in that corpus overall, /h/-deletion accounts for 98.2 percent of them (273/278). The vast majority of the 3.3 percent of deletions in the CoPho database concern guttural consonants in languages that do not exploit the Pharyngeal node. Such deletions would not be a violation of Structure Preservation from the point of view of L1, because the borrowing languages do not have a native guttural contrast to preserve and are not phonologically equipped to preserve that of L2, as argued in Paradis and LaCharité (2001).

The second general source of phoneme deletion involves the loss of a coda /r/ in borrowings by languages that do not allow (rhotic) codas (chapter 30: the representation of rhotics).13 This is the case in Japanese, which allows only N or the first part of a geminate in codas (see Itô 1986 for details of the coda condition in Japanese). Coda consonants in English borrowings in Japanese are systematically adapted by vowel insertion; this has the effect of moving the problematic coda consonant to the onset of the following new syllable (e.g. English optimism [AptHm>zHm] > Japanese [opQtimizQmQ]).

11 Calabrese’s (2005: 20) principles of Economy (“Use the minimal amount of maximally relevant units”) and Last Resort (“Use a maximally relevant operation minimally”) are highly reminiscent of TC’s Preservation and Minimality Principles, and serve the same structure-preserving function.
12 French [ö] and [ú] are just two of the numerous variants of the coronal /r/ in French; in contrast with /ú/ in Arabic, these sounds are not phonemic in French.
13 Another predictable, but statistically marginal, source of deletion results from violation of the Threshold Principle (Paradis and LaCharité 1997). This principle offers an explanation for several “atypical” deletion cases, including vowel-initial deletion in French polysyllabic loans introduced into Moroccan Arabic (see Paradis and Béland 2002 for an in-depth discussion of this case and, more generally, Paradis and LaCharité, forthcoming).


However, when the coda is a rhotic, instead of vowel insertion we find merger of the rhotic with the preceding vowel (804/804 cases; e.g. English order [oPdHP] and corner [koPnHP] > Japanese [o(da(] and [ko(na(]).14 Deletion of /r/ also causes vowel lengthening in Thai, where /r/ is not permitted in the coda. For instance, English care [keP], carbon [kAPbHn], cartoon [kAPtun], party [pAPti], poker [pokHP], and star [stAP] are pronounced [khe(], [kha(bDn], [ga(tu(n],15 [pa(ti], [pokH(], and [sHta(], respectively.16

Deletion of /r/ is not limited to loanwords that come from English; it also occurs in loanwords from French. For instance, French Argentine [aöÚ√tin], arrière [aöjeö], beurre [bœö], carte [kaöt], carton [kaöt6], orchidée [Dökide], and radar [öadaö] yield Khmer [a(z√tqn],17 [a(ja(], [+H(], [ka(t], [ka(t6], [o(ki(ee(], and [ra(ea(], respectively. In some cases, the French rhotic is replaced with /a/ or a glottal stop, as in [o(pa(lua] and [pe?mi(] from French haut-parleur [oparlœö] and permis [peömi]. As can be seen, the deletion of /r/, which is prohibited in coda position in Khmer, causes vowel lengthening even in closed syllables, as in [ka(t] from French carte.18 This latter set of examples, from Khmer, shows that apparent /r/-deletion is not influenced by the pronunciation of /r/ in the donor language, since French and English have very different rhotics.

Vowel lengthening suggests that /r/ might not really be deleted but rather fused with the preceding vowel, when it is not replaced with /a/ or a glottal stop. This is why we have not included these cases in the deletion column of the statistics in Table 76.1. If we are correct in viewing vowel lengthening as /r/-adaptation rather than /r/-deletion, it does not contradict the idea of Structure Preservation invoked here. However, even if we did count these as cases of deletion, the deletion rate would still remain very low (2,435/50,092 – 4.9 percent instead of 3.3 percent). In Paradis and LaCharité (forthcoming), we attribute /r/-deletion to the fact that /r/ is vowel-like and can easily be fused with the preceding vowel, whether this fusion results in vowel lengthening or not. We envision that, perhaps, as in the case of /h/-deletion in English loanwords in French, Italian, Portuguese, etc., the answer lies in the phonological structure of /r/.

Rhotics with a variety of phonetic realizations are prone to deletion cross-linguistically, and they exhibit several phonological behaviors that are not yet well understood. For example, in many different languages, whatever the phonetic realization of the rhotic, a coda /r/ is deleted, or merged with, transformed into, or replaced by a vowel. To cite just a few of many possible examples: in German, where /r/ is phonetically uvular, coda /r/ can lower to something akin to a low vowel, so that Tür ‘door’ is realized as [ty(ô] in the singular (Wiese 1996) but Türen [tyrHn] in the plural, that is, with the full rhotic, where it is in onset position. In Quebec French, coda /r/, which can be realized as a uvular or a coronal, is often deleted word-finally in informal speech (e.g. bonjour /b6Úur/ ‘good day’ → [b6Úu(()]). During the Middle Ages, /r/-deletion prevailed for so long in French that /r/ almost disappeared as a coda phoneme (Zink 1986).

14 Tones are omitted here, because they are irrelevant.
15 [g] is a variant of unaspirated /k/ in Thai.
16 Data gathered during fieldwork in Thailand in February and March 2010.
17 Even though French is no longer spoken by young people in Cambodia and Laos, Lao and Khmer speakers, both young and old, almost always import the French nasal vowels in French loanwords, which are very numerous in both languages.
18 Data gathered during fieldwork in Cambodia in March 2010.
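The division of labor just described for Japanese – epenthesis for ordinary coda consonants, fusion with the preceding vowel for a coda rhotic – can be stated procedurally. The following is a minimal sketch for illustration only: segments are plain strings, the epenthetic vowel is simplified to a single quality, and the function is a schematic summary, not a formal TC derivation.

```python
# Schematic sketch of the two Japanese coda repairs: neither repair deletes
# the offending consonant outright. Names and representations are placeholders.
EPENTHETIC = "u"  # simplification: the quality of the epenthetic vowel varies

def repair_ill_formed_coda(vowel: str, coda: str) -> str:
    """Repair a vowel + coda sequence whose coda is ill-formed in Japanese."""
    if coda == "r":
        # Fusion: the rhotic merges with the preceding vowel, surfacing as
        # vowel length (the order/corner pattern, 804/804 cases).
        return vowel + ":"
    # Epenthesis: the coda consonant becomes the onset of a new syllable
    # (the optimism pattern, where each coda obstruent gains a vowel).
    return vowel + coda + EPENTHETIC

print(repair_ill_formed_coda("o", "r"))  # 'o:'  -- order-type fusion
print(repair_ill_formed_coda("i", "z"))  # 'izu' -- optimism-type epenthesis
```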


The deletion of /r/ also applies in many Spanish dialects (Moreno de Alba 1988; Rojas 1988; e.g. mar ‘sea’ → [ma] in Caribbean Spanish and cliquear, from English to click, which is realized as [klikea] in Spanglish). Interestingly, English short is realized as [Œo] in Spanglish when it is singular but [Œores] in the plural, i.e. with the full rhotic when it is no longer in coda position, thus indicating that the rhotic is present in the L1 lexical representation of the borrowing. Examples of this type are common cross-linguistically. It could be that a post-vocalic rhotic is actually part of a diphthong, as proposed by several phonologists (e.g. see Nikiema and Bhatt 2003 for their analysis of post-vocalic /r/-deletion in Haitian Creole).

Nonetheless, we know there are some languages where coda /r/ is ill-formed but its deletion does not yield vowel lengthening. Although the general CoPho loanword database does not include such languages, we would view these cases as true deletions and, as such, challenges to the idea that structure is preserved in loanword adaptation. Our targeted corpus on aspiration in Mandarin Chinese (MC) (see also Hall-Lew 2002 on this) shows that this is what happens in English loans in Mandarin Chinese.19 The English coda rhotic, which is disallowed in Mandarin Chinese, is dropped without yielding systematic vowel lengthening (e.g. English laser [lezHP], cigar [s>gAP], cartoon [kAPtun], and sardine [sAPdin], which yield MC [lej ÏH_], [sjH tsja_], [kha t huI], and [Ïa_ tiI], respectively). The coda rhotic is not adapted as /l/, as it is in onset position (e.g. English radar [PedAP] and trust [tPZst] > MC [lHj ta] and [t h(w)H las]). The net result with respect to a discussion of Structure Preservation is that /r/-deletion/fusion must be considered very common across languages, in both native and borrowed words, and that there seems to be a phonological explanation for many such cases that avoids conflict with Structure Preservation.

However, even when /r/-deletion does not lead to lengthening, it may not be a problem for Structure Preservation from the point of view of loanword adaptation, because in many cases such deletions stem from a native process. For example, it is common for rhotics to be deleted when they are included in a complex onset (chapter 55: onsets). For instance, Quebec French trois [töwa] ‘three’, Lacroix [laköwa] (a proper name), and fruit [fö8i] ‘fruit’ are often pronounced [twa], [lakwa], and [f8i] in casual speech. In Thai, /r/ in complex onsets is pronounced only in very formal speech (on television, for instance). In less formal/casual speech it may be replaced with /l/, but most of the time the liquid disappears altogether. For instance, the name of the famous shopping center of Bangkok, Maboonkrong, is pronounced with the rhotic only in very formal speech. Otherwise, it is pronounced [ma(bu(IkhDI], with no rhotic; this is what taxi drivers say, with [ma(bu(IkhlDI], a more prestigious pronunciation with the lateral, being used much less frequently. The same happens with the Thai city Trat, which is systematically pronounced [tat]; cf. also Thai [phro(m] ‘carpet’, which is pronounced [pho(m] except in very formal speech (this information on /r/-deletion, as well as on /l/-deletion, is readily available in any grammar of Thai). Native /r/-deletion in complex onsets is frequent in Asian languages, so it is not surprising to see /r/-deletion apply to their loanwords (e.g. French crème [köem] ‘cream’, which yields [kem] in Vietnamese, according to chapter 95: loanword phonology; cf. also English credit card [kPed>t kAPd] and brake [bPek], which are pronounced [kh(l)e(dit ga(t] and [b(l)e>k] in Thai, respectively, and French programme [pöDgöam] ‘program’ and groupe [göup] ‘group’, which yield Lao [pog(l)am] and [g(l)Áp], respectively).

19 A targeted corpus, as opposed to a general one, is a (normally smaller) corpus of loanwords collected to test a particular hypothesis (e.g. aspiration in MC, Hindi, Thai, and Lao, palatalization in Russian, etc.). Therefore, all borrowings in a targeted corpus contain a particular sound or contrast of interest to the hypothesis being tested.


Rhotic deletion in cases such as English loanwords in Thai should not be interpreted as a repair of an ill-formed L2 structure or as an “unnecessary loss,” since it stems from a very productive native process related to speech register/dialectal differences.

To sum up this discussion, the vast majority of deletions are phonologically predictable and thus, despite initial impressions, pose little threat to the idea that input structure is preserved in loanword adaptation. However, the real story is that, together, predictable and unpredictable deletion affect only a small percentage (less than 5 percent) of input phonemes in the CoPho loanword database. We conclude from this that the loss of L2 phonemes is strongly avoided in loanword adaptation.20 In the case of ill-formed sounds, feature adjustments apply systematically; in the case of ill-formed clusters that are perceived as unsyllabifiable by L1, phoneme insertion is the norm. For instance, French drapeau [döapo] yields [darapo] in Fula, not *[dapo] or *[rapo] (see also French force [fDös] > Fula [fDrsD], not *[fDs] or *[fDr], and French ministre [ministö] > Kinyarwanda [minisitiri], not *[mini]). This pattern consistently predominates in the general corpora of the CoPho loanword database, as well as in more recently assembled targeted corpora such as the Kashmiri one. When an English borrowing contains a cluster that is disallowed in Kashmiri, the sequence undergoes vowel insertion, not consonant deletion, despite the fact that consonant deletion would solve the problem equally well. For instance, English silk [s>lk], snow [sno], and flag [flæg] result in Kashmiri [silqk], [sono], and [fHlag], and not in *[sik], *[so/no], or *[fag], for example.21

Why does L1 resort to phoneme insertion instead of phoneme deletion when it has to handle a problematic L2 cluster? We attribute this to the Preservation Principle in (2), which seeks to safeguard contrastive information, and which can be seen as a constraint within the Theory of Constraints and Repair Strategies to preserve structure. However, TC is a derivational constraint-based theory. One might immediately wonder whether Optimality Theory (OT), a non-derivational (non-serial) filter-based theory, can dispense with the need for SP. The crux of the issue is that standard OT posits that, underlyingly, anything goes (cf. Richness of the Base, following Prince and Smolensky 1993). The patterns that emerge from the lexicon are the result of universal surface filters, which are ranked on a language-specific basis. In short, the basic architecture and tenets of classical OT, with constraints acting as filters, suggest that there should be no particular underlying phoneme or structure inventory to protect.

Itô and Mester (2001: 265) examine the possibility that some of the devices and principles of Lexical Phonology might have outlived their usefulness and have no place in a putatively non-serial framework such as OT. Is SP one such device? Itô and Mester argue for recognizing, within OT, the need for stratal organization, with lexical outputs being structure-preserving, which they define (2001: 289) as “limitation to a restricted inventory of elements and structures. . . .”

20 Phoneme deletion outside the context of malformations, that is, when the phoneme and the structure that contains it are both permissible in L1, is also very rare (see Paradis and Prunet 2000). As shown by Paradis and LaCharité (2008) and Paradis and LaCharité (forthcoming), these rare cases result mostly from analogy, morphological truncation, phonetic approximation, and hypercorrection.
21 Data gathered during fieldwork in North India in April 2009.


Bermúdez-Otero and McMahon (2006) work within the framework of Stratal OT and maintain, contra Itô and Mester, that “. . . the issue of Structure Preservation does not arise in Stratal OT . . .” (2006: 396). However, even if one agrees with that, and rejects Itô and Mester’s point of view, OT analyses still rely on some notion of preservation, in the form of faithfulness constraints, which occupy a high-ranked – though not necessarily undominated – place in most OT analyses (chapter 63: markedness and faithfulness constraints). All this clearly suggests that OT requires some notion of contrast preservation, an issue that some OT analyses have confronted directly (e.g. Krämer 2006). Our goal here is to point out that no current constraint-based or filter-based theory completely does away with the need for some notion closely related to Structure Preservation. Indeed, it seems likely that all phonologists will need to reconsider the idea of Structure Preservation and to determine its mandate in the context of their particular theories. The remainder of the discussion is framed in the TC model, because Structure Preservation has been addressed most directly in this framework, but we assume that all phonological theories must confront the same observations concerning what appears to be preserved, as evidenced in loanword adaptation. In other words, our focus will be on the facts, not on the theory used to handle them.

The preceding discussion has shown that L2 phonemes are adapted rather than deleted, that the repair of illicit clusters via epenthesis is preferred over their repair via deletion, and that, when deletion does occur, it is largely predictable on phonological grounds. The study of loanword adaptation reveals a further implication of Structure Preservation: the violation of L1 constraints is generally resolved with as little loss of phonological information as possible. Thus, an ill-formed L2 phoneme is not deleted if a feature can be added or deleted to solve the problem; an ill-formed syllabic structure is not deleted if the insertion of a phoneme or, in the case of a constraint conflict, the loss of a single phoneme will suffice; etc.

Another key observation in loanword adaptation, for which any theory must account, is the limited range of adaptations that predominate cross-linguistically; we see this as another side-effect of Structure Preservation. This issue is addressed in LaCharité and Paradis (2005). For example, English /æ/ is systematically adapted as /a/, not as /i/, /e/, /o/, or /u/, in the CoPho database.22 In Mexican Spanish, adaptation of */æ/ to /a/ occurs in 354/360 cases (98.33 percent); in French it occurs in 1,405/1,405 cases (100 percent), in Japanese in 536/536 cases (100 percent), and in Calabrese Italian in 1,121/1,214 cases (92.3 percent). As another example, English />/ is predictably adapted as /i/. In Mexican Spanish, */>/ adapts to /i/ in 387/388 adaptation cases (99.7 percent); in Japanese, adaptation of */>/ to /i (i()/ occurs in 631/649 adaptation cases (97.2 percent); in Calabrese Italian, this adaptation occurs in 1,588/1,588 adaptations (100 percent). Even when more than one adaptation for a given sound is attested, either cross-linguistically or within a single language, the range of results is small and predictable.

22 In Quebec French, there are cases where English /æ/ surfaces as [e] in loans such as band, gang, and pantry. We believe that this is because these words are often pronounced with the variant [e] in English (e.g. [bend], [geI], and [pentPi]). In these cases, we say that the English variant is imported. It sounds more “anglophone,” i.e. more “in,” to pronounce these words with [e], although they can be pronounced with [a] too.


For example, English /v/ is adapted as /b/, /f/, or /w/ cross-linguistically, and English /Z/ is adapted as either /a/ or /o(D)/. Why should a borrowing language not simply replace illicit sounds arbitrarily, or with default/high-frequency sounds, if there were no pressure to remain close to the input? Even though particular sounds are illicit from the point of view of the borrowing language, as much as possible is salvaged or, conversely, as little as possible is lost. For instance, in the common cross-linguistic adaptation of */v/ to /b/, only the continuant value changes; in the adaptation to /f/, which is also found cross-linguistically in loanword adaptation, only the voicing value is modified; whereas in the adaptation to /w/, a slightly less frequent but nonetheless common adaptation, it is the sonorant value which is targeted. The adaptation of /v/ to /w/ is systematic in Fula (French civil [sivil] ‘civilian’ > Fula [siwil]; Paradis and LaCharité 1997) and in several Asian languages, including Thai in word-initial and intervocalic positions (e.g. English vitamin [vajtHm>n]/[v>tHm>n] and travel agent [tPævHl e–Hnt] > Thai [wittamin] and [t(r)awHl ejen], respectively; word-finally it is adapted as /p/ for phonotactic reasons, e.g. English serve [sHPv] > Thai [sH(p]). Within the context of TC, this preference for minimal change has been attributed to the Minimality Principle in (3).

(3) Minimality Principle
    a. A repair strategy must apply at the lowest phonological level to which the violated constraint refers.
    b. Repair must involve as few strategies (steps) as possible.

The lowest phonological level referred to in (3a) is determined by the phonological level hierarchy (metrical level > syllabic level > skeletal level > root node > feature), an independently required organization of phonological information. Clearly, the Minimality Principle (whose effects are addressed in Paradis and LaCharité 1997) is intrinsically related to the notion of preservation. If preservation were not an issue, then why should repair not often, or even routinely, operate at a higher-than-needed level, guided by some notion of “better safe than sorry”?
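The single-feature arithmetic behind the /v/ adaptations cited above can be made concrete. The feature matrix below is a deliberately simplified illustration (four binary features, with values chosen for exposition rather than drawn from the chapter), but it shows how Preservation and Minimality jointly select the attested repairs:

```python
# Toy illustration of Preservation + Minimality: an ill-formed L2 phoneme is
# replaced by the licit L1 phoneme requiring the fewest feature changes.
# Feature values are coarse placeholders for exposition only.
FEATURES = ("sonorant", "continuant", "voice", "labial")

SEGMENTS = {
    "v": (0, 1, 1, 1),
    "b": (0, 0, 1, 1),
    "f": (0, 1, 0, 1),
    "w": (1, 1, 1, 1),
    "s": (0, 1, 0, 0),
}

def feature_distance(a: str, b: str) -> int:
    """Count the feature values on which two segments differ."""
    return sum(x != y for x, y in zip(SEGMENTS[a], SEGMENTS[b]))

for candidate in ("b", "f", "w", "s"):
    print("v ->", candidate, ":", feature_distance("v", candidate), "feature(s) changed")
# v -> b, f, w each change exactly one feature (the three attested adaptations),
# whereas v -> s changes two and is correspondingly not an attested repair.
```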

3.3 Preservation of L2 phonemic contrast patterns in L1: English loanwords in Chinese and Hindi

Not only are individual L2 phonemes preserved to the greatest extent possible within the limits allowed by the L1, but L2 phonemic contrast patterns are also maintained to the greatest extent permitted by the phonology of L1. For example, Chinese does not have a voicing distinction among stops; it does, however, distinguish stops on the basis of another laryngeal feature, aspiration. In the adaptation of English loanwords in Mandarin Chinese (MC), English voiceless stops (/p t k/) systematically yield aspirated voiceless stops (/ph t h kh/), and English voiced stops (/b d g/) are systematically replaced by unaspirated voiceless ones (/p t k/; see Paradis and Tremblay 2009 for an in-depth discussion of this issue, with figures and statistics). For example, English pizza [pitsH], hippies [h>piz], and tank [tæIk] yield MC [phi sa], [si phiÏ], and [than khH], respectively, whereas English Boeing [bo>I], radar [PedAP], and golf [gAlf] are adapted as MC [pHin], [lHj ta], and [kaw Hr fu]. This pattern of adaptation is not restricted to English loans; it also applies to French loans in MC (e.g. French Pierre Cardin [pjeökaödÂ] and Chirac [œiöak] > MC [phi Hr kha tan] and [si la khH]), despite the fact that voiceless stops in French, unlike those in English, are not aspirated before a stressed vowel.


The same type of pattern transfer is found in other Chinese dialects, such as Cantonese. This indicates that Chinese borrowers are aware of the systematic distinction between voiced and voiceless stops in English, and that adaptation seeks to preserve this L2 distinctive pattern, using the contrastive resources provided by the L1.

Comparable facts are found in Hindi. Hindi has a voicing distinction for stops; thus English voiced and voiceless stops yield Hindi voiced and voiceless stops, respectively (e.g. English bellboy [belbDj], baggage [bægH–], coffee [kAfi], and frock [frAk] > Hindi [beÎbDj], [bage–], [kDfi], and [frDk]). However, English voiced and voiceless alveolar stops are adapted as the retroflex stops /Í/ and /Õ/, which contrast with the dental stops /t/ and /d/ in Hindi (e.g. English agreement [HgPimHnt], beauty parlor [bjuti pAPlHP], badminton [bædm>ntHn], and baking powder [bek>I pawdHP] > Hindi [agrimHnÍ], [bwuÍi pAÈlHÈ], [baÕminÍHn], and [bek>I pawÕeÈ], respectively), while English interdentals /h/ and /Ï/ are adapted as plain dental stops, that is, /t/ and /d/, respectively (e.g. Thatcher [hæŒHP] and brother [bPZÏHP] yield Hindi [taŒHr] and [brDdHr]). Again, the L2 contrast pattern is preserved in L1, using the contrastive resources provided by the latter. Adaptation of the interdentals to fricatives would yield a greater loss of information, because Hindi has only /s/, not /z/ (except in borrowings, especially from Arabic); the voicing contrast of the English interdentals would then be lost. On the other hand, if English alveolar stops were adapted as the phonetically more expected dentals /t/ and /d/ in Hindi, there would be no slot left for the adaptation of the interdentals, which would then have to merge with the English alveolar stops in Hindi.
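The Mandarin and Hindi patterns can be restated as mappings from L2 contrast categories to L1 contrast categories that never collapse two contrasting L2 series into one. The following schematic restatement uses informal category labels; the labels and the final one-to-one check are illustrative, not the chapter’s formalism:

```python
# Schematic restatement of the contrast-pattern mappings described above.
# Category labels are informal placeholders.
english_to_mandarin = {
    "voiceless stops /p t k/": "aspirated stops",    # pizza-type adaptations
    "voiced stops /b d g/":    "unaspirated stops",  # Boeing-type adaptations
}

english_to_hindi = {
    "alveolar stops":        "retroflex stops",   # agreement, badminton
    "interdental fricatives": "dental stops",     # Thatcher, brother
}

for mapping in (english_to_mandarin, english_to_hindi):
    # One-to-one: no two contrasting English series merge in the borrowing
    # language, so the L2 contrast pattern survives intact.
    assert len(set(mapping.values())) == len(mapping)
```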

3.4 Preservation of L2 syllabic contrasts in L1: French loanwords in Russian

The adaptation of French loanwords in Russian suggests that unpredictable syllabic structure might also be preserved in loanword adaptation. French diphthongs have to be marked underlyingly, as they are unpredictable. Pairs such as oiseau [wazo] ‘bird’ vs. watt [wat] ‘watt’ show this. In oiseau, wa is a diphthong (e.g. l’oiseau [lwazo] ‘the bird’), making the word vowel-initial, whereas in watt it is an onset–nucleus sequence, as it is in English, making watt consonant-initial (e.g. le watt [lH wat]; see Kaye and Lowenstamm 1984 on diphthongs in French). The presence or absence of an onset is shown by, among other things, the choice of singular and plural definite articles. Before vowel-initial words the singular definite article is [l], as with l’arbre [laöbö] ‘the tree’, and the plural definite article triggers liaison (les oiseaux [le zwazo] ‘the birds’, as with les arbres [le zaöbö] ‘the trees’). Preceding a consonant, the definite articles are le [lH] and les [le], respectively (le watt [lH wat] ‘the watt’, not *[lwat], and les watts [le wat] ‘the watts’, not *[le zwat], as with le bateau [lH bato] ‘the boat’ and les bateaux [le bato] ‘the boats’).

In French loanwords in Russian, /wa/ is adapted as a bisyllabic sequence of /u+a/ when it is part of a diphthong (e.g. French voile [vwal] ‘veil’, mémoire [memwaö] ‘memory’, and couloir [kulwaö] ‘corridor’ > Russian [vuAl], [mjemuarq], and [kuluarq]), whereas when /w/ constitutes an onset, it is systematically adapted as /v/ (French watt [wat] > Russian [vAt];23 English whisky [w>ski] and tramway [træmwej] > Russian [vj>skj>] and [tramvaj]).

23 This loan was introduced via French, even though it originates from English.


These examples might suggest that the difference in adaptation is due to the fact that /wa/ in the French loans is preceded by a consonant, whereas in the English loans it is not. However, English borrowings such as sweater [swetHP], swap [swAp], and swing [sw>I], which yield Russian [svj>ter], [svop], and [sv>jI],24 not *[su>j>ter] or *[suater], etc., invalidate this hypothesis. The fact that /wV/ is treated differently when it is a diphthong than when it is an onset–nucleus sequence is interesting, because it suggests that where syllabic affiliation is unpredictable – when it is contrastive and would have to be indicated underlyingly – it is preserved. This interesting question remains to be investigated more thoroughly.
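The Russian pattern thus reduces to a conditional rule keyed to the contrastive syllabic affiliation of /w/ rather than to its phonetics or to the preceding segment. A minimal sketch, with informal labels standing in for the underlying representations:

```python
# Sketch of the Russian adaptation rule for borrowed /w/: the repair is
# selected by syllabic affiliation, not by phonetic quality. Labels are
# informal placeholders for underlying structure.
def adapt_w(affiliation: str) -> str:
    if affiliation == "diphthong":
        # French oiseau-type /wa/: vocalic, adapted as bisyllabic /u/ + /a/
        return "u"
    if affiliation == "onset":
        # watt/sweater-type /w/: consonantal, adapted as /v/
        return "v"
    raise ValueError("unknown affiliation: " + affiliation)

print(adapt_w("diphthong"))  # 'u' -- voile, memoire, couloir
print(adapt_w("onset"))      # 'v' -- watt, whisky, sweater, swing
```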

3.5 Importation and Structure Preservation

Non-adaptations in loanwords – that is, the importation of foreign phonemes in words borrowed from another language (L2) – present a challenge to SP, since they consist in the introduction of new phonemes at the lexical level. Two cases that were previously mentioned are /ç/ in German and /Ú/ in English. One might object that the German case as presented here is oversimplified and does not present an uncontroversial picture of the facts (consider, for example, the contradiction between Hall’s 1989 position and the scenario advanced by Bybee 2008), or that, in English, the phonemic status of /Ú/ is not well established, given that its distribution is restricted to intervocalic position, except in loanwords (see Iverson and Salmons 2005: 210 on /Ú/ in English). However, the German case seems to be problematic for SP no matter which view one takes: either a sound distinction ([ç] vs. [x]) that does not exist at the underlying level is introduced at the lexical level (Hall’s view), or non-native /ç/ has become phonemic in German over time, under the influence of loanwords (Bybee’s view). As for the voiced palatal fricative in English, even if /Ú/ were validly considered a phonetic variant intervocalically in native English words, the fact that it is tolerated (unadapted) at the end, and now at the beginning, of borrowings indicates that it is a phoneme in English, though a marginal or peripheral one, in the terminology of Itô and Mester (1995).

The challenge goes beyond German and English: the literature on loanwords reports abundant cases of importation (see e.g. Ulrich 1997: 432 on the importation of an English coda palatal in Lama, and Mohanan and Mohanan 2003 on the importation of English /f/ in Malayalee English). In the case of particular phonemes, importation can even be the norm. In the Moroccan Arabic corpus of the CoPho Project loanword database, /p/ is widely imported (320/454 cases, 70.5 percent) (e.g. French pape [pap] ‘pope’ > Moroccan Arabic [pap] instead of expected [bab]). Another example is /œ/ in the CoPho corpus of English loanwords in Mexican Spanish, which is imported in 102/138 cases (74 percent) (e.g. English shorts [œoPts] and carwash [kAPwAœ] > Mexican Spanish [œDPts] and [ka7wDœ], not [ŒD7ts] and [ka7wDŒ], as expected). While some foreign sounds are only occasionally, or never, left unadapted, others are imported more often than they are adapted. In some language-contact situations, such as that of Spanish loanwords in Guarani, importations from Spanish are systematic, i.e. Spanish phonemes are never adapted (see Oñederra 2009 for a similar situation with Spanish loans in Basque).

24 [sv>Ik] also exists as a variant; it is perceived by some Russian speakers as more “English,” possibly because of hypercorrection.


We must remember, though, that in phonological situations involving language contact, including loanword adaptation, two languages are in play. Our hypothesis is that under certain sociolinguistic conditions – for example, when borrowers are highly bilingual and society is generally tolerant of importations (as when the L2 enjoys widespread prestige) – the preservation of the L2 system also becomes an issue, one that can be at odds with the preservation of the L1 system. In cases of adaptation, preservation of the L2 system is subordinated to the preservation of the L1 system; in cases of importation, the reverse occurs. This, together with the fact that loanword adaptations are generally minimal, supports the view that the structural integrity of L2 is rarely left out of consideration altogether, so it is not unlikely that this concern may sometimes come to predominate. If this interpretation is correct, then the identification and preservation of contrastive information is important in both L1 and L2, despite inevitable conflicts between the demands of the two linguistic codes.

4 Conclusion

As shown in §2, SP, as referred to in Lexical Phonology, was regularly challenged by the facts, even though most phonologists agree that it plays some role, i.e. non-phonemic sounds are generally not generated in the lexicon. Languages tend to preserve their phonemic integrity at this level. Nonetheless, the numerous exceptions to SP reported by many different authors, with evidence from numerous different languages, might give the impression that Structure Preservation is either misguided or just an artifact of other principles/processes, with no intrinsic validity. There remains little doubt these days that SP as conceived in Kiparsky (1982, 1985) is too restrictive, not to mention its being linked to a network of other assumptions and principles that have themselves been seriously challenged. In fact, even the notion of phonological rules and their application has been subjected to a major rethinking. SP limited the power of phonological rule application, but modern frameworks eschew rules in favor of constraints; where rules have not been abandoned altogether, they have certainly lost their driving force. In a derivational constraint-based theory, such as TC, rules are context-free and functionally motivated, being limited to repairing constraint violations. Thus, their power is intrinsically more circumscribed than was that of SPE-type rules, which were, in and of themselves, the motivation for phonological change (i.e. they were essentially descriptive devices with little or no explanatory power). In a filter-based theory, such as OT, filters, including any that favor the preservation of input structure or contrasts (i.e. faithfulness filters), are ranked on a language-specific basis. Given that feature of the theory, it is not obvious how OT would deal with a cross-linguistic tendency to preserve input structure/contrasts (see Paradis 1996 on this issue). If the faithfulness constraint Max-X (which prevents deletion of a phoneme or feature that is in the input; previously Parse) is shown to play a consistently high-ranked (though, as already mentioned, not necessarily undominated) role in OT analyses, then OT too might need to appeal to a mechanism that accounts for Structure Preservation.

Therefore, the real question is: does Structure Preservation have any kind of intrinsic validity for phonological theory? We maintain that it does. In §3, we used the adaptation of loanwords to underline the continued need for some notion of Structure Preservation. However, we see this principle as having a broader scope than that defined for SP in Lexical Phonology.


Not only do languages tend to preserve their own phonemic inventories at the lexical level as much as possible (in the spirit of SP in Lexical Phonology), but they also tend to maximally preserve the phonemic contrasts and contrast patterns of the languages from which they borrow words. Thus, the resistance to change is, above all, a question of contrast/category pattern preservation, which is expressed interlinguistically (i.e. between L2 and L1, as was illustrated in this chapter with the treatment of loanwords), as well as intralinguistically (as was illustrated with the [D/o] alternation case in French and with Velar Softening in English and French). As mentioned at the outset of the chapter, it is not just a question of inertia: speakers work hard to preserve L1 or L2 phonological patterns.25 If there were no (universal) pressure to preserve an input’s contrastive information, then why would deletion be so rare in loanword adaptation? Why would it not occur randomly, in some 50 percent of cases? Moreover, when deletion does occur, why is it so largely predictable on the basis of phonology? Among adaptations, why are the changes to ill-formed sounds and structures so consistently predictable in terms of minimality, and why is the range of adaptations found cross-linguistically so limited? This is because distinctive information is as resilient and resistant to change in L1 as it is in L2 in the mind of borrowers. When L2 wins, the result is an importation (a non-adaptation), i.e. the introduction of a new phoneme or structure into L1, as discussed in §3.5; extensive language contact is required for this to happen, though. When L1 wins, which is more generally the case in the first stages of borrowing, we obtain an adaptation, whose goal is to produce a form that meets the demands of the borrowing language’s phonology. This means that some L2 contrastive information will inevitably, though minimally, be sacrificed, because the preservation of L2 contrastive information is often at odds with the preservation of the contrasts of the L1 phonological system. However, in focusing on phoneme modification (i.e. adaptation), we risk undervaluing the fact that, to the greatest extent possible, an adaptation retains most properties of the source form.

In this chapter, the properties referred to have included distinctive phonemic information, as illustrated with loanwords from French and English in many different languages (Japanese, Khmer, Thai, Fula, Kinyarwanda, Kashmiri, etc.), phonemic contrast patterns, as illustrated with English loans in Chinese and Hindi, and syllabic contrast patterns, as illustrated with French loans in Russian. This list is not intended to be exhaustive, though; other types of contrastive information are expected to show similar resilience. We have tried to emphasize that contrast resilience extends to L2; it is not limited to L1. L1 adapters feel strongly concerned about preserving L2 contrastive information; in the case of importations, this is to the detriment of their own (L1) contrast system, which is forced to change. Ultimately, we suggest that what might salvage SP, after all, is to consider it in a much broader perspective, in order to deepen our understanding of its purpose and functioning. It will then be easier to circumscribe its effects in native and borrowed words and to formulate it more formally, even if this is in terms of a statistically significant tendency rather than an absolute generalization.

25 Structure Preservation obviously does not have the same impact in L2 acquisition as in loanword adaptation. Its influence is necessarily reduced in L2 acquisition, since L2 learners (especially beginners) are not as knowledgeable about the L2 code as are the borrowers (see Paradis and LaCharité 1997 on the borrowers’ bilingualism issue) and thus cannot be as protective of a code with which they are not sufficiently acquainted.


ACKNOWLEDGMENTS

We would like to thank K. P. Mohanan for his comments on a draft of this chapter and for several discussions about Structure Preservation. We are grateful to Joan Bybee for sending comments on this notion, along with a scanned version of a relevant document. We also benefited from discussions with Zhiming Bao and Haruo Kubozono on phoneme preservation/deletion in loanwords. Finally, we would like to thank the editors, in particular Marc van Oostendorp and Keren Rice, and the reviewers for their comments. We remain solely responsible for the views expressed here as well as for any remaining errors or omissions. Research for this chapter was made possible by SSHRCC grant #410-2008-1128 to the first author and by SSHRCC grant #410-2007-0566 to the second author.

REFERENCES

Bermúdez-Otero, Ricardo & April McMahon. 2006. English phonology and morphology. In Bas Aarts & April McMahon (eds.) The handbook of English linguistics, 382–410. Cambridge, MA & Oxford: Blackwell.
Borowsky, Toni. 1986. Topics in the lexical phonology of English. Ph.D. dissertation, University of Massachusetts, Amherst. Published 1990, New York: Garland.
Borowsky, Toni. 1989. Structure preservation and the syllable coda in English. Natural Language and Linguistic Theory 7. 145–166.
Bybee, Joan. 2008. Formal universals as emergent phenomena: The origins of structure preservation. In Good (2008), 108–121.
Calabrese, Andrea. 2005. Markedness and economy in a derivational model of phonology. Berlin & New York: Mouton de Gruyter.
Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell.
Good, Jeff (ed.) 2008. Linguistic universals and language change. Oxford: Oxford University Press.
Hall, T. A. 1989. Lexical Phonology and the distribution of German [ç] and [x]. Phonology 6. 1–17.
Hall, T. A. 1992. Syllable structure and syllable-related processes in German. Tübingen: Niemeyer.
Hall-Lew, Lauren A. 2002. English loanwords in Mandarin Chinese. B.A. thesis, University of Arizona.
Hargus, Sharon & Ellen M. Kaisse (eds.) 1993. Studies in Lexical Phonology. San Diego: Academic Press.
Harris, John. 1987. Non-structure-preserving rules in Lexical Phonology: Southeastern Bantu Harmony. Lingua 73. 255–292.
Harris, John. 1989. Toward a lexical analysis of sound change in progress. Journal of Linguistics 25. 35–56.
Harris, John. 1990. Derived phonological contrasts. In Susan Ramsaran (ed.) Studies in the pronunciation of English: A commemorative volume in honour of A. C. Gimson, 87–105. London: Routledge.
Hyman, Larry M. 1993. Structure preservation and postlexical tonology in Dagbani. In Hargus & Kaisse (1993), 235–254.
Inkelas, Sharon & Draga Zec (eds.) 1990. The phonology–syntax connection. Chicago: University of Chicago Press.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts, Amherst. Published 1988, New York: Garland.
Itô, Junko & Armin Mester. 1995. Japanese phonology. In Goldsmith (1995), 817–838.


Itô, Junko & Armin Mester. 2001. Structure Preservation and stratal opacity in German. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 261–295. Cambridge: Cambridge University Press.
Iverson, Gregory K. 1993. (Post)lexical rule application. In Hargus & Kaisse (1993), 255–275.
Iverson, Gregory K. & Joseph C. Salmons. 2005. Filling the gap: English tense vowel plus final /s#/. Journal of English Linguistics 33. 207–221.
Kaisse, Ellen M. 1990. Towards a typology of postlexical rules. In Inkelas & Zec (1990), 127–143.
Kaye, Jonathan & Jean Lowenstamm. 1984. De la syllabicité. In François Dell, Daniel Hirst & Jean-Roger Vergnaud (eds.) Forme sonore du langage: Structure des représentations en phonologie, 123–159. Paris: Hermann.
Kiparsky, Paul. 1982. Lexical morphology and phonology. In Linguistic Society of Korea (ed.) Linguistics in the morning calm, 3–91. Seoul: Hanshin.
Kiparsky, Paul. 1985. Some consequences of Lexical Phonology. Phonology Yearbook 2. 85–138.
Kiparsky, Paul. 2008. Universals constrain change; change results in typological generalizations. In Good (2008), 23–53.
Krämer, Martin. 2006. The emergence of the comparatively unmarked. Proceedings of the West Coast Conference on Formal Linguistics 25. 236–244.
LaCharité, Darlene & Carole Paradis. 2005. Category preservation and proximity versus phonetic approximation in loanword adaptation. Linguistic Inquiry 36. 223–258.
Macfarland, Talke & Janet Pierrehumbert. 1991. On ich-Laut, ach-Laut and Structure Preservation. Phonology 8. 171–180.
Millward, C. M. 1996. A biography of the English language. 2nd edn. Fort Worth: Harcourt Brace.
Mohanan, K. P. 1986. The theory of Lexical Phonology. Dordrecht: Reidel.
Mohanan, K. P. 1995. The organization of the grammar. In Goldsmith (1995), 24–69.
Mohanan, K. P. & Tara Mohanan. 1984. Lexical phonology of the consonant system in Malayalam. Linguistic Inquiry 15. 575–602.
Mohanan, Tara. 1989. Syllable structure in Malayalam. Linguistic Inquiry 20. 589–625.
Mohanan, Tara & K. P. Mohanan. 2003. Towards a theory of constraints in OT: Emergence of the not-so-unmarked in Malayalee English. Unpublished ms., National University of Singapore (ROA-601).
Moreno de Alba, José G. 1988. El español en América. Mexico City: Fondo de Cultura Económica.
Nikiema, Emmanuel & Parth Bhatt. 2003. Two types of R deletion in Haitian Creole. In Ingo Plag (ed.) The phonology and morphology of creole languages, 43–69. Tübingen: Niemeyer.
Oñederra, Miren Lourdes. 2009. Early bilingualism as a source of morphophonological rules for the adaptation of loanwords: Spanish loanwords in Basque. In Wetzels & Calabrese (2009), 193–210.
Paradis, Carole. 1988. On constraints and repair strategies. The Linguistic Review 6. 71–97.
Paradis, Carole. 1996. The inadequacy of faithfulness and filters in loanword adaptation. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, 509–534. Salford: ESRI.
Paradis, Carole & Renée Béland. 2002. Syllabic constraints and constraint conflicts in loanword adaptations, aphasic speech and children's errors. In Jacques Durand & Bernard Laks (eds.) Phonetics, phonology and cognition, 191–225. Oxford: Oxford University Press.
Paradis, Carole & Darlene LaCharité. 1997. Preservation and minimality in loanword adaptation. Journal of Linguistics 33. 379–430.
Paradis, Carole & Darlene LaCharité. 2001. Guttural deletion in loanwords. Phonology 18. 255–300.


Paradis, Carole & Darlene LaCharité. 2008. Apparent phonetic approximation: English loanwords in old Quebec French. Journal of Linguistics 44. 87–128.
Paradis, Carole & Darlene LaCharité. Forthcoming. Loanword adaptation: From lessons learned to findings. In John A. Goldsmith, Jason Riggle & Alan C. L. Yu (eds.) Handbook of phonological theory. 2nd edn. Malden, MA & Oxford: Wiley-Blackwell.
Paradis, Carole & Jean-François Prunet. 2000. Nasal vowels as two segments: Evidence from borrowings. Language 76. 324–357.
Paradis, Carole & Antoine Tremblay. 2009. Nondistinctive features in loanword adaptation: The unimportance of English aspiration in Mandarin Chinese phoneme categorization. In Wetzels & Calabrese (2009), 211–224.
Paradis, Carole, Caroline Lebel & Darlene LaCharité. 1994. Adaptation d'emprunts: Les conditions de la préservation segmentale. Toronto Working Papers in Linguistics 12. 108–134.
Phillips, Betty S. 2006. Word frequency and lexical diffusion. New York: Palgrave Macmillan.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Rice, Keren. 1990. Predicting rule domains in phrasal phonology. In Inkelas & Zec (1990), 289–312.
Rojas, Nelson. 1988. Fonología de las líquidas en el español cibaeño. In Robert Hammond & Melvyn Resnick (eds.) Studies in Caribbean Spanish dialectology, 103–111. Washington, DC: Georgetown University Press.
Sproat, Richard. 1985. On deriving the lexicon. Ph.D. dissertation, MIT.
Steriade, Donca. 1995. Underspecification and markedness. In Goldsmith (1995), 114–174.
Steriade, Donca. 2007. Contrast. In Paul de Lacy (ed.) The Cambridge handbook of phonology, 139–157. Cambridge: Cambridge University Press.
Ulrich, Charles H. 1997. Loanword adaptation in Lama: Testing the TCRS model. Canadian Journal of Linguistics 42. 415–463.
Wetzels, W. Leo & Andrea Calabrese (eds.) 2009. Studies in loan phonology. Amsterdam & Philadelphia: John Benjamins.
Wiese, Richard. 1996. The phonology of German. Oxford: Clarendon Press.
Zink, Gaston. 1986. Phonétique historique du français. Paris: Presses Universitaires de France.

77 Long-distance Assimilation of Consonants

Sharon Rose

1 Introduction

There are numerous patterns in languages in which consonants assimilate at a distance for some acoustic or articulatory property. When vowels and consonants intervening between the assimilating consonants show no observable effect of the assimilating property, such patterns are labeled “consonant harmony.” Other terms such as “consonant agreement” have been used (Rose and Walker 2004) in order to distinguish them from cases of harmony involving both vowels and consonants, such as emphasis harmony (Shahin 2002; chapter 25: pharyngeals) or nasal harmony (Walker 2000a; chapter 78: nasal harmony). Consonant harmony has played a central role in debates concerning harmony patterns in general (Rose and Walker, forthcoming) with respect to several issues: locality of interaction, transparency or blocking in long-distance assimilation, and directionality. In this chapter, the main typological patterns of consonant harmony are outlined, highlighting the challenges that the typology presents, including a discussion of harmony domains and directionality. Two main theoretical approaches to consonant harmony are then explored: analyses involving spreading an assimilating feature or extending a gesture across all segments within a string, and analyses advocating distinct correspondence relationships between consonants independently of intervening segments. The role of contrast in determining harmony interaction is examined within both of these frameworks. Finally, experimental approaches to consonant harmony are discussed, showing how they shed light on the analysis of consonant harmony.

2 Typology of long-distance assimilation of consonants

Long-distance assimilation of consonants or “consonant harmony” can be defined as in (1):

(1) Consonant harmony
    Assimilation for an articulatory or acoustic property between two or more non-adjacent consonants, where intervening segments are not noticeably affected by the assimilating property.



An example is given in (2) from Tahltan, an Athabaskan language (Shaw 1991). The 1sg subject prefix /s-/ (2a) is realized as [h] when a dental fricative or affricate follows (2b), or as [œ] when a lamino-post-alveolar fricative or affricate follows (2c). Intervening consonants and vowels, including other coronal consonants, are transparent to the harmony:

(2) Tahltan coronal harmony
    a. esk’a(      ‘I’m gutting fish’
       nestex      ‘I’m sleepy’
    b. xa?eht’ah   ‘I’m cutting the hair off’
       ehdu(h      ‘I whipped him’
    c. eœ–>ni      ‘I’m singing’
       jaœtx’eŒ    ‘I splashed it’
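Stated procedurally, the alternation in (2) amounts to regressive agreement with a sibilant trigger later in the word, with all intervening segments ignored. The following minimal sketch uses ASCII placeholders for the trigger classes (the symbols and class names are stand-ins, since the chapter’s phonetic characters are not reproduced here); it illustrates only the scanning logic, not Shaw’s (1991) analysis:

```python
# Schematic sketch of the Tahltan pattern in (2): the 1sg prefix /s-/ agrees
# with a following coronal fricative/affricate, skipping transparent vowels
# and non-sibilant consonants. Trigger symbols are ASCII placeholders.
DENTAL_TRIGGERS = {"T", "D"}             # stand-ins for the dental series (2b)
POSTALVEOLAR_TRIGGERS = {"S", "J", "C"}  # stand-ins for the lamino-post-alveolar series (2c)

def realize_1sg_prefix(stem: str) -> str:
    for segment in stem:                 # scan rightward through the stem
        if segment in DENTAL_TRIGGERS:
            return "dental series"        # the [h]-type realization in (2b)
        if segment in POSTALVEOLAR_TRIGGERS:
            return "post-alveolar series" # the (2c)-type realization
    return "s"                           # no trigger: plain [s], as in (2a)

print(realize_1sg_prefix("ka"))    # 's'
print(realize_1sg_prefix("eJni"))  # 'post-alveolar series'
```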

Consonant harmony can involve morpheme alternations, as in (2), but may also occur as a morpheme structure constraint (chapter 86: morpheme structure constraints), requiring consonants within a root to share featural properties. In Ngizim (Chadic) roots, non-implosive obstruents must have the same voicing property (3a), unless the linear order of the obstruents is voiced . . . voiceless (3b) (Schuh 1997):

(3) Ngizim laryngeal harmony
    a. kùtpr   ‘tail’
       tàsáu   ‘find’
       z‡dù    ‘six’       (Hausa /œídà/)
       gâazá   ‘chicken’   (Hausa /kàazáa/)
    b. bàkú    ‘roast’
       gùmŒí   ‘chin’

The asymmetrical nature of the restriction points to a harmonic process. Voiceless . . . voiced combinations are not sanctioned, and Hausa words with such sequences are realized as voiced . . . voiced in Ngizim. The Ngizim harmony is therefore a regressive harmony, in which voiceless consonants assimilate to voiced, but not vice versa. The definition of consonant harmony provided in (1) excludes other types of long-distance harmony that also involve assimilation spanning several segments, including both vowels and consonants, such as nasal harmony (Piggott 1988, 1992, 2003; Piggott and van der Hulst 1997; Walker 2000a, 2000b, 2003; chapter 78: nasal harmony) or post-velar or emphasis harmony (Younes 1993; Watson 1999; Zawaydeh 1999; Shahin 2002; chapter 25: pharyngeals). See also chapter 75: consonant–vowel place feature interactions on consonant–vowel interactions in general. In these harmony systems, assimilation affects both vowels and consonants, and certain types of segments can block harmony. Transparency of consonants is observed only under very restricted conditions. Conversely, transparency is routine in consonant harmony, whereas blocking is rare. There are several types of long-distance consonant assimilation identified in typological studies of consonant harmony, laid out in detail in Hansson (2001a) and summarized in Rose and Walker (2004). The main types are outlined in the following sections.

2.1 Laryngeal harmony

Laryngeal harmony requires consonants to agree in aspiration, glottalic airstream, or voicing. Laryngeal distinctions are characterized by the features [spread glottis], [constricted glottis], and [voice], respectively, although different feature specifications are possible. Laryngeal harmony is most frequently observed in morpheme structure constraints (MacEachern 1999). Laryngeal harmony is found in Chaha, a Gurage Semitic language of Ethiopia (Rose and Walker 2004), in which oral coronal and velar stops in roots match for both [constricted glottis] and [voice]:

(4) Chaha laryngeal harmony
    a. jq-t’Hk’qr    ‘he hides’
       jq-t’Hßk’     ‘it is tight’
       jq-t’Hk’qk’   ‘it is being crushed’   (cf. Endegegn (Gurage) q-dHkk’)
       jq-t’Hrk’     ‘it is dry’             (cf. Masqan (Gurage) jq-dHrk’)
    b. jq-kHtf       ‘he hashes (meat)’
       jq-kHft       ‘he opens’
       jq-tHks       ‘he sets on fire’
    c. jq-gHdqr      ‘he puts to sleep’
       jq-dHrg       ‘he hits, fights’
       jq-gHda       ‘he draws liquid’       (cf. Amharic jq-k’Hda-l)

Cognates in related languages show laryngeal mismatches, giving insight into the direction and implementation of the harmony. Harmony was regressive, and either ejectives or voiced stops could trigger harmony. Exceptions to laryngeal harmony involve non-adjacent combinations of an ejective and a voiced stop, e.g. [jq-gHmt’] ‘he chews off’. Voicing and aspiration harmony is found in (non-click) stops in disyllabic roots of Zulu (Bantu), as in (5a) (Khumalo 1987; Hansson 2001a). Zulu contrasts plain stops (which may be realized as ejective), voiced stops (described as “depressors,” as they can lower tone),1 and aspirated stops. Loanwords (5b) are adapted to conform to laryngeal harmony.

(5) Zulu laryngeal harmony
    a. ukú-peta     ‘to dig up’
       úku-phátha   ‘to hold’
       uku-guba     ‘to dig’
    b. í-khôtho     ‘court’
       um-bídi      ‘conductor’   (< English beat)

Ngizim voicing harmony was illustrated in (3). Kera (Chadic) appears to have voicing alternations in affixes conditioned by voiced stops or affricates in the stem (Ebert 1979; Rose and Walker 2004), e.g. [kH-sár-káI] ‘black (coll.)’ vs. [gH-–àr-gáI] ‘colorful (coll.)’. However, Pearce (2005) argues that voicing is conditioned by a neighboring low tone rather than the voiced stop in the stem, so this does not constitute a case of voicing harmony. Hansson (2004) argues that in Yabem, a Huon Gulf language of Papua New Guinea, voicing restrictions arose from tonal patterns, and only superficially resemble consonant harmony.

Laryngeal harmony is often restricted to apply between subclasses of obstruents. Harmony operates between pulmonic obstruents in Ngizim, whereas in Chaha and Zulu, it applies between stops with differing airstream mechanisms, but excludes fricatives. In Kalabari Ijo (Jenewari 1989; Hansson 2001a) and Bumo Izon (Efere 2001; Mackenzie 2005, 2009), Ijoid languages of Nigeria, plain voiced stops and implosives may not co-occur in roots. Other cases of laryngeal harmony require homorganicity or complete identity between consonants. In Bolivian Aymara, laryngeal harmony for aspiration and ejectivity occurs between homorganic stops, so they are identical, e.g. [k’ask’a] ‘acid to the taste’, whereas no harmony occurs between heterorganic stops, e.g. [t’aqa] ‘flock, herd’ (Hardman et al. 1974; Davidson 1977; de Lucca 1987; MacEachern 1996, 1999). Similar effects are found in Mayan languages, such as Chol (Gallagher and Coon 2009), Modern Yucatec (Straight 1976), and Tzutujil (Dayley 1985; Gallagher 2010).

In conclusion, laryngeal harmonies are attested in numerous languages, most typically those that exhibit a three-way contrast in laryngeal features. Laryngeal harmony is usually root-restricted, may be subject to homorganicity requirements, and appears to be regressive in those cases for which directionality can be identified.

2.2 Coronal harmony

Coronal harmonies involve articulations both for tongue tip/blade posture (apical vs. laminal) and tongue position (dental, alveolar, post-alveolar). Sibilant harmony is the most commonly attested type of consonant harmony and requires sibilant coronal fricatives and affricates to match for tongue tip/blade posture and location. It is widely attested in Native American languages, particularly in Athabaskan and Chumash languages, but it also occurs in Basque, Berber, Bantu, Cushitic, and Omotic languages. An example of sibilant harmony in Tahltan was illustrated in (2).2 In Sidaama, a Cushitic language of Ethiopia (Kawachi 2007), the causative suffix /-is/ (6a) is realized as [iœ] when palato-alveolar fricatives or affricates appear in the preceding stem (6b):

(6) Sidaama sibilant harmony
    a. dirr-is    ‘cause to descend’
       hank’-is   ‘cause to get angry’
       ra?-is     ‘cause to become cooked’
    b. miœ-iœ     ‘cause to despise’
       œalak-iœ   ‘cause to slip’
       Œ’uf-iœ    ‘cause to close’

2 Tahltan harmony may be only partially sibilant, since it is not clear that the fricatives [h] and [Ï] are sibilant. They are described as predorsal alveolar in Nater (1989).


Sibilant harmony operates across vowels and non-sibilant consonants, including other coronals. In (6b), the intervening segments do not block and do not participate in the harmony. In some languages, such as Ineseño Chumash (Applegate 1972; Poser 1982; McCarthy 2007), both alveolar and post-alveolar sibilants may trigger harmony. The rightmost sibilant determines the tongue tip-blade realization of all sibilants in the stem. In (7a) and (7c), the 3rd singular subject prefix is /s-/, but it is realized as [œ] if there is a palatal sibilant to its right, (7b) and (7d). In contrast, the dual marker /iœ-/ (7e) is realized as [is] if followed by an alveolar sibilant (7f):

(7) Ineseño Chumash sibilant harmony
    a. /s-ixut/            [sixut]           ‘it burns’
    b. /s-ilakœ/           [œilakœ]          ‘it is soft’
    c. /ha-s-xintila/      [hasxintila]      ‘his gentile’
    d. /ha-s-xintila-waœ/  [haœxintilawaœ]   ‘his former gentile’
    e. /p-iœ-al-nan’/      [piœanan’]        ‘don’t you two go’
    f. /s-iœ-tiœi-jep-us/  [sistisijepus]    ‘they two show him’
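The rightmost-trigger logic of the Chumash pattern can be rendered procedurally. The sketch below is an illustration, not an implementation of any published analysis: ‘S’ stands in for the post-alveolar sibilant (printed as œ in the data above), and all other segments are treated as transparent.

    # Sketch: Ineseño Chumash-style sibilant harmony, where the rightmost
    # sibilant determines the anteriority of every sibilant to its left.
    # 's' = alveolar sibilant, 'S' = post-alveolar sibilant (œ above).

    SIBILANTS = {"s", "S"}

    def sibilant_harmony(word):
        segs = list(word)
        trigger = None
        # scan right-to-left: the first sibilant found is the trigger,
        # and every sibilant to its left assimilates to it
        for i in range(len(segs) - 1, -1, -1):
            if segs[i] in SIBILANTS:
                if trigger is None:
                    trigger = segs[i]
                else:
                    segs[i] = trigger
        return "".join(segs)

    print(sibilant_harmony("s-ilakS"))           # S-ilakS          (cf. 7b)
    print(sibilant_harmony("ha-s-xintila"))      # unchanged        (cf. 7c)
    print(sibilant_harmony("s-iS-tiSi-jep-us"))  # s-is-tisi-jep-us (cf. 7f)

Note that intervening vowels and non-sibilant consonants are simply skipped by the scan, mirroring their transparency in the data.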

Dental harmony is found in Nilotic languages such as DhoLuo (Stafford 1967; Yip 1989; Tucker 1994), Anywa (Reh 1996), Mayak (Andersen 1999), and Päri (Andersen 1988). It operates between dental and alveolar stops, including nasals if a contrast exists in the language, and may be triggered by either. In Päri (Andersen 1988; Hansson 2001a), dental harmony is respected in roots (8a). Root-final stops that are the product of final mutation combined with affixation match the dental or alveolar property of the initial stop (8b):

(8) Päri dental harmony
    a. |D}      ‘sucking’
       àtwá(’   ‘adult male elephant’
    b. dè(l     ‘skin’     dè(nd-á    ‘my skin’
       }ùol     ‘snake’    }ùo|{-á    ‘my snake’

In Mayak (Andersen 1999), harmony is triggered by an alveolar and optionally affects suffixes of the shape /-V}/, as in (9). Intervening stops that are non-contrastive for the dental–alveolar distinction are transparent to the harmony.

(9) Mayak dental harmony
    a. le:-i}                  ‘tooth’
       wZÏ-i}                  ‘buttock’
       ?in-Z}                  ‘intestine’
    b. tie-Z} ~ tie-Zt         ‘doctor’
       ket->n-e} ~ ket->n-et   ‘star’

Retroflex harmony is reported for several languages. In Gimira (Benchnon), an Omotic language of Ethiopia (Breeze 1990), retroflex harmony restricts combinations of sibilants, requiring them to agree for retroflexion. In Malto (Dravidian) (Mahapatra 1979; Hansson 2001a), retroflex harmony operates between oral stops.


In Australian languages such as Arrernte (Arandic) (Henderson 1998; Tabain and Rickard 2007), apical alveolar and retroflex stops match for retroflexion in a root. Retroflex harmony is also reported in Kalasha (Indo-Aryan) (Trail and Cooper 1999; Arsenault and Kochetov, forthcoming), where it operates between stops, between fricatives, or between affricates in a root, but combinations of different manners of articulation may disagree in retroflexion. Kalasha contrasts dental, retroflex, and palatal coronals; dentals and palatals also tend not to co-occur, so this is a general coronal harmony:

(10) Kalasha retroflex/palatal harmony
     a. stops       dental     thedi         ‘now’
                    retroflex  ÍoÍ           ‘apron’
     b. fricatives  dental     sastirek      ‘to roof a house’
                    palatal    toui          ‘spring festival’
                    retroflex  ÏuÏik         ‘to dry’
     c. affricates  dental     tsgtsaw       ‘squirrel’
                    palatal    tuhatui hik   ‘to take care of’
                    retroflex  ÕsaÍÏ         ‘spirit beings’

Other cases of coronal harmony involving alveolar stops and alveo-palatal affricates are reported in Hansson (2001a), and include Aymara (de Lucca 1987), Kera (Ebert 1979), and Pengo (Dravidian) (Burrow and Bhattacharya 1970). In each case, harmony rules out /t . . . Œ/ sequences, but allows the reverse, /Œ . . . t/.

In terms of directionality, Hansson (2001a) points out two main directionality effects with respect to sibilant harmony. First, sibilant harmony shows a strong tendency to be regressive. In some cases, harmony is triggered by the rightmost sibilant, regardless of its location within a root or affix, as in Chumash (7) or Navajo (11) (McDonough 1991). The 1sg subject prefix /-iœ/ is variably realized as [is] or [iœ], depending on whether /s/ follows.3

(11) Navajo sibilant harmony
     /j-iœ-mas/       [jismas]      ‘I’m rolling along’
     /dz-iœ-x-ta(x/   [–iœta(x]     ‘I kick him (below the belt)’
     /dz-iœ-l-ts’in/  [dzists’in]   ‘I hit him (below the belt)’

Hansson (2001a, 2001b) relates the regressive bias of sibilant harmony to speech production. In speech production studies, anticipatory errors and assimilations are more common than perseverative ones (Dell et al. 1997). This is modeled in a serial-order theory of speech production, whereby a consonant that is being planned is activated in advance and its production anticipated. There are cases of progressive sibilant harmony, as in the Sidaama case in (6), but in such cases a suffix alternates in agreement with a root. The same pattern holds for dental harmony; no strong evidence for regressive bias in dental harmony or retroflex harmony has been detected.

3 Navajo actually has examples of progressive sibilant harmony in the prefix string. See McDonough (1990, 1991) and Hansson (2001a: 193–198) for discussion.


The second directionality effect concerns the nature of the trigger. While some cases of sibilant harmony are like Navajo in that either alveolar or post-alveolar consonants can trigger harmony, other languages only allow /s/ to become [œ] and not the reverse. Hansson (2001a: 472) cites sixteen cases of the /s/ → [œ] pattern, but only one case of /œ/ → [s]. Hansson connects this effect to speech planning and the palatal bias effect reported in speech error research (Shattuck-Hufnagel and Klatt 1979). The palatal bias effect refers to the higher frequency with which alveolar consonants are replaced by palatals in speech errors.

2.3 Nasal harmony

Nasal consonant harmony is attested primarily in Bantu languages. Nasal stops harmonize with voiced stops and oral approximants. If voiceless stops harmonize, they do so only if voiced stops harmonize. In Kikongo (Dereau 1955; Ao 1991; Odden 1994), a nasal stop in a verb root causes [d] in the active perfect suffix (12a) or [l] in the applicative suffix (12b) to be realized as [n]:

(12) Kikongo nasal harmony
     a. n-suk-idi      ‘I washed’              tu-nik-ini        ‘we ground’
        m-bud-idi      ‘I hit’                 tu-sim-ini        ‘we prohibited’
     b. ku-sakid-il-a  ‘to congratulate for’   ku-nat-in-a       ‘to carry for’
        ku-toot-il-a   ‘to harvest for’        ku-dumukis-in-a   ‘to cause to jump for’

Intervening vowels and other consonants are transparent to the harmony. Yaka has a similar nasal harmony pattern (Hyman 1995). In other languages, the nasal harmony is restricted to apply only across an intervening vowel, as in Lamba (Odden 1994), Bemba (Hyman 1995), Ndonga (Viljoen 1973), and Herero (Booysen 1982), and may be restricted to roots only. The main distinctions between nasal consonant harmony and general nasal harmony are (i) vowels are not nasalized, (ii) the trigger is a nasal consonant that targets a similar consonant (voiced stop or approximant), and (iii) other consonants and vowels do not block harmony. See chapter 78: nasal harmony for more extensive discussion of the distinction between the two kinds of nasal harmony.

Nasal harmony operates progressively from root to suffix. However, it cannot be reduced in all cases to a stem-control effect. In Kikongo, roots such as /dumuk/ are possible, with a voiced stop preceding a nasal. The reverse order of nasal followed by voiced stop is not attested (Ao 1991; Piggott 1996), indicating that nasal harmony applied progressively within the root. The same pattern is attested in Yaka (Rose and Walker 2004).
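The root-to-suffix alternation in (12) can be stated as a simple mapping, sketched below under obvious simplifications: a toy transcription in which only /d/ and /l/ alternate, any nasal in the root triggers, and all other segments are transparent. The helper name attach is an invention of this illustration.

    # Sketch: Kikongo-style nasal consonant harmony as a root-conditioned
    # suffix alternation (cf. 12). Toy transcription.

    NASALS = set("mn")

    def attach(root, suffix):
        """Realize suffixal /d, l/ as [n] when the root contains a nasal."""
        if any(seg in NASALS for seg in root):
            suffix = suffix.replace("d", "n").replace("l", "n")
        return root + suffix

    print(attach("suk", "-idi"))     # suk-idi    (cf. n-suk-idi 'I washed')
    print(attach("nik", "-idi"))     # nik-ini    (cf. tu-nik-ini 'we ground')
    print(attach("sakid", "-il-a"))  # sakid-il-a 'to congratulate for'
    print(attach("nat", "-il-a"))    # nat-in-a   'to carry for'

Because only the suffix is rewritten, a root like /dumuk/ keeps its voiced stop before the nasal, matching the progressive character of the pattern.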

2.4 Liquid harmony

Liquid harmony involves alternations between /r/ and /l/ (chapter 31: lateral consonants). In Bukusu (Bantu), liquid harmony is attested in roots (Hansson 2001a). In addition, the benefactive suffix /-il-/ is realized as [-ir-] following a stem with [r] (Odden 1994). Vowel height harmony applies to the suffix.

(13) Bukusu liquid harmony
     teex-el-a   ‘cook for’         reeb-er-a   ‘ask for’
     lim-il-a    ‘cultivate for’    kar-ir-a    ‘twist’
     i(l-il-a    ‘send thing’       resj-er-a   ‘retrieve for’

In Sundanese (Malayo-Polynesian), /l/ triggers harmony of /r/ to [l] (Cohn 1992), as illustrated with the plural infix /-ar-/ in the final form in (14):

(14) Sundanese liquid harmony
     kusut   ‘messy’      k-ar-usut   ‘messy (pl)’
     rahqt   ‘wounded’    r-ar-ahqt   ‘wounded (pl)’
     lHga    ‘wide’       l-al-Hga    ‘wide (pl)’

Liquid harmony is also attested in Pohnpeian (Rehg and Sohl 1981). There are cases in which liquids alternate with glides in Bantu languages, such as in Basaa (Lemb and de Gastines 1973) and Pare (Odden 1994), and in which /l/ alternates with a lateral tap in ChiMwiini (Kisseberth and Abasheikh 1975). All cases of liquid harmony are either root-restricted or involve suffix alternations, so no directionality bias can be detected.

2.5 Dorsal harmony

Dorsal harmony is found in Malto, Gitksan (Tsimshianic), Aymara, and the Totonacan languages, and involves alternations between velar and uvular consonants. In Tlachichilco Tepehua (Watters 1988; Hansson 2001a), a uvular /q/ causes a preceding velar to become uvular, which in turn conditions lowering of the preceding high vowel (15b):

(15) Tlachichilco Tepehua dorsal harmony
     a. ?uks-k’atsa(     [?uksk’atsa(]    ‘feel, experience sensation’
     b. ?uks-laqts’-in   [?oqslaqts’in]   ‘look at Y across surface’

Hansson (2001a) notes that intervening vowels are not affected by the harmony, even though uvulars lower adjacent vowels. In the word /lak-pu(tiq’i-ni-j/ → [laqpu(te?enij] ‘X recounted it to them’ (the /q’/ is realized as [?]), the vowel /u(/ fails to lower to [o(], despite appearing between two uvulars; compare this with (15b). In Gitksan (Brown 2008), the harmony effect is a static co-occurrence restriction that can operate at a distance. Dorsal harmony causes velars to become uvular. While most dorsal harmony cases are regressive and target roots, this could be either a directionality effect or due to the trigger consonant, the uvular, being in an affix.

2.6 Stricture and secondary articulation harmony

In addition to the main types reported in §2.1–§2.5, Hansson (2001a) also lists stricture and secondary articulation harmonies. Stricture harmony involves alternations between stops and fricatives, as in Yabem, e.g. /se-dàgù?/ → [tédàgù?] ‘they follow (realis)’. Secondary articulation refers to labialization, palatalization, velarization, or pharyngealization. There are a few reported cases discussed in Hansson (2007a): pharyngealization in Tsilhqot’in (also known as Chilcotin, Athapaskan) (Cook 1983, 1993), which interacts with sibilant harmony, velarization in Pohnpeian (Micronesian) (Rehg and Sohl 1981; Mester 1988), and palatalization in Karaim (Turkic) (Kowalski 1929; Hamp 1976; Nevins and Vaux 2004), as shown below:

(16) Karaim palatal harmony
     djort hj-unjŒ ju   ‘fourth’
     alt hQ-nŒQ         ‘sixth’

In sum, consonant harmony targets a range of segments: dorsals, liquids, and coronals, as well as segments differentiated by nasal and laryngeal features. Hansson (2001a) and Rose and Walker (2004) point out that a consistent characteristic of consonant harmony is the high degree of similarity between the interacting segments. Harmony is restricted to minor place or tongue features distinguishing among coronals and dorsals, or to features that are also prone to local assimilation. Notably absent, however, is harmony for major place features such as [labial], [coronal], or [dorsal], as well as classificatory features that tend not to assimilate locally, such as [sonorant], [continuant], or [consonantal]. Rose and Walker (2004) relate the absence of place harmony to the inability of major place to change even in local assimilations, citing articulatory speech error research showing that major place gesture errors tend to be additive rather than replacive (Goldstein et al. 2007; Pouplier 2007). Gafos (1999), on the other hand, argues that major place features cannot spread across vowels (contra Shaw 1991) without serious interruption of the vowel gestures; only minor features such as tongue tip position can do this. See §3.1.1 for further discussion.

The lack of major place consonant harmony is intriguing in light of two related phenomena: child language and dissimilation. Consonant harmony for major place is attested in child language (Vihman 1978), and according to chapter 72: consonant harmony in child language, it is the most common type of consonant harmony in child language. Recent analyses and proposals are discussed in Goad (1997), Berg and Schade (2000), Rose (2000), Pater and Werle (2003), and Rose and dos Santos (2006). The mechanisms that underlie child phonology and adult phonology may not be the same; some child productions may be due to developmental factors (Rose and dos Santos 2006; Inkelas and Rose 2008). See chapter 72: consonant harmony in child language for an overview.

Some authors have drawn a connection between long-distance consonant assimilation and long-distance consonant dissimilation (MacEachern 1999; Walker 2000c; Gallagher 2008), arguing that they are alternate responses to the same pressure. This does appear to be the case for laryngeal and liquid harmony. Yet there are key differences. A common dissimilation process occurs between labial consonants (Alderete and Frisch 2007), whereas labial consonant harmony is unattested. Nasal dissimilations involve prenasalized stops and nasals (Odden 1994), but these segments do not interact in nasal consonant harmony; prenasalized stops are transparent to nasal harmony and do not act as triggers (chapter 78: nasal harmony). A general theory of the relationship between long-distance consonant assimilation and consonant dissimilation currently appears elusive.

3 Analyses of long-distance assimilation

Two main theoretical analyses have been formulated for long-distance consonant harmony: spreading and correspondence. Spreading involves the extension of a gesture or feature across a string of segments, building upon early autosegmental analyses of vowel harmony and nasal harmony. Correspondence is proposed in Walker (2000b, 2000c), Hansson (2001a), and Rose and Walker (2004) and requires similar consonants to “correspond” and match each other for particular features, regardless of intervening consonants and vowels. The following sections outline these two approaches and identify the strengths of each proposal, as well as the challenges they encounter.

3.1 Spreading

Autosegmental phonology represented a major shift in the analysis of harmony systems. Although the distance effects of harmony systems had been explored within a Firthian prosodic analysis framework (Palmer 1970), Clements’s (1980) groundbreaking analysis of vowel harmony and extension to nasal harmony launched the study of harmony systems using autosegmental spreading. Early autosegmental analyses of consonant harmony include Halle and Vergnaud (1981) on Navajo and Poser (1982) on Chumash. In an autosegmental representation, the harmonizing feature (P-segment) is projected onto its own tier and linked to the segment (P-bearing segment) by means of an association line. Spreading involves extending the feature to other segments in the word via new association lines, as shown in (17) for the Sidaama word [œalakiœ] ‘cause to slip’:

(17) Long-distance spreading

     œalak-is            œalak-iœ
      |            →      |     \
     [−ant]              [−ant]

(17) illustrates a feature-filling rule, in which the feature [−anterior], characterizing the post-alveolar /œ/, spreads to the /s/, but the /s/ itself is unspecified for the feature [+anterior]. Within models of underspecification (Archangeli 1988; Paradis and Prunet 1989), the default feature [+anterior] is assumed to be filled in by a default rule at the end of the derivation if no specification is provided by a specific rule. Consonant harmony may also be feature-changing, where the target and trigger have opposite values for the spreading feature. Sibilant harmony in Ineseño Chumash has been analyzed as feature-changing (Poser 1982, 1993; Lieber 1987; Steriade 1987b; Shaw 1991), because harmony can be triggered by either [+anterior] /s/ or [−anterior] /œ/, altering the specification of the other sibilant. The target consonant acquires the specification of the trigger through spreading, and loses its own feature specification by delinking its original association line. This is illustrated in (18) for [œ-ilakœ] ‘it is soft’:

(18) Feature-changing harmony

     s-ilakœ              œ-ilakœ
     |     |       →      =     |
   [+ant] [−ant]        [+ant] [−ant]

     (the = marks delinking of the original [+ant] association line; the
     spreading [−ant] feature links to the initial sibilant)

In (18), the feature [anterior] is shown linked to the rest of the segment. In more articulated models of feature geometry (Clements 1985, 1991; Sagey 1986; McCarthy 1988; Clements and Hume 1995), features link to organizing nodes, which in turn link to the root node, connected directly to prosodic structure.
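The feature-filling/feature-changing distinction can be made concrete with a toy model in which each segment carries an [anterior] value that may be left unspecified. The representation and helper names below are assumptions of this sketch, not the autosegmental formalism itself: tiers and association lines are not modeled, and ‘S’ again stands in for œ.

    # Sketch: feature-filling vs. feature-changing spreading of [anterior].
    # Segments are (symbol, value) pairs; value is '+', '-', or None.

    SIBILANTS = {"s", "S"}

    def spread_anterior(segments, rightward=True, feature_changing=False):
        """Spread the first sibilant's [anterior] value to later sibilants."""
        segs = list(segments) if rightward else list(reversed(segments))
        trigger = None
        for i, (sym, ant) in enumerate(segs):
            if sym not in SIBILANTS:
                continue                   # non-sibilants are transparent
            if trigger is None:
                trigger = ant              # the first sibilant is the trigger
            elif ant is None or feature_changing:
                segs[i] = (sym, trigger)   # fill in, or delink and relink
        return segs if rightward else list(reversed(segs))

    def realize(segments):
        """Spell out sibilants from their [anterior] value: + -> s, - -> S."""
        return "".join({"+": "s", "-": "S"}.get(ant, sym)
                       for sym, ant in segments)

    # (17) Sidaama: progressive and feature-filling (suffix /s/ unspecified)
    sidaama = [("S", "-"), ("a", None), ("l", None), ("a", None),
               ("k", None), ("i", None), ("s", None)]
    print(realize(spread_anterior(sidaama)))              # SalakiS

    # (18) Chumash: regressive and feature-changing (prefix /s/ is [+ant])
    chumash = [("s", "+"), ("i", None), ("l", None), ("a", None),
               ("k", None), ("S", "-")]
    print(realize(spread_anterior(chumash, rightward=False,
                                  feature_changing=True)))  # SilakS

The single flag feature_changing captures the analytic difference: without it, a target already specified as [+anterior] (as in Chumash) would simply be skipped.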

3.1.1 Transparency and spreading

In both of the representations above, spreading takes place between two sibilants, despite the fact that other consonants and vowels intervene between the trigger and target. Intervening segments have the potential to block harmony, a phenomenon that is routinely observed in both vowel harmony and local vowel–consonant harmony. However, blocking is not generally observed with consonant harmony (Hansson 2001a; Rose and Walker 2004).4 Blocking segments, or “opaque” segments, can be characterized as those that are specified with the opposite value to the spreading feature. For example, in van der Hulst and Smith (1982a) [−nasal] segments block spreading of [+nasal] in nasal harmony. The [+nasal] feature cannot spread over the association line linking [−nasal] to a non-nasal segment, as this would violate the No-Crossing Constraint (Goldsmith 1979), a principle of autosegmental phonology (chapter 14: autosegments), which prevents association lines from crossing. This is illustrated schematically in (19), where the symbols a, b, and c represent segments and [+F] the spreading feature, and where the [−F] value of a feature blocks the [+F] value from spreading:

(19) No-Crossing Constraint

     *   a     b     c
         |     |
       [+F]  [−F]

     (spreading [+F] from a to c is barred, since the new association line
     would cross the line linking [−F] to b)

In more recent accounts, the blocking segment is assumed to be incompatible with the spreading feature due to an articulatorily grounded constraint (Archangeli and Pulleyblank 1994), preventing co-occurrence of the spreading feature and a specific feature or features of the blocking segment. For example, in nasal harmony, obstruents block nasal harmony in many languages (Walker 2000a), so [+nasal] may be restricted from associating to a [−sonorant] segment. Under this scenario, the No-Crossing Constraint does not apply; instead, locality considerations prevent the spreading feature from skipping over the blocking segment and spreading to another segment. How locality should be defined has been a matter of debate.

4 One identified case is a voicing alternation in Imdlawn Tashlhiyt Berber, which is parasitic on sibilant harmony: voiceless obstruents block voicing from transferring from stem to prefix.


If locality is defined at the level of the root node or segment, such that adjacent segments are local, no segment can be skipped in spreading, a theory referred to as strict locality. This is illustrated schematically in (20):

(20) Strict locality

     *   a     b     c
         |
       [+F]

     (spreading [+F] from a directly to c, skipping the intervening
     segment b, is prohibited)

Under this view, consonant harmony would be similar to local assimilation (chapter 81: local assimilation), but extended over longer strings of segments. Locality may also be defined at the level of vowel nuclei of adjacent syllables (Archangeli and Pulleyblank 1987, 1994), in which case intervening consonants would be considered transparent. This is referred to as maximal scansion in Archangeli and Pulleyblank (1987). A third possibility is to define locality with respect to autosegmental feature tiers, or feature node tiers in a feature-geometry model, referred to as minimal scansion (Archangeli and Pulleyblank 1987; Steriade 1987a). Segments that lack specification on such tiers would not be computed for locality. In the following schematized representation, the features [F] of the segments a and c are adjacent on the F tier, despite not belonging to adjacent segments:

(21) Tier-based locality

         a     b     c
         |     |     |
        [F]   [G]   [F]

     (the two [F] autosegments are adjacent on the F tier)

Tier-based locality lies at the heart of autosegmental spreading analyses of transparency in harmony. Steriade (1987b) argued that intervening transparent consonants and vowels in Ineseño Chumash sibilant harmony lack specification for the feature [anterior] at the point when the harmonic spreading rule applies. Dorsal and labial consonants are excluded from participation in the harmony, as they have place feature specifications on other tiers – Dorsal and Labial. The feature [anterior] is assumed to be relevant only for coronal consonants. The same holds true for vowels in a system in which vowels are considered dorsal (Sagey 1986; Steriade 1987a).5 Yet the coronal consonants /t l n/ are also transparent in Chumash. Steriade adopts a form of contrastive specification, wherein only segments that contrast for a given feature need to be specified for that feature. The feature [anterior] is needed to distinguish sibilants in Chumash, but /t l n/ do not have [−anterior] counterparts, and so are predictably [+anterior]. Predictable features are left unspecified and filled in as default later in the derivation. Harmonic spreading of the [anterior] feature operates unhindered between sibilants before a redundancy rule ([+coronal, −continuant] → [+anterior]) fills in predictable values on the other coronals. Locality is defined on the tier [anterior], as the spreading rule targets only consonants specified for [anterior], not those specified as Coronal, the organizing node on which [anterior] is dependent (Shaw 1991). Chumash harmony is a feature-changing rule, but if spreading rules are feature-filling, targets would need to be defined with respect to other features or the node to which the feature attaches.

5 This analysis might be problematic for feature systems in which coronals and front vowels share specification (Clements and Hume 1995), depending on how locality is defined (see Odden 1994 for discussion of vowel–consonant locality issues in this model).

Harmony does not always operate at the level of individual features, however. Shaw (1991) argues that a more complex harmony system in Tahltan involves spreading the Coronal node (chapter 12: coronals). Tahltan has a rich inventory of coronal consonants, contrasting dental stops, lateral continuants, interdental/pre-dorsal sibilants, alveolar sibilants, and palatal sibilants. The latter three sibilant classes participate in coronal harmony, but the stops and laterals do not (examples in (2)). Shaw argues that in order to distinguish among a series of three sibilants, at least two features dependent on Coronal are needed. Under the assumption that a single unified spreading rule should capture the harmony, Shaw proposes that harmony involves spreading of the Coronal node. The other two transparent classes must be underspecified for Coronal at the time the rule applies. Similar harmonic effects (e.g. /s/ → [œ]), therefore, involve different spreading rules, depending on the particular inventory of the language.

Gafos (1998, 1999) rejects tier-based locality, and presents a model of “Articulatory Locality,” in which locality is defined in terms of articulatory gestures (Browman and Goldstein 1986, 1989, 1990). Vowel gestures are contiguous across a consonant, whereas consonant gestures are not contiguous across a vowel. Vowel harmony may appear to skip over consonants, but consonants are in fact audibly unaffected by the spreading gesture. This strict locality view is also adopted by Ní Chiosáin and Padgett (1997, 2001), Walker and Pullum (1999), and Walker (2000a). Under strict locality, only coronal harmony, which involves assimilation for a tongue tip-blade feature, is predicted to be possible, due to non-interference with vowels. The tongue tip-blade is independent of the tongue dorsum used in the production of vowels, and its exact posture has no significant acoustic effect on vowel quality. By the same reasoning, dorsal and labial consonants would be predicted to intervene as “transparent,” since changes in the tongue tip-blade would not affect their production.

Moreover, if the feature that distinguishes /s/ and /œ/ is apicality (tongue tip) vs. laminality (tongue blade), languages with no apical–laminal coronal stop contrast may allow stops to fluctuate between apical and laminal in different harmonic contexts, a suggestion made by Peter Ladefoged, as reported in Steriade (1995). Gafos (1999) formalizes this idea and proposes two new tongue tip-blade parameters: Tongue Tip Constriction Orientation (TTCO) and Tongue Tip Constriction Area (TTCA), gestures that do not skip over other segments, but are maintained through their production with little perceptible effect. Coronal segments /t n l/ in Chumash harmony are predicted to alter their production in accordance with the harmonic domain in which they occur, either apical [⁄] in words like /k-sunon-us/ → [ksu⁄o⁄us] ‘I obey him’ or laminal [¤] in words like /k-sunon-œ/ → [kœu¤o¤œ] ‘I am obedient’. TTCO is identified as tip-up (↑) for apical and tip-down (↓) for laminal. As non-sibilant coronals do not contrast on this dimension in Chumash, they are not perceived as distinct.

(22) Gestural extension under strict locality

     apical:   k-su⁄o⁄-us    TTCO ↑ (tongue tip up)
     laminal:  k-œu¤o¤-œ     TTCO ↓ (tongue tip down)

Gafos argues that the strict locality view of consonant harmony explains the absence of other types of place harmonies. Major place gestures that define dorsal and labial consonants cannot spread across vowels (contra Shaw 1991) without serious interruption of the vowel gestures. Minor features such as tongue tip position can. Tier-based locality is unable to adequately explain why only Coronal, and not Labial and Dorsal, nodes can spread in feature geometry. In addition, the restriction of harmony to subclasses of coronals, such as sibilant fricatives and affricates, is explained, as these segments involve contrast along the tongue orientation dimension.

In conclusion, spreading approaches to harmony involve the spreading of a feature or the extension of a gestural parameter over other vowels and consonants. The non-participation of these consonants receives two explanations. In the autosegmental framework, it is due to a version of feature underspecification (chapter 7: feature specification and underspecification) and tier-based locality, allowing for certain kinds of harmony interactions between specific features. In a gestural framework (Gafos 1999), transparency is illusory – articulators perform the harmonic gestures, but have little impact on consonants and vowels that do not involve those articulators, or for which changes in the articulation are non-contrastive and hence perceptually non-distinct.

3.1.2 Challenges to the spreading approach

While the analysis of consonant harmony as feature spreading or gestural extension seems appropriate for characterizing retroflex harmony and some sibilant harmony cases, it encounters several challenges when applied to a fuller typology of consonant harmony systems as outlined in §2. Gafos (1999) assumes that coronal harmony is the only type of consonant harmony. Shaw’s (1991) typology of consonant harmony identifies only laryngeal harmony as another possibility.6 Laryngeal features primarily distinguish among obstruents. As vowels and sonorants are inherently voiced and unspecified for laryngeal features (Itô and Mester 1986), laryngeal harmonies can operate between laryngeal tiers specified only on obstruents, thereby respecting tier locality. Gafos (1999) does not explicitly discuss laryngeal harmony within the Articulatory Locality model.

However, the larger typology outlined in §2 prompts Hansson (2001a) and Rose and Walker (2004) to conclude that autosegmental spreading is inadequate as a general model of consonant harmony. Their arguments rest on several key properties of consonant harmony not shared with vowel harmony and vowel–consonant harmony, as well as predictions that some spreading models make about the participation of intervening segments. I focus on two main properties here: (i) no blocking and transparency, and (ii) similarity of target and trigger. Hansson (2001a) also notes the lack of sensitivity to prosody and regressive directionality as defining properties of consonant harmony, but the prosody insensitivity may be due to other factors. Regressive directionality is a strong tendency, but progressive directionality is also observed for consonant harmony. Furthermore, regressive directionality is not exclusive to consonant harmony; it has also been observed for vowel harmony (Hyman 2002) and some forms of vowel–consonant harmony – i.e. emphasis harmony exhibits more restrictions and blocking when progressive than when regressive (Watson 1999).

6 Shaw does identify other harmonies, such as labial, but these are dissimilatory morpheme structure constraints or morphological affixation, rather than true consonant harmony as defined in this chapter.

3.1.3 Blocking and transparency

Consonant harmony differs from other types of harmony with respect to blocking effects and transparency. If nasal consonant harmony is compared with nasal vowel–consonant harmony, there are two key differences with respect to the participation of segments. First, in nasal consonant harmony, nasal consonants harmonize with voiced stops or approximant consonants across other consonants and vowels, even obstruents; in contrast, nasal vowel–consonant harmony shows blocking effects, usually by the same segments that are skipped in nasal consonant harmony. Second, intervening vowels do not show nasalization in nasal consonant harmony, whereas they make the best targets in nasal vowel–consonant harmony and are generally not skipped. If nasal vowel–consonant harmony involves autosegmental spreading or gesture extension, how does one explain the differences with nasal consonant harmony? Nasal consonant harmony does not appear to behave as if spreading of [nasal] is involved.

A similar argument can be applied to two other types of consonant harmony. Laryngeal harmony shows no extension of voicing/devoicing or glottalization over intervening segments. Dorsal harmony shows no effects on intervening vowels, despite the fact that uvulars routinely lower adjacent vowels. If harmony operates as advocated by Gafos (1999), with gestures extended across other segments, these facts are unexpected. If tier-based locality is the explanation, it is hard to give a reason for the neutrality of contrastive voiced and voiceless fricatives in a laryngeal harmony in which stops harmonize for voicing, as is the case in Chaha.7

7 One solution might be to use the feature [spread glottis] to characterize fricatives and the feature [voice] for stops (Vaux 1998). However, voiceless stops become voiced preceding voiced stops and voiced fricatives alike in Chaha (Rose and Walker 2004).

3.1.4 Similarity

The concept of similarity is implicitly recognized in autosegmental spreading analyses of sibilant harmony, as spreading occurs only between segments specified for the spreading feature. However, it is not a formalized aspect of spreading theory. Hansson (2001a, 2001b) and Rose and Walker (2004) propose that similarity is the driving factor in consonant harmony, and has its functional roots in speech production. For example, sibilants are highly similar to one another, and it is hypothesized that production is eased if they match for the position of the tongue tip-blade. Similar, but different, consonants present production difficulties that are manipulated in tongue-twisters and emerge as speech errors in both natural and experimentally induced situations (Fromkin 1971; Shattuck-Hufnagel and Klatt 1979; Frisch 1996; Rose and King 2007; Walker 2007; Kochetov and Radišić 2009). Nasal stops harmonize with oral sonorants or voiced stops, which differ minimally from nasals. Voicing harmony occurs between obstruents, but is usually restricted to stops, excluding fricatives. Homorganicity further contributes to similarity; some laryngeal and nasal harmonies operate only between homorganic segments.8 All cases of harmony involve strong similarity between the harmonizing segments, even in ways that local assimilations do not. For example, while local voicing assimilation operates between all obstruents, voicing harmony may be restricted to a sub-type of obstruents based on manner.

8 Hansson (2007a) has argued that secondary articulation consonant harmonies may have a diachronic explanation related to (re)interpretation of C–V co-articulation, but similarity at the level of the secondary articulation is still observed.

Rose and Walker (2004) determine similarity using the metric developed in Frisch et al. (2004), wherein similarity is assessed on the basis of shared natural classes of distinctive features in a given language. The numbers of shared and unshared natural classes of two consonants are compared. Both the size and contrastiveness of the segment inventory contribute to the similarity ratings. Natural classes, which incorporate the notion of contrastiveness, are better able to predict gradient phonotactics and capture major class subregularities than are models based simply on distinctive feature specification. However, see Mackenzie (2005, 2009) for some criticisms of this metric.
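A worked sketch of the natural classes computation may be helpful here. The four-consonant feature table below is a toy inventory invented for this illustration; the metric itself follows the shared/(shared + unshared) ratio just described, with natural classes defined as the segment sets picked out by conjunctions of feature values.

    # Sketch: the Frisch et al. (2004) natural classes similarity metric,
    # similarity(a, b) = shared classes / (shared + unshared classes),
    # computed over a toy inventory (not any actual language).
    from itertools import combinations

    FEATURES = {
        "m": {"son": "+", "nas": "+", "lab": "+", "voi": "+"},
        "b": {"son": "-", "nas": "-", "lab": "+", "voi": "+"},
        "p": {"son": "-", "nas": "-", "lab": "+", "voi": "-"},
        "n": {"son": "+", "nas": "+", "lab": "-", "voi": "+"},
    }
    FEATURE_NAMES = ["son", "nas", "lab", "voi"]

    def natural_classes(features):
        """All non-empty segment sets definable by feature-value conjunctions."""
        feats = [(f, v) for f in FEATURE_NAMES for v in "+-"]
        classes = set()
        for r in range(1, len(feats) + 1):
            for combo in combinations(feats, r):
                cls = frozenset(s for s, fv in features.items()
                                if all(fv.get(f) == v for f, v in combo))
                if cls:
                    classes.add(cls)
        return classes

    def similarity(a, b, classes):
        shared = sum(1 for c in classes if a in c and b in c)
        unshared = sum(1 for c in classes if (a in c) != (b in c))
        return shared / (shared + unshared)

    classes = natural_classes(FEATURES)
    print(similarity("m", "b", classes))  # nasal vs. voiced stop: higher
    print(similarity("m", "p", classes))  # nasal vs. voiceless stop: lower

On this toy inventory the nasal/voiced-stop pair comes out markedly more similar than the nasal/voiceless-stop pair, in line with the observation above that nasal consonant harmony targets voiced stops rather than voiceless ones.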

3.2 Correspondence

Given these observations about the typology of consonant harmony, Hansson (2001a) and Rose and Walker (2004), based on Walker (2000a, 2000c), developed an account of consonant harmony within Optimality Theory (OT), termed “agreement-by-correspondence.” A correspondence relationship is created between similar segments, expressed as Corr-C↔C constraints (indicated in the diagram in (23) by co-indexation). This is reminiscent of Zuraw’s (2002) aggressive reduplication model, although that model does not encode similarity directly. Crucially, there is no autosegmental feature spreading between the segments, so their feature specifications are distinct:

(23)
      Cx  V  C  V  Cx
      |            |
     [F]          [F]

The Corr-C↔C constraints are arranged in a fixed implicational hierarchy from most similar to least similar, for example Corr-Tʰ↔Tʰ >> Corr-Tʰ↔T >> Corr-Kʰ↔T (Rose and Walker 2004: 500). Separate Ident-CC constraints require the corresponding consonants to agree for a given feature. Input–output faithfulness constraints are placed between the Corr-C↔C constraints to achieve harmony of different similarities, or below them to produce full harmony. The following tableau illustrates an example of sibilant harmony in Sidaama for the word /œalak-is/ → [œalak-iœ]. Corr-s↔œ refers to anterior and non-anterior fricative pairs, while Corr-t↔œ refers to anterior stop and non-anterior fricative combinations. Candidate (24a) has a CC-correspondence relationship (indicated by the subscript x on the output sibilant consonants) and sibilant agreement, thereby satisfying the two high-ranked constraints. This candidate violates Ident-OI[ant], due to the change /s/ → [œ]. Ident-OI[ant] is violated by segments that alter an input [anterior] specification in the output. Candidate (24b) has no correspondence relationship between /s/ and /œ/, indicated by the different subscripts x and y. Due to the lack of CC-correspondence, this candidate does not violate Ident-CC[ant]. Candidate (24c), on the other hand, does have sibilants in a CC-correspondence relationship. The sibilants do not agree for anteriority, thereby violating Ident-CC[ant]. The [anterior] feature is used here, although other features such as [distributed] or [Tongue Tip Constriction Orientation] are also possible.

(24)
        /œalak-is/       Ident-CC[ant]   Corr-s↔œ   Ident-OI[ant]   Corr-t↔œ
   ☞ a. œxalak-iœx                                         *
      b. œxalak-isy                          *!
      c. œxalak-isx           *!

No correspondence relationship is established between the fricative and the voiceless stop /k/, or between the fricative and the vowels, as these sounds are not sufficiently similar. Other work analyzing coronal harmony systems as involving corresponding segments or feature copy includes Clements (2001) and McCarthy (2007).

The correspondence-based approach to consonant harmony allows similar consonants to agree at a distance; transparent segments are those that are not similar enough to participate in the harmony. No blocking is predicted, as lack of harmony is due either to the lack of correspondence between intervening segments or to its low ranking. This approach sets consonant harmony apart from vowel harmony and vowel–consonant harmony in using a different analytical mechanism.9 A more accurate typology of consonant harmony has led to alternate analytical devices, using correspondence-based relations rather than autosegmental spreading. The assumption that all harmony systems are alike and therefore subject to the same type of analysis has also been called into question, representing a significant departure in the analysis of consonant harmony vs. other harmony systems.

9 Krämer (2001, 2003) develops a surface correspondence approach for vowel harmony, with adjacency defined at a moraic or syllabic level. Pulleyblank (2002) offers a different perspective that accounts for both vowel and consonant harmony using a “no-disagreement” harmony-driver (see also Archangeli and Pulleyblank 2007).
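For concreteness, the evaluation in (24) can be simulated with a small script: each constraint maps a candidate to a violation count, and strict domination is modeled by lexicographic comparison of violation profiles. The constraint implementations below are deliberately simplified stand-ins for Ident-CC, Corr-C↔C, and Ident-OI, and ‘S’ again replaces œ.

    # Sketch: evaluating the candidates of tableau (24). Candidates are
    # (output string, in-correspondence?) pairs; toy constraint definitions.

    INPUT = "Salak-is"

    def ident_cc_ant(cand):
        out, corresponding = cand
        sibs = [c for c in out if c in "sS"]
        # violated if corresponding sibilants disagree in anteriority
        return 1 if corresponding and len(set(sibs)) > 1 else 0

    def corr_s_S(cand):
        out, corresponding = cand
        # violated if the word's two sibilants are not in correspondence
        return 0 if corresponding else 1

    def ident_oi_ant(cand):
        out, _ = cand
        # one violation per output sibilant that differs from the input
        return sum(1 for i, o in zip(INPUT, out) if i in "sS" and i != o)

    RANKING = [ident_cc_ant, corr_s_S, ident_oi_ant]

    CANDIDATES = {
        "a. Salak-iS (corr.)":    ("Salak-iS", True),
        "b. Salak-is (no corr.)": ("Salak-is", False),
        "c. Salak-is (corr.)":    ("Salak-is", True),
    }

    def profile(cand):
        return tuple(con(cand) for con in RANKING)

    winner = min(CANDIDATES, key=lambda name: profile(CANDIDATES[name]))
    for name, cand in CANDIDATES.items():
        print(name, profile(cand))
    print("winner:", winner)   # a. Salak-iS, as in (24)

Tuple comparison in Python is lexicographic, so a single violation of a higher-ranked constraint outweighs any number of violations of lower-ranked ones, which is exactly the strict domination logic of the tableau.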

3.2.1 Challenges to the correspondence approach

Despite the advances of the correspondence approach in unifying the typology of consonant harmony and setting it apart from other types of harmonies, challenges to this model have arisen. In the arena of coronal harmony, there is still debate over whether correspondence is the appropriate mechanism. McCarthy (2007) argues that Chumash harmony should be analyzed via correspondence, as it shows clear differences from local assimilations and dissimilations. Arsenault and Kochetov (forthcoming) also support the correspondence approach in their analysis of sibilant and retroflex harmony in Kalasha. They argue that since coronal harmony in Kalasha is restricted to apply only between consonants with the same manner of articulation, this lends support to the correspondence approach, which formally encodes similarity. Spreading approaches would need to explain why harmony operates only between consonants of like manner.

Gallagher and Coon (2009) nevertheless argue that correspondence is appropriate for harmonies that require complete identity between consonants, but not for those that induce limited featural agreement, such as most sibilant harmonies. Gallagher and Coon focus on harmony data from Chol, a Mayan language of Mexico. The Chol pattern is an interaction between laryngeal harmony and coronal strident harmony. Total identity (25a) between consonants is required in two cases: (i) two ejectives in a root or (ii) two plain stridents. If the two consonants differ in terms of laryngeal features (ejective and plain), then only strident harmony is enforced (25b):

(25)
     a. Total identity
        Plain stridents:  *ts-s     sus       ‘scrape’
                          *s-Œ      ŒiŒ       ‘older sister’
        Ejectives:        *k’-p’    k’ok’     ‘healthy’
                          *tj’-ts’  tj’otj’   ‘snail’
                                    Œ’iŒ’     ‘blood’
     b. Strident harmony: *ts’-œ    ts’is     ‘sew’
                          *s-Œ’     œuhŒ’     ‘thief’

Strident harmony is always enforced, regardless of laryngeal specification, but laryngeal harmony requires complete identity. Gallagher and Coon’s analysis requires that similar consonants (those that share certain features) are “linked” (i.e. correspond), and an identity constraint requires them to be completely identical. Ejectivity renders consonants more similar than stridency does. Although the proposal accounts for the particular case of total identity seen in Mayan languages,10 it does not extend to other cases of consonant harmony outlined in §2, which show partial identity effects but cannot receive a spreading analysis due to the transparency of the intervening segments. Moreover, it is not clear why ejectivity in particular should require total identity between segments. Laryngeal harmonies are often restricted to a subset of obstruents (stops), and homorganicity is also frequently involved. This signals that more research on how to define similarity is required.

10 Indeed, the analysis predicts that combinations such as /ts ts’/ are acceptable, as they disagree for ejectivity, when in fact they are not attested.


Despite various criticisms, the correspondence approach to harmony has stimulated new areas of research in the analysis of long-distance assimilations and has pushed researchers to examine languages in more detail, to conduct corpus studies of morpheme structure constraints, and to investigate harmony from an experimental angle.

3.3 The role of contrast in consonant harmony

The concept of contrast (chapter 2: contrast) has long played a role in autosegmental spreading analyses of consonant harmony, specifically in determining feature specification. Steriade (1987b) and Shaw (1991) rely on the fact that only sibilants contrast for [anterior] to explain the transparency of stops and sonorants to sibilant harmony. In the correspondence model of harmony, the role of contrast is not emphasized. Although Hansson (2001a) and Rose and Walker (2004) mention contrast, its role in determining harmony systems does not figure prominently in their model, except indirectly via the natural classes model of computing similarity (Frisch 1996; Frisch et al. 2004), which Rose and Walker (2004) adopt. Yet recent research has returned to the issue of contrast, both as a means of constraining harmony and in promoting contrast as the driving force behind consonant harmony.

Mackenzie (2005, 2009) argues that similarity in consonant harmony should be formalized on the basis of contrastive featural specifications determined by a language’s inventory. Segments that are similar to one another in their contrastive specifications, not necessarily segments that are most similar phonetically, interact in harmony. In the harmony system of Bumo Izon, an Ijoid language, voiced implosives and voiced stops may not co-occur, with two exceptions: /g/ and /ɠ͡ɓ/ (Efere 2001); these sounds freely co-occur with stops of the opposite pulmonic value: [eúgó] ‘to pursue’ or [ɠ͡ɓódaɠ͡ɓóda] ‘rain (hard)’. Mackenzie points out that these sounds lack a contrastive counterpart: there is no /ɠ/ or /gb/. The natural classes model (Frisch et al. 2004) of calculating similarity does take phonemic inventory into account when computing natural classes and similarity, but it fares poorly with asymmetric inventories in which contrastive counterparts are missing. Hansson (2004) notes this problem with respect to laryngeal harmony in Ngizim, where implosives do not participate. Mackenzie’s solution is to determine similarity via pairwise contrasts that are partially language-specific. If two sounds are not specified for a feature due to lack of contrast, they do not participate in the harmony, which references presence of features.

Hansson (2008) examines the role of contrast in the typology of vowel harmony and consonant harmony and notes that only consonant harmony has cases of “symmetric neutralization,” in which a lexical [±feature] contrast emerges in affixes only with neutral roots. The regressive sibilant harmony systems of Chumash and Navajo involve neutralization of contrasts on both roots and affixes, as either /s/ or /œ/ can trigger harmony as long as it is the rightmost sibilant in the word. Hansson argues that systems of this type are attested in consonant harmony, but not vowel harmony, because they are recoverable. The loss of contrast between /s/ and /œ/ is minimal in the consonant inventory, and affects only a small subclass of consonants. The learner has a large number of contexts provided by neutral roots to compensate for the neutralization. Hansson (2001a) notes that the specific combination of symmetric neutralization with absolute directionality of assimilation creates problems for some models of phonology, such as Declarative Phonology and standard Optimality Theory.

Finally, Gallagher (2008, 2010) argues for a notion of laryngeal contrast rooted in dispersion theory, with a more global view of contrast within a language’s lexicon. A constraint, Laryngeal Distance, penalizes contrasts between roots that have only one laryngeally marked stop and those with two. Plain stops are unmarked for laryngeal features. Gallagher argues that the distinction between a root with one laryngeal feature and a root with two laryngeal features is perceptually weak. Avoidance can play out as harmony (only two ejectives [k’ap’a] or two plain stops [kapa] are allowed) or as dissimilation (only one ejective and one plain stop [k’apa], or two plain stops [kapa], are allowed).

In conclusion, contrast remains a powerful and debated concept in the study of consonant harmony, one that is sure to resonate in future research.

4 Experimental approaches to consonant harmony

As debate about the most appropriate analysis of consonant harmony has come to center on hypotheses about its grounding in articulation or perception, experimental studies of consonant harmony have been conducted (chapter 96: experimental approaches in theoretical phonology). The correspondence approach to consonant harmony proposes that harmony is grounded in production difficulties caused by phonological planning and the similarity of interacting consonants. Several experiments have been undertaken to test this hypothesis in the area of speech errors.

Walker (2007) conducted an experimental study inducing speech errors. Consonants that are more similar and known to participate in nasal harmony, such as nasals and voiced stops, were predicted to be more prone to speech errors than other combinations. Nonce words with combinations of nasals and voiced stops, and of nasals and voiceless stops, were tested with English speakers, as English is not reported to have nasal consonant harmony. Indeed, more errors arose with nasal–voiced stop combinations than with nasal–voiceless stop combinations. Walker concluded that nasal harmony could indeed be grounded in difficulties with the production of similar sounds.

Kochetov and Radišić (2009) performed a similar experiment on combinations of four sibilant fricatives /s sʲ œ œʲ/ in Russian in a repetition task performed at a fast rate of speech. Errors (assessed by examining acoustic effects of production) were observed for both primary place of articulation and secondary articulation. The primary-place assimilation errors were generally regressive and involved /s/ changing to [œ], reflective of the “palatal bias” effect reported in other speech-error studies on English. Although Russian is not reported to have sibilant consonant harmony, the speech-error effect is similar to that found in harmony languages, supporting Hansson’s (2001a, 2001b) observation of the correlation. However, Kochetov and Radišić (2009) also note that consonants differing only in secondary articulation did not participate in as many errors, and that those errors were progressive. This seems to lend support to Hansson’s (2007a) contention that speech production difficulties may not underlie secondary articulation harmonies. Kochetov and Radišić (2009) speculate that feature spreading or gestural extension may be a better analysis for these cases, paralleling vowel harmony.

Rose and King (2007) examined the impact of harmony constraints on speech errors in languages observed to have laryngeal harmony, namely Chaha and Amharic. They found higher speech-error rates for certain sequences that violated laryngeal harmony than for those that did not. In particular, the researchers compared the laryngeal pairs with consonant pairs that were also highly similar and infrequent in verb roots, but did not violate any constraints; these pairs did not show comparably high error rates. Rose and King conclude that laryngeal harmony is not only based on production difficulties, but also, once encoded grammatically, triggers more errors when speakers encounter sequences that violate it.

Walker et al. (2008) investigated coronal harmony in Kinyarwanda by means of electromagnetic articulography. Kinyarwanda exhibits retroflex harmony, previously reported in the literature as an alveolar–post-alveolar sibilant harmony (Walker and Mpiranya 2006). Harmony is blocked by alveolar stops and affricates, retroflex stops, and palatal consonants. Intervening vowels and non-coronal consonants do not block the harmony and are not perceptibly affected. The blocking effect is suggestive of spreading, while the transparent vowels and non-coronals point to a correspondence analysis. This is important, since retroflex harmony is recognized both as a type of consonant harmony (Arsenault and Kochetov, forthcoming) and as a possible vowel–consonant harmony for which spreading may be a more appropriate analysis than correspondence (Gafos 1999; Hansson 2001a; Rose and Walker 2004). Walker et al. (2008) found evidence that the harmonizing retroflex posture persists during apparently transparent non-coronal consonants when they occur between harmonizing fricatives. Such a result is more supportive of a spreading or gestural analysis, in line with Gafos’s (1999) Articulatory Locality model. Results were not conclusive for the intervening coronals. Research of this nature should be conducted on languages that have robust non-retroflex sibilant harmony to help address the question of whether spreading or correspondence is the more appropriate analysis. At the same time, this raises the issue of whether gradient phonetic articulatory results should be used to determine phonological representations.

Finally, Gallagher (2010) uses perceptual experiments to test the validity of her contrastive perceptual distance model of laryngeal harmony. American English subjects who listened to pairs of Bolivian Quechua words with combinations of ejectives and plain stops had the greatest difficulty perceiving contrasts between two words where an ejective and a plain stop contrasted with two ejectives ([k’apa] vs. [k’ap’a]). The two harmonic forms wherein two ejectives contrast with two plain stops ([k’ap’a] vs. [kapa]) were the easiest to perceive, with [k’apa] vs. [kapa] occupying an intermediate position. It is argued that these results provide support for a perceptual motivation for consonant harmony, with harmony viewed as a response to avoid difficult perceptual contrasts.

Experimental research on consonant harmony using a variety of techniques may help illuminate the causes of harmony (perceptual, articulatory, or both) and the best phonological analysis of this phenomenon. It may also help sort out whether consonant harmony should be viewed as a unified phenomenon or as several disparate phenomena that share the common characteristic of assimilation at a distance.

5 Conclusion

Consonant harmony has intrigued researchers for many years, due to its tantalizing similarities to other types of harmony. Recognizing what distinguishes it from other harmony systems has nevertheless pushed analysis in new directions. Two competing approaches have been advanced: spreading of features or gestural extension, and correspondence between segments requiring featural matching. Both analyses have positive attributes, but neither is without challenges. It is also possible that spreading is appropriate for some harmony systems but correspondence for others, as has been suggested by different researchers (Hansson 2001a; Gallagher and Coon 2009; Kochetov and Radišić 2009). Research in experimental directions may help shed light on which analysis is ultimately correct and whether altogether new analyses will eventually emerge.

ACKNOWLEDGMENTS

Thanks to Beth Hume, Marc van Oostendorp, and two anonymous reviewers for their instructive comments on various aspects of this chapter. All errors are my own.

REFERENCES Alderete, John & Stefan A. Frisch. 2007. Dissimilation in grammar and the lexicon. In de Lacy (2007), 379–398. Andersen, Torben. 1988. Consonant alternation in the verbal morphology of Päri. Afrika und Übersee 71. 63–113. Andersen, Torben. 1999. Consonant alternation and verbal morphology in Mayak (Northern Burun). Afrika und Übersee 82. 65–97. Ao, Benjamin. 1991. Kikongo nasal harmony and context-sensitive underspecification. Linguistic Inquiry 22. 193–196. Applegate, Richard B. 1972. Ineseño Chumash grammar. Ph.D. dissertation, University of California, Berkeley. Archangeli, Diana. 1988. Aspects of underspecification theory. Phonology 5. 183–207. Archangeli, Diana & Douglas Pulleyblank. 1987. Minimal and maximal rules: Effects of tier scansion. Papers from the Annual Meeting of the North East Linguistic Society 17. 16 –35. Archangeli, Diana & Douglas Pulleyblank. 1994. Grounded phonology. Cambridge, MA: MIT Press. Archangeli, Diana & Douglas Pulleyblank. 2007. Harmony. In de Lacy (2007), 353–378. Arsenault, Paul & Alexei Kochetov. Forthcoming. Retroflex harmony in Kalasha: Agreement or spreading? Papers from the Annual Meeting of the North East Linguistic Society 39. Berg, Thomas & Ulrich Schade. 2000. A local connectionist account of consonant harmony in child language. Cognitive Science 24. 123–149. Booysen, Jacobus M. 1982. Otjiherero: ’n Volledige grammatika met oefeninge en sleutels in Afrikaans. Windhoek: Gamsberg. Breeze, Mary. 1990. A sketch of the phonology and grammar of Gimira (Benchnon). In Richard Hayward (ed.) Omotic language studies, 1–67. London: School of Oriental and African Studies. Browman, Catherine P. & Louis Goldstein. 1986. Towards an articulatory phonology. Phonology Yearbook 3. 219–252.


Browman, Catherine P. & Louis Goldstein. 1989. Articulatory gestures as phonological units. Phonology 6. 201–251.
Browman, Catherine P. & Louis Goldstein. 1990. Tiers in articulatory phonology, with some implications for casual speech. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 341–376. Cambridge: Cambridge University Press.
Brown, Jason. 2008. Theoretical issues in Gitksan phonology. Ph.D. dissertation, University of British Columbia.
Burrow, Thomas & Sudhibhushan Bhattacharya. 1970. The Pengo language: Grammar, texts, and vocabulary. Oxford: Clarendon Press.
Clements, G. N. 1980. Vowel harmony in nonlinear generative phonology: An autosegmental model. Indiana University Linguistics Club.
Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225–252.
Clements, G. N. 1991. Place of articulation in consonants and vowels: A unified theory. Working Papers of the Cornell Phonetics Laboratory 5. 77–123.
Clements, G. N. 2001. Representational economy in constraint-based phonology. In T. A. Hall (ed.) Distinctive feature theory, 71–146. Berlin & New York: Mouton de Gruyter.
Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In Goldsmith (1995), 245–306.
Cohn, Abigail C. 1992. The consequences of dissimilation in Sundanese. Phonology 9. 199–220.
Cook, Eung-Do. 1983. Chilcotin flattening. Canadian Journal of Linguistics 28. 123–132.
Cook, Eung-Do. 1993. Chilcotin flattening and autosegmental phonology. Lingua 91. 149–174.
Davidson, Joseph. 1977. A contrastive study of the grammatical structures of Aymara and Cuzco Kechua. Ph.D. dissertation, University of California, Berkeley.
Dayley, Jon P. 1985. Tzutujil grammar. Berkeley: University of California Press.
de Lacy, Paul (ed.) 2007. The Cambridge handbook of phonology. Cambridge: Cambridge University Press.
Dell, Gary S., Lisa K. Burger & William R. Svec. 1997. Language production and serial order: A functional analysis and a model. Psychological Review 104. 123–147.
Dereau, Léon. 1955. Cours de kikongo. Namur: A. Wesmael-Charlier.
Ebert, Keren. 1979. Sprache und Tradition der Kera (Tschad), vol. 3: Grammatik. Berlin: Reimer.
Efere, Emmanuel. 2001. The pitch system of the Bumo dialect of Izon. UBC Working Papers in Linguistics 4. 115–259.
Frisch, Stefan A. 1996. Frequency and similarity in phonology. Ph.D. dissertation, Northwestern University.
Frisch, Stefan A., Janet B. Pierrehumbert & Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22. 179–228.
Fromkin, Victoria A. 1971. The non-anomalous nature of anomalous utterances. Language 47. 27–52.
Gafos, Adamantios I. 1998. Eliminating long-distance consonantal spreading. Natural Language and Linguistic Theory 16. 223–278.
Gafos, Adamantios I. 1999. The articulatory basis of locality in phonology. New York: Garland.
Gallagher, Gillian. 2008. The role of contrast in laryngeal cooccurrence restrictions. Proceedings of the West Coast Conference on Formal Linguistics 27. 177–184.
Gallagher, Gillian. 2010. The perceptual basis of long-distance laryngeal restrictions. Ph.D. dissertation, MIT.
Gallagher, Gillian & Jessica Coon. 2009. Distinguishing total and partial identity: Evidence from Chol. Natural Language and Linguistic Theory 27. 545–582.
Goad, Heather. 1997. Consonant harmony in child language: An optimality-theoretic account. In S. J. Hannahs & Martha Young-Scholten (eds.) Focus on phonological acquisition, 113–142. Amsterdam & Philadelphia: John Benjamins.


Goldsmith, John A. 1979. Autosegmental phonology. New York: Garland.
Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell.
Goldstein, Louis, Marianne Pouplier, Larissa Chen, Elliot L. Saltzman & Dani Byrd. 2007. Dynamic action units slip in speech production errors. Cognition 103. 386–412.
Halle, Morris & Jean-Roger Vergnaud. 1981. Harmony processes. In Wolfgang Klein & Willem J. Levelt (eds.) Crossing the boundaries in linguistics, 1–22. Dordrecht: Reidel.
Hamp, Eric P. 1976. Palatalization and harmony in Gagauz and Karaite. In Walther Heissig, John R. Krueger, Felix J. Oinas & Edmond Schütz (eds.) Tractata altaica: Denis Sinor, sexagenario optime de rebus altaicis merito dedicata, 211–213. Wiesbaden: Harrassowitz.
Hansson, Gunnar Ólafur. 2001a. Theoretical and typological issues in consonant harmony. Ph.D. dissertation, University of California, Berkeley.
Hansson, Gunnar Ólafur. 2001b. The phonologization of production constraints: Evidence from consonant harmony. Papers from the Annual Regional Meeting, Chicago Linguistic Society 37. 187–200.
Hansson, Gunnar Ólafur. 2004. Tone and voicing agreement in Yabem. Proceedings of the West Coast Conference on Formal Linguistics 23. 318–331.
Hansson, Gunnar Ólafur. 2007a. On the evolution of consonant harmony: The case of secondary articulation agreement. Phonology 24. 77–120.
Hansson, Gunnar Ólafur. 2007b. Blocking effects in agreement by correspondence. Linguistic Inquiry 38. 395–409.
Hansson, Gunnar Ólafur. 2008. Effects of contrast recoverability on the typology of harmony systems. In Paul Avery, B. Elan Dresher & Keren Rice (eds.) Contrast in phonology: Perception and acquisition, 55–81. Berlin & New York: Mouton de Gruyter.
Hardman, Martha James, Juana Vásquez & Juan de Dios Yapita (eds.) 1974. The Aymara language project, vol. 3: Outline of Aymara phonological and grammatical structure. Gainesville: Department of Anthropology, University of Florida.
Henderson, John. 1998. Topics in Eastern and Central Arrernte grammar. Ph.D. dissertation, University of Western Australia.
Hulst, Harry van der & Norval Smith. 1982a. Prosodic domains and opaque segments in autosegmental theory. In van der Hulst & Smith (1982b: part II), 311–336.
Hulst, Harry van der & Norval Smith (eds.) 1982b. The structure of phonological representations. 2 parts. Dordrecht: Foris.
Hulst, Harry van der & Norval Smith (eds.) 1988. Features, segmental structure and harmony processes. 2 parts. Dordrecht: Foris.
Hyman, Larry M. 1995. Nasal consonant harmony at a distance: The case of Yaka. Studies in African Linguistics 24. 5–30.
Hyman, Larry M. 2002. Is there a right-to-left bias in vowel harmony? Paper presented at the 9th International Phonology Meeting, Vienna.
Inkelas, Sharon & Yvan Rose. 2008. Positional neutralization: A case study from child language. Language 83. 707–736.
Itô, Junko & Armin Mester. 1986. The phonology of voicing in Japanese: Theoretical consequences for morphological accessibility. Linguistic Inquiry 17. 49–73.
Jenewari, Charles E. W. 1989. Ijoid. In John Bendor-Samuel (ed.) The Niger-Congo languages, 105–118. Lanham, MD: University Press of America.
Kawachi, Kazuhiro. 2007. A grammar of Sidaama (Sidamo), a Cushitic language of Ethiopia. Ph.D. dissertation, State University of New York, Buffalo.
Khumalo, James S. M. 1987. An autosegmental account of Zulu phonology. Ph.D. dissertation, University of Witwatersrand, Johannesburg.
Kisseberth, Charles W. & Mohammad Imam Abasheikh. 1975. The perfect stem in Chimwi:ni and global rules. Studies in African Linguistics 6. 249–266.
Kochetov, Alexei & Milica Radišić. 2009. Latent consonant harmony in Russian: Experimental evidence for agreement by correspondence. In Maria Babyonyshev, Darya Kavitskaya & Jodi Reich (eds.) Proceedings of the 17th Formal Approaches to Slavic Linguistics Meeting, 111–130. Ann Arbor: Michigan Slavic Publications.
Kowalski, Tadeusz. 1929. Karaimische Texte im Dialekt von Troki. Krakow: Polska Akademia Umiejętności.

(1) vowels > laryngeals > glides > liquids > fricatives > obstruent stops

The hierarchy in (1) summarizes a pervasive generalization that emerges from nasal vowel–consonant harmony patterns across languages. However, some fine-tuning may be needed, as discussed in connection with particular cases below. The hierarchy has been suggested to have a phonetic basis, whereby nasalization of segments that are lower on the scale is disfavored for reasons of articulation, aerodynamics, and/or perceptibility (see Walker 2000a and references therein). Maintenance of a system of contrasts has also been suggested to underlie the hierarchy in (1) (e.g. Flemming 2004). This scale shows a striking similarity to the sonority hierarchy, on which see chapter 49: sonority. Nevertheless, they are distinct in the ranking of nasal stops, which are usually situated between liquids and obstruents in the sonority hierarchy, but could plausibly be located at the top (leftmost) end of the scale in (1). Furthermore, consensus is lacking on the placement of laryngeals in the sonority scale. I return to this issue later in this section, in the context of the impedance hierarchy proposed by Hume and Odden (1996); for further discussion, see Cohn (1993a), Gnanadesikan (1995), Boersma (1998, 2003), and Walker (2000a). Following the exemplification of hierarchical effects in the typology, some ways of formalizing them are discussed. The exemplification of nasal vowel–consonant harmony with opaque segments begins with patterns that show a narrower set of blockers and progresses to ones with blocking by more categories.


The dialect of Scottish Gaelic spoken in Applecross (Ross-shire), henceforth “Applecross Gaelic,” displays a nasal harmony in which nasalization is reported to target all segment categories except obstruent stops (Ternes 2006). Nasalization spreads from a stressed nasal vowel. Stress is usually – but not always – assigned to the initial syllable. Progressive nasalization is halted by an obstruent stop. In addition, consonants in the onset of a syllable with a stressed nasal vowel are nasalized, except obstruent stops.

(2) a. Monosyllables
       m:6j)m ‘bare, naked’
       :mã(1 ‘hand’
       mhã)X ‘finger, toe’ (lenited ºhã)X)
       khXÂãk ‘maggot’
       stXãh)2 ‘string’
       tã3 ‘ox, stag’
    b. Polysyllabic words
       ’4Xhãjn ‘root (pl)’
       ’kãnã)6 ‘sand’
       ’m:ã)5jk ‘axe, hatchet’
       ’865h ‘tame’
       ’thX0h7ãX ‘plate’
       ’7ÂnÂ)ºãX ‘grandmother’
       ’kh6h8paxk ‘wasp’
       ’mhnh7tjar ‘minister (clergyman)’
       ’8:njã)ndjan ‘thread’

Transcriptions of nasalized fricatives in Applecross Gaelic follow Ternes; however, the realization of such segments in general has been the subject of debate. Nasalized fricatives present an aerodynamic confound with consequences for perception: an increase in velopharyngeal opening tends to reduce frication, and a decrease in velopharyngeal aperture can reduce perceptible nasalization (chapter 28: the representation of fricatives). Gerfen (1999, 2001) has brought instrumental research to this question for the nasal harmony of Coatzospan Mixtec. See Shosted (2006) and Solé (2007a) for recent reviews of the issues and experimental investigations. Whether and how the gradient trade-offs in realization should be represented in phonology remains an open question. Color and height distinctions in nasal vowels of Applecross Gaelic are a proper subset of those in its oral vowels. The mid-high vowel series /e H o/ is always oral, and, like the obstruent stops, these vowels block nasal harmony. Examples with blocking by [H] are given in (3). All other vowel qualities in the language ([i q u e D a]) can be phonemically oral or nasal and can become nasalized through nasal harmony.

(3) ’8ãjm:HxkHnj ‘to compare’
    ’ã1Hl: ‘angel’
    ’njhHn ‘girl, daughter’
    kha’thXhãnH ‘Catherine’

Blocking of nasal harmony by mid vowels is also attested in Mọ̀bà Yoruba, discussed in §2. The resistance of mid-high vowels to contrastive or contextual nasalization has plausible origins in effects of nasalization on the perception of vowel height. The perceived distance of a height distinction between two oral vowels is reduced when those same vowels are nasalized (e.g. Wright 1986), which could give rise to the restriction of nasalization to a subset of the oral vowel heights in Applecross Gaelic (Homer 1998; Walker 2000a). In general, the number of nasal vowels in a language never exceeds the number of oral vowels, and it is relatively common for one or more mid nasal vowels to be missing relative to the oral inventory of a language. Nasalization may interfere most with the detection of height distinctions involving mid vowels; in addition, perceptual integration of nasalization and height in vowels disfavors mid percepts (Kingston 2007). This suggests the possibility that the descriptive hierarchy in (1) could be moderated by or interact with the effects of contrast, an issue to which I return later in this section. Given that Applecross Gaelic has nasal stops, the question arises whether they too can trigger nasal harmony. When they occur in the onset to a stressed syllable with an oral vowel, they do not.

(4) mur ‘sea’
    mara)v ‘dead person’

These examples do not necessarily demonstrate that triggers of nasal harmony in Applecross Gaelic must be stipulated to exclude nasal stops. It is conceivable that stressed vowels not only trigger nasal harmony when they are nasal, but also block it when they are oral; that is, they can spread nasalization but do not alter their own phonemic oral/nasal quality. A situation of this kind in Guaraní is discussed in §2. I found no examples in Applecross Gaelic with a stressed oral vowel and a following nasal with which to test whether a nasal stop triggers harmony in a following unstressed syllable. The scarcity of such forms is likely because vowel nasalization in Applecross Gaelic in most cases arose historically from a nasal consonant in the vicinity that was either retained, lost, or lenited. In some cases, the nasal consonant is still reflected in the orthography: [tã3] damh ‘ox, stag’, [’8ã)5j6] sàmhach ‘quiet’. A pattern of nasal harmony that includes a supralaryngeal fricative among its blockers is found in Epena Pedee (Saija), a Chocó language of Colombia (Harms 1985, 1994). Nasal vowels trigger progressive nasal harmony, as shown in (5a). Certain consonants in the onset of a syllable that contains a nasal vowel also become nasalized, as will be discussed presently. Of particular relevance is the set of consonants that block progressive nasal harmony, seen in (5b); this includes /s/, which is the only supralaryngeal fricative phoneme in the language, as well as other obstruent phonemes and the trill /r/.1 Within the stem, non-continuant obstruents become prenasalized following a nasal vowel (Harms 1994). The phonemic analysis and phonetic description for these examples follow Harms. Some phonemic forms are constructed on the basis of his orthography, which is close to phonemic.

1. The description of Harms (1985: 16) states that /s/ blocks progressive nasal harmony. The later description by Harms (1994: 8) seems to indicate that /s/ does not always block spreading, but it includes the example [’mhksu] ‘spear’, where it is opaque (1994: 6). Another example, [’8hk8i] ‘sugarcane’ (1994: 5), could be regarded as showing that /s/ does not block spreading, since Harms’s phonemic transcription of this word posits only the first syllable as underlyingly nasal. However, it is also compatible with a treatment in which both the first and last syllables contain nasal vowels underlyingly.


Regressive nasalization occurs within syllables containing a nasal vowel but does not usually transmit beyond the onset.2 Syllables in Epena Pedee are open and begin with a consonant; consonant clusters are infrequent. A consonant that is tautosyllabic with a nasal vowel becomes nasalized, except for [ph th kh Œ] ([p t k] and [r] do not occur preceding a nasal vowel). In contrast to their behavior in the progressive harmony, /s/ is characterized as nasalized when preceding a nasal vowel, and voiced stops become full nasals. Harms analyzes [m n] as allophones of voiced stops; they are not part of the phonemic inventory. Harms includes [ʔ] in the category of segments that do not become nasalized in the onset to a nasal vowel. While nasalization would not be audible during [ʔ], it is possible that the velum is lowered during this segment. The status of [ʔ] in nasal harmony in general is discussed later in this section.

(5) a. /’dãwe/ [’nãSg]3 ‘mother’
       /pe’7i7a/ [pe’9i9ã] ‘guagua (a groundhog-like animal)’
       /khã’ja7a/ [khã’Jã9ã] ‘than’
       /hg’sã(/ [5g’8ã(] ‘stinging ant’
       /hebg’dg/ [hemg’ng] ‘to play’
    b. /khh’sia/ [khh’siH]4 ‘think’
       /’bhãsu/ [’mhksu] ‘spear’
       /’ãŒi/ [’ʔãJŒi] ‘they’
       /’hiphe/ [’5imphe] ‘fish (sp.)’
       /khj’t7a(/ [khj’nt7a(] ‘young man’
       /0’bqsi/ [ʔ0’mbqsi] ‘neck’
       /wãhi’da/ [Sã5h’nda] ‘go (past pl)’
       /’ãhgq/ [’ʔãhIgq] ‘daughter-in-law’
       /’tj(ra/ [’tj(ra]5 ‘pelican’

Blocking of nasal harmony by /r/ is likely due to the aerodynamic and perceptual difficulties that a nasal trill would present (Solé 2002, 2007b). Solé points out that “an open velopharyngeal port would bleed the intraoral pressure required to make a relaxed oscillator vibrate for trills” (2002: 677). In addition, a velopharyngeal opening that was small enough not to impair a trill would likely be of insufficient size to produce perceptible nasalization. However, Solé observes that a tap is compatible with nasalization. This is consistent with the pattern that Epena Pedee displays. The distinct behavior of taps and trills suggests that the category of liquids in the scale of targets for nasal harmony should be segregated into a category that includes taps, flaps, and lateral approximants and another, lower-ranked category that contains trills. A case where taps and laterals are both targets to the exclusion of obstruents occurs in Ịjọ, a Niger-Congo language of Nigeria (Williamson 1965, 1969, 1987).

2. Harms (1985: 16) describes “a minor degree” of nasalization on a vowel that precedes a nasal syllable, but he characterizes it as “so slight” that he does not represent it in transcription.
3. A fricative variant [ß] of /w/ does not block harmony: [’nãSg] ~ [’nãèg].
4. This form is transcribed in Harms (1985) with a prenasalized [s], but later description in Harms (1994) indicates that [s] is not prenasalized following a nasal vowel.
5. Harms (1985: 16) transcribes this form without aspiration of [t]; however, given his description of voiceless stops before a nasal vowel (1985: 15), it is presumably aspirated.


The prenasalization of non-continuant obstruents following a nasal vowel raises questions about their representation. One issue is whether the feature [nasal] is specified for a portion of these segments, and if it is, how the nasal–oral sequence is represented.6 Prenasalized segments have been represented by some researchers as a single root or slot with specifications for both [+nasal] and [−nasal] (e.g. Bivin 1986; Sagey 1990), but Padgett (1995) and Piggott (1997) have analyzed prenasalization as a combination of nasal and oral segments. Steriade (1993) proposes aperture-based representations in which the closure and release phases of a stop can each form a separate anchor for [nasal], in which case a prenasalized stop has [nasal] associated with the closure but not the release. Tied in with Steriade’s representation is a claim that [nasal] is a privative feature (see also Trigo 1993; Steriade 1995), which supports a bipositional representation for prenasalized plosives. Beckman (1999) applies the aperture-based approach to prenasal stops in Guaraní, which may occur at the boundary between a nasal span and an oral span (see §2). While much contemporary research concurs with the need for a sequence of segments or phases within a segment, the specifics of the representation remain at issue. On a related topic, Botma (2009) proposes that postnasalized stops that trigger nasal harmony in Yuhup are underlyingly nasals that have undergone denasalization in a particular context. Epera (also known as

7onhʔ ’kom\5hʔ ’mSgggur’5jnjʔ ’5ggg>ʔ ’ng1]’tuʔ ’nh’tjænjʔ
‘to turn over’ ‘to fly’ ‘swallow’ ‘partridge’ ‘old woman’ ‘he laid it down’8 ‘deep’ ‘over there’ ‘wiggling’ ‘termites’ ‘daughter’ ‘to carry on the back’

8. Rich does not distinguish degrees of stress in her transcription.

Sundanese, an Austronesian language spoken in Western Java, has a progressive nasal harmony that targets only vowels and glottals (Robins 1957; Cohn 1990, 1993a). Nasal stops are the triggers.

(8) Jãhãn ‘wet (active)’
    Jãjr ‘say (active)’
    mh5ãk ‘take sides (active)’
    kumã5ã ‘how?’
    bqI5ãr ‘to be rich’
    nj]js ‘dry (active)’
    Iãjak ‘sift (active)’
    Iãwih ‘sing (active)’
    Ijliat ‘stretch (active)’
    mãrios ‘examine (active)’
    Iibah ‘change (active)’
    IhsHr ‘displace (active)’
    Iãtur ‘arrange (active)’

The status of the laryngeals, [h ʔ], in nasal harmony systems deserves comment. Laryngeals rarely – perhaps never – block nasal harmony (Walker and Pullum 1999; Walker 2000a). Blocking by a glottal stop has been reported for Rejang (Austronesian) (McGinn 1979: 187), but field research by Robert Blust suggests otherwise (Walker and Pullum 1999: 776, n. 17). In Kaiwá (Tupí-Guaraní), nasal harmony transmits through [ʔ] at a normal speech rate, but [ʔ] is reported to block nasal harmony in slow speech (Harrison and Taylor 1971: 17). It would be valuable to verify these descriptions with modern investigative techniques. Across languages, the overwhelming tendency is for nasal harmony to transmit through laryngeals. This has prompted researchers to situate laryngeals above the category of glides in the hierarchy that characterizes cross-language variation in targets of nasal vowel–consonant harmony, as in (1) (Schourup 1972; Piggott 1992; Walker and Pullum 1999). Levi (2005) has proposed a refinement in which laryngeals are situated higher than phonemic glides in particular, that is, higher than glides that are not derived from vowels, as opposed to glides that are the non-syllabic realization of vowels. Laryngeals have sparked discussion about the representation of nasal segments and the definition of the feature [nasal] (chapter 7: feature specification and underspecification; chapter 17: distinctive features; chapter 27: the organization of features). If [nasal] reports to the supralaryngeal node in the feature geometry, then laryngeal segments could not be phonologically nasal, whereas if [nasal] were a dependent of the root node, any segment could potentially be specified for [nasal] in the phonology (Cohn 1990, 1993a). Cohn assumes the former representation, but leaves the issue open for further research. Under the assumption that [nasal] is dependent on the supralaryngeal node, laryngeals are phonetically nasalized in the context of nasal segments, but they do not participate in nasal processes in the phonology or have the capacity to show nasal contrasts. Piggott (1992) takes a different perspective, in which laryngeals can be phonologically [nasal], and may therefore undergo nasal harmony. In the geometry that he assumes, [nasal] can be dependent on a soft palate node, which reports to the root. He supposes, however, that a nasalized glottal stop is not phonetically possible, because of its lack of egressive nasal airflow. Accordingly, Piggott postulates a feature co-occurrence restriction over [nasal] and [constricted glottis] that applies at the later level of phonetic implementation. Walker and Pullum (1999) also contend that laryngeals can be phonologically specified for [nasal]. Support they cite for this claim includes patterns in which [h̃] triggers nasal harmony (e.g. Arabela) or contextual vowel nasalization. For two such languages (Kwangali, Seimat), they note evidence for a phonemic contrast between [h] and [h̃]. In addition, Walker and Pullum observe that the scarcity of blocking of nasal harmony by laryngeals points to their being highly compatible with acquired nasalization, a tendency that can be straightforwardly captured if laryngeals can be phonologically nasal. To allow the possibility of nasalized laryngeals, they conclude that [nasal] should be defined as corresponding to an open velopharyngeal port rather than requiring nasal airflow (see also Cohn 1993a; Padgett 1995; Hume and Odden 1996). Therefore, a glottal stop can be nasalized by virtue of the lowered velum posture even though there is no airflow through the nasal cavity.
In accordance with this perspective, laryngeals are transcribed as nasalized in this chapter when they occur within a nasal harmony span. It is plausible that [ʔ] at the periphery of a nasal harmony span, e.g. in Epena Pedee and Arabela, should likewise be treated as specified for [nasal]. Because of the lack of airflow, nasalization will not be perceptible during [ʔ̃]. This makes it unlikely


that glottal stops will show a phonemic contrast in nasality (but see Walker and Pullum 1999 for a hypothesized scenario). The language data presented above illustrate the scale of targets in patterns of nasal harmony that show blocking. All of the particular cases considered show progressive harmony, with regressive harmony in the syllable in some instances. However, some patterns show more robust regressive harmony, as discussed in §4. The scalar effects for targets have been analyzed by Walker (2000a) as the result of a hierarchy of feature co-occurrence constraints, abbreviated as in (9) (excluding laryngeals, discussed below). The constraint *Nas-ObstruentStop prohibits a lowered velum during a segment that also has the features that characterize an obstruent stop, a combination that is highly difficult, if not impossible, to produce. Nasalization during an articulation with stoppage in the oral cavity usually results in a sonorant stop (e.g. [m n J I], etc.).

(9) Nasalized segment constraint hierarchy
    *Nas-ObstruentStop >> *Nas-Fricative >> *Nas-Liquid >> *Nas-Glide >> *Nas-Vowel

In addition to nasal harmony surveys, this markedness scaling gains support from other facts about nasal patterns, such as segment inventories and nasal place assimilation (e.g. Pulleyblank 1989; Cohn 1993a; Padgett 1995). The constraint hierarchy can be used to obtain cross-language differences in the sets of targets and opaque segments by ranking a harmony-driving constraint at different breaks in the hierarchy. Feature co-occurrence constraints that dominate the harmony driver will correspond to blocking segments, and ones that are dominated will correspond to targets. Walker suggests that the hierarchy of constraints is grounded in factors of articulatory compatibility, aerodynamic difficulty, and ease of perceptibility. However, some researchers have observed that conflating these factors is problematic with regard to inventory contrasts and laryngeals. Ní Chiosáin and Padgett (1997) point out that with respect to articulatory compatibility, a constraint against [ʔ̃] is expected to be low-ranked, likely at the same level as or even below *Nas-Vowel. However, because of the lack of perceptibility of nasalization during a glottal stop, [ʔ̃] vs. [ʔ] makes a poor phonemic contrast. If the hierarchy in (9) included *[ʔ̃] at or near the bottom, it would correctly predict the lack of blocking by [ʔ] in nasal harmony, but it would not account for the disfavored status of a contrast between [ʔ̃] and [ʔ]. Ní Chiosáin and Padgett propose that *[ʔ̃] is low-ranked in the articulatory markedness hierarchy, but attribute the contrastive distribution to the activity of a separate constraint Contrast[nas], which penalizes a [ʔ̃]/[ʔ] distinction. Flemming (2004) goes a step further, suggesting that blocking effects in nasal harmony are a consequence of constraints governing the maintenance of contrasts, a possibility also noted by Ní Chiosáin and Padgett (chapter 2: contrast). Under Flemming’s formalization, the constraint hierarchy in question scales nasalized segments according to their proximity to a nasal stop. Thus nasalized fricatives are conceived as highly indistinct from nasal stops, but nasalized vowels and laryngeals are at the upper end of the scale of distinctness from a nasal stop. Boersma (2003) proposes an account of the nasal glottal stop that likewise distinguishes its articulation from its perception, though with a different implementation.
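To make the cut-point logic concrete, here is a minimal sketch (my own illustration, not part of Walker's formal apparatus; the constraint names follow (9), while the encoding in Python and the names NAS_HIERARCHY and predicted_system are hypothetical conveniences). It enumerates the systems predicted by ranking a harmony-driving constraint at each break in the hierarchy: classes whose *Nas constraint dominates the driver block harmony, and dominated classes are targets.

    # Markedness hierarchy from (9), highest-ranked first; each "*Nas-X"
    # constraint is represented by the segment class X it penalizes.
    NAS_HIERARCHY = ["ObstruentStop", "Fricative", "Liquid", "Glide", "Vowel"]

    def predicted_system(cut):
        """Rank the harmony driver below the first `cut` constraints of (9):
        dominating constraints yield blockers, dominated ones yield targets."""
        return NAS_HIERARCHY[:cut], NAS_HIERARCHY[cut:]

    # Only nested partitions are generable, matching the implicational scale
    # in (1): if a class undergoes harmony, so does every lower-ranked class.
    for cut in range(len(NAS_HIERARCHY) + 1):
        blockers, targets = predicted_system(cut)
        print(f"blockers: {blockers} | targets: {targets}")

For instance, cut = 1 approximates the Applecross Gaelic pattern, in which only obstruent stops block, while cut = 4 approximates the Sundanese pattern, in which only vowels (plus laryngeals, which are excluded from (9)) are targeted. A grammar that nasalizes liquids but blocks glides is not generable, matching the typological generalization.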


Other work that emphasizes the role of segmental distinctions includes Homer (1998), who employs contrast-sensitive constraints to obtain blocking in nasal harmony. Piggott (2003) proposes faithfulness constraints specific to categories of blocking segments (stop, fricative, liquid, glide) that simultaneously prohibit segment deletion and feature change (chapter 63: markedness and faithfulness constraints). He argues that this prevents the possibility that a less favored segment category could undergo nasal harmony to the exclusion of preferred targets, by replacement of the less favored category with a nasal stop. For example, replacement of a liquid with a nasal stop would bypass the violation of a feature co-occurrence constraint on nasalized liquids, in which circumstance a nasal harmony system could be expected to exist that allows liquids to undergo harmony, but not glides. A faithfulness-based approach to hierarchical nasal harmony effects is also proposed by Boersma (1998, 2003), using constraints that penalize adding nasalization in consonants. Constraints for consonants with greater oral constriction are generally ranked higher, as nasalization is posited to have a greater perceptual effect in these segments. Another perspective on the basis of an implicational scale of targets makes a connection with its similarity to the sonority hierarchy, mentioned earlier. Hume and Odden propose that the effects of both hierarchies reduce to an impedance hierarchy, where impedance is defined as “the resistance offered by a sound to the flow of air through the vocal tract above the glottis” (1996: 358), a concept reminiscent of Boersma’s reference to degree of oral constriction. Among supralaryngeal segments, obstruent stops have the greatest impedance, and vowels and glides the least. Segments with low impedance values are favored as syllable peaks, a characteristic traditionally diagnostic of high-sonority segments, and they show greater susceptibility to nasalization. Laryngeals have an impedance value of 0. This renders them highly susceptible to nasalization, but their inability to constitute a syllable peak follows from an assumption that a syllable peak has some impedance value, i.e. a non-zero value. This addresses the earlier-mentioned discrepancy concerning laryngeals in scales governing nasal harmony and what are traditionally sonority-based phenomena (i.e. syllabification), yet it posits a common underlying basis from which scalar effects across these phenomena are derived. This approach is extended by Clements and Osu (2003), who interpret resistance to nasalization in terms of a scale of obstruence, a near-synonym of impedance. Their study revolves around Ikwere, an Igboid language of Nigeria, in which nasal harmony transmits through vowels, approximants, and non-explosive stops, but is blocked by fricatives and obstruent stops. This leads them to add a category consisting of implosives and other non-explosive stops between liquids and fricatives on the nasalizability scale. Despite the differences in formal perspectives on patterns of nasal harmony with opaque segments, there is broad consensus that groups of targets vs. blockers essentially conform to the descriptive hierarchy in (1). In addition to new case studies, like that of Ikwere, future research bearing on these approaches may rest largely on the scope of coverage and emphasis, for example the treatment of sonority or contrast in the theory, which situate the account of nasal harmony in a wider context. 
Where explanatory overlap exists, general issues of theoretical implementation will also be relevant. For instance, future work on the division of labor between contrast, segmental markedness, and faithfulness in the theory could inform the types of constraints that are expected to be possible.


2 Nasal vowel–consonant harmony with transparent segments

Patterns of nasal vowel–consonant harmony with transparent segments are also attested. Whether the hierarchy in (1) is relevant for these patterns is a matter of debate, intersecting with fundamental questions about the kinds of representations that are involved and whether these systems are of the same basic “type” as ones with opaque segments. A well-known pattern of nasal vowel–consonant harmony with transparent segments is widely attested in the Tucanoan family. Typically, all voiced segments in a morpheme are either nasal or oral. Voiceless obstruents are consistently oral. They may occur in nasal morphemes, and do not prevent nasal harmony from operating among flanking voiced segments. Examples of nasal harmony in morphemes and words of Tucano, spoken in Colombia, are given in (10a), and oral items are provided in (10b) (West and Welch 1967; Noske 1995).9 Although not marked as nasal in the sources, I show laryngeals as nasalized in nasal morphemes (see discussion in §1). Noske notes that [h̃] occurs in nasal contexts in other Eastern Tucanoan languages, and she tentatively postulates that /h/ is likewise realized as nasal in Tucano.10

(10) a. Sgi ‘panpipe flutes’
        Sh]mã ‘child’
        0m0 ‘man’
        Sãth ‘devil’
        mã5ã ‘macaw’
        mã]ã ‘trail’
        nhth ‘charcoal’
        1i5kã ‘a drink made from bitter manioc’
        sÂ9ã ‘pineapple’
        mãsã ‘people’
        ]Â]kgã ‘nose’
        sjkjã ‘small of back’
     b. jai ‘jaguar’
        jese ‘pig’
        oho ‘banana’
        kahpea ‘eye’
        oso ‘bat’
        ake ‘monkey’
        patu ‘coca’
        paga ‘stomach’
        se7e ‘a skin disease’
        mbeʔ7o ‘later’
        ndie7i ‘eggs’
        etagq ‘the one who is arriving’

9. On the operation of nasal harmony from roots to certain suffixes in Tucano, see Trigo (1988) and Noske (1995).
10. An off-glide [h], realized predictably in word-final position, is not shown in these transcriptions.


In Tucano, a complementary distribution exists between nasal stops and voiced stops (realized as oral or prenasalized, depending on context), with the former occurring in nasal morphemes and the latter in oral morphemes. In nasal morphemes, Noske (1995) postulates that [+nasal] is a feature of the entire morpheme, and that it is floating, i.e. unassociated, in the underlying representation. She assumes that [+nasal] links to the first vowel in the word and spreads within the morpheme, as illustrated in (11).11

(11) se7a, with a floating [+nas] → [+nas] linked to the first vowel → [+nas] spread to the remaining segments of the morpheme
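A toy procedural rendering of this analysis may be helpful (it is mine, not Noske's formalization; the segment inventory, the schematic morpheme /wati/, and the use of upper case to stand for nasalization are all expository assumptions). A morpheme-level [+nasal] is applied to every voiced segment, leaving voiceless obstruents untouched:

    VOICELESS_OBSTRUENTS = set("ptks")

    def realize(morpheme, nasal=False):
        """Morpheme-level [+nasal], Tucano-style (toy version): a floating
        [+nasal] docks on the first vowel and spreads through the morpheme,
        nasalizing every voiced segment; voiceless obstruents are passed
        over. Upper case stands in for nasalization."""
        if not nasal:
            return morpheme
        return "".join(seg if seg in VOICELESS_OBSTRUENTS else seg.upper()
                       for seg in morpheme)

    print(realize("wati", nasal=True))   # 'WAtI': voiceless stop unaffected
    print(realize("wati", nasal=False))  # 'wati': oral morpheme unchanged

Disharmonic roots like those in (12) fall outside this sketch, since they require [±nasal] prespecification on individual segments.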

Tucano has a small number of disharmonic roots that contain both nasal and oral vowels, e.g. [kimpe] ‘left’, [semg] ‘paca’. Noske treats these with [±nasal] features that are specified for individual segments underlyingly. She argues that [−nasal] specifications are needed in addition to [+nasal], to prevent [+nasal] from spreading to all voiced segments. This is shown in (12). /B/ represents the phoneme that is variously realized as a nasal or voiced labial stop.12

(12) seBg → semg, with [−nas] and [+nas] prelinked to individual segments

In systems where voiceless obstruents do not impede harmony, descriptions largely converge on the realization of these consonants as plain voiceless obstruents in nasal harmony contexts. Consider the case of nasal vowel–consonant harmony in Guaraní, where voiceless consonants are reported to be transparent.13 An acoustic study of voiceless stops in nasal harmony contexts in unsuffixed Guaraní words found no evidence of nasal airflow energy during the stop closure, nor was the closure fully voiced (Walker 1999). On the ongoing debate about instrumental evidence for nasalized fricatives, see the aforementioned references on that topic. Examples of nasal harmony in Guaraní are given in (13). The data are from Gregores and Suárez (1967), Rivas (1975), Piggott and Humbert (1997), and Kaiser (2008). Nasal harmony that targets voiced segments and laryngeals is triggered by a stressed nasal vowel (a, b), and stressed syllables that contain an oral vowel block harmony (b). Harmony is robust in the regressive direction and is also triggered by a prenasalized stop (c).14 Progressive harmony might be more restricted.15

11. [ẽ] is an allophone of [e] in Tucano.
12. Noske assumes an intermediate stage in this derivation that is not relevant to the issues under focus.
13. However, the locative suffix shows an alternation between [-pe] and [-mg]. See Piggott and Humbert (1997) for discussion.
14. Kaiser (2008) finds that word-initial vowels often do not undergo nasal harmony in words over two syllables long. She speculates that a morpheme boundary might be blocking harmony in these forms.
15. Kaiser (2008) observes that the data available do not confirm whether progressive harmony can affect more than one syllable. See also Piggott and Humbert (1997) on observed asymmetries in progressive vs. regressive harmony in Guaraní. If it were determined that progressive harmony does not advance beyond one syllable, then the second two examples in (13b) would be most relevant for establishing blocking by a stressed syllable, under regressive harmony from the following nasal vowel or prenasalized stop.


There is a long history of discussion of the Guaraní pattern in the theoretical literature. For overviews, see Piggott and Humbert (1997), Beckman (1999), and Walker (2000a).

(13) a. /ndo-7oi-ndu’pã-i/ [ni7ihnj’pãh] ‘I don’t beat you’
        /7o-mbo-po’7ã/ [9imipi’9ã] ‘I embellished you’
     b. /idja‘kã7a’ku/ [hJã‘kã9ã’ku] ‘is hot-headed’
        /u’peiœa‘7i/ [u’peiœã‘9i]16 ‘then, because of that’
        [Jãsã‘i’ndQ] ‘moonlight’
     c. /7o-mbo-he’ndu/ [9imi5g’ndu] ‘I made you hear’
        /7o-mbo-Øwa’ta/ [9imboØwa’ta] ‘I made you walk’

Mọ̀bà Yoruba has a regressive nasal harmony in which both voiced and voiceless stops are transparent, as well as fricatives, as shown in (14). The harmony targets vowels, glides, and liquids (Ajíbóyè 2001; Piggott 2003; Archangeli and Pulleyblank 2007). However, as mentioned in §1, mid vowels block harmony. Mid vowels are always oral, with the exception of [6] when it is an allophone of /ã/. In addition, despite the occurrence of phonemic /ã/ in the language, /a/ is opaque to harmony: [ìsasâ] ‘kind of pot’, [agâtÖ] ‘sheep’.

(14) /uwÄ/ [jS|] ‘lie’
     /jÑ/ [JÑ] ‘go’
     /ùrá/ [âXá] ‘walk’
     /râ-orh/ [roXh] ‘to chew chewing stick’
     /lÄ/ [nÄ] ‘to spend’
     /idÄ/ [hdÄ] ‘magic’
     /udâ/ [jdâ] ‘lover of sweet things’
     /egigj/ [eghgj] ‘bone’
     /ùgbÇ/ [âgbÇ] ‘snail’
     /ì-sá/ [ásá] ‘worship’
     /ìtÖ/ [átÖ] ‘story’
     /ikh/ [hkh] ‘mucus’

Piggott suggests that voiced stops are underlyingly obstruents in Mọ̀bà Yoruba, unlike languages where voiced stops alternate with nasals in nasal harmony, for which he analyzes the voiced stops as sonorants (see also Botma 2009). The patterns of nasal harmony in Guaraní and Mọ̀bà Yoruba are also revealing with respect to possible domains of nasal vowel–consonant harmony. In Guaraní, harmony can be bounded at an edge by a stressed syllable, as seen in (13b). This has led some researchers to analyze some or all of Guaraní nasal harmony as operating within or via metrically defined constituents (e.g. Sportiche 1977; Halle and Vergnaud 1978; van der Hulst and Smith 1982; Flemming 1993; Piggott and Humbert 1997; see also chapter 40: the foot; chapter 41: the representation of word stress). Beckman (1999) offers a different perspective, in which the role of stressed syllables in Guaraní is attributed to the preservation of the underlying oral/nasal quality of segments in these positions. That approach also accounts for the limitation of phonemic nasality in vowels to stressed syllables. Another level of prosodic structure has been suggested to be relevant for harmony in Mọ̀bà Yoruba. In this language, nasal harmony can span a word boundary.

16. The vowel sequence [ei] in this form is tautosyllabic (Kaiser 2008).


Examples are given in (15). Ajíbóyè (2001) describes /lá/ as a particle and analyzes the domain of harmony as the prosodic word, a constituent that can contain more than a morphological word.

(15) /kí lá/ [kÇ ná] ‘what is it?’
     /ìsí lá/ [ásÇ ná] ‘who is s/he?’
     /kí à/ [kí à] ‘what is it?’
     /ìsí à/ [ìsí à] ‘who is s/he?’

Returning to issues surrounding transparent segments and targets, for the Tucano-type patterns in particular, debate has surrounded their analysis and the conception of where they fit in the typology of nasal harmony. One primary approach to these systems posits that they involve different segmental representations from systems like those described in §1, with opaque segments (Piggott 1992). Specifically, they differ in the dependency of the feature [±nasal] in the feature geometry, and in the node that spreads in nasal harmony. In systems with transparent voiceless obstruents, [±nasal] is dependent on a spontaneous voicing node (SV), which is present in sonorant segments. Harmony results from the spreading of [+nasal] among adjacent SV nodes, as depicted in (16). Voiceless stops are transparent to harmony because they lack an SV node. Voiced stops are treated as sonorants in these systems (see discussion in §1).17 Piggott suggests that sonorancy is the source of prenasalization of these consonants in certain oral contexts. The realization is attributed to an articulatory configuration needed to produce spontaneous voicing. Prenasalization in this circumstance thus does not involve a specification for [+nasal] but rather is an epiphenomenon of the sonorant stops’ phonetic implementation.

(16)  S      f      t      h
      Root   Root   Root   Root
      SV     SV            SV
             [+nas]

(the single [+nas] is linked to each SV node; the voiceless stop has no SV node)

In nasal harmony with opaque segments, [±nasal] is dependent on a soft palate (SP) node. An SP node is underlyingly specified in some consonants. Nasal harmony ensues from spreading of the SP node to segments that lack it, as shown in (17) for a Sundanese form. Under this approach, differences in the set of opaque segments arise from differences in the segments that are underlyingly specified for an SP node (governed by Piggott’s Contrastive Nasality Principle).

(17)  I      f      j      a      k
      Root   Root   Root   Root   Root
      SP            SP            SP
      [+nas]        [−nas]        [−nas]

(the [+nas] SP node of the initial nasal spreads to the following vowel; the SP-specified [−nas] segments are opaque)

17. For other assumptions about the representation of sonorant stops in the context of nasal harmony systems, see Botma (2004, 2009) and Botma and Smith (2007).
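The contrast between the two geometries can be emulated with a pair of toy spreading procedures (a sketch under my own simplifying assumptions; the inventory, the function names spread_sv and spread_sp, and the upper-case notation for nasalization are all hypothetical). SV-style spreading walks along the tier of SV-bearing segments, so SV-less voiceless obstruents come out transparent; SP-style spreading proceeds segment by segment and halts at the first segment carrying its own SP specification.

    SV_BEARING = set("aeiouwjlmn")  # toy class of sonorants (carriers of an SV node)

    def spread_sv(word, trigger):
        """[+nasal] spreads across adjacent SV nodes: every SV-bearing segment
        from the trigger onward is nasalized; segments lacking an SV node
        (voiceless obstruents) are simply absent from the SV tier."""
        sv_tier = [i for i, seg in enumerate(word) if seg in SV_BEARING]
        reached = set(sv_tier[sv_tier.index(trigger):])
        return "".join(s.upper() if i in reached else s for i, s in enumerate(word))

    def spread_sp(word, trigger, sp_specified):
        """The SP node spreads rightward to segments lacking their own SP node
        and stops at the first segment that is underlyingly SP-specified."""
        reached = {trigger}
        for i in range(trigger + 1, len(word)):
            if word[i] in sp_specified:
                break
            reached.add(i)
        return "".join(s.upper() if i in reached else s for i, s in enumerate(word))

    print(spread_sv("wati", 0))                      # 'WAtI': t transparent, cf. (16)
    print(spread_sp("najak", 0, sp_specified="jk"))  # 'NAjak': glide blocks, cf. (17)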


In other work, nasal vowel–consonant harmony systems with transparency vs. blocking have been divided along the lines of relations between syllables and segments (Piggott 1996, 2003; Piggott and van der Hulst 1997). In nasal harmony with transparent consonants, [nasal] is considered to be licensed as a property of the syllable, whereas in harmony with blocking segments, the host for [nasal] is the segment. In the case of syllable licensing, [nasal] is associated with the syllable head – the nucleus (see chapter 33: syllable-internal structure) – and becomes associated with all other sonorant segments in the syllable (Piggott 2003; see Botma 2004, 2009 for a related claim). Locality is respected in harmony with transparent consonants, because no syllable heads are skipped in the propagation of nasal harmony. Noske (1995) also assumes a licensing relation between the syllable and [±nasal] for the Tucano pattern, but with some different specifics in her assumptions. In another approach that posits a basic difference between nasal vowel–consonant systems, those with opaque segments are caused by articulatory spreading, whereas those with transparent consonants involve spreading that is perceptually based (Boersma 1998, 2003). Patterns with blocking are claimed to be driven by an articulatory constraint that penalizes shifts in the position of the velum. This constraint can favor persistence or early onset of a lowered velum in a word that contains a nasal segment. For patterns with transparent obstruents, such as the Tucano type of system, Boersma proposes that a perceptually based constraint drives harmony, causing nasalized segments that are interrupted by an oral segment to be perceived with a single value of [+nasal]. This perceptual representation is distinct from the articulation, in which the nasalized segments are interrupted by a velum-raising gesture. Boersma suggests that the reason all sonorants become nasalized in patterns with transparency is connected to the lexical-level specification of nasality in these languages (see (11)). Boersma reasons that if [±nasal] is a suprasegmental feature, it is less likely to be specified for individual segments. Segments are thus less likely to have a [−nasal] specification to which to be faithful, and segments that do not become nasalized will be the ones that are inherently problematic in combination with nasalization, i.e. fricatives and plosives. The notion that nasal vowel–consonant harmony patterns with transparency are tied to perception is also pursued in work by Sanders (2003). He proposes that nasal harmony in Tucano-type languages is driven by dispersion constraints on the perceptual distance of systemic contrasts. These constraints favor words that differ to the greatest extent possible in the perception of a nasal/oral contrast, while obeying higher-ranked constraints that prohibit nasalized voiceless obstruents, i.e. they favor the morphemes in which all segments besides voiceless obstruents are the same in nasality. In contrast to analyses where patterns with transparent segments are analyzed as involving representations or harmony imperatives that are different from those with opaque segments, another approach analyzes these systems as having a common source (Walker 2000a, 2003). This account emphasizes a complementarity in the patterns: there is no nasal vowel–consonant harmony in which all of the segments become nasalized, yet there are systems in which obstruents are transparent and the remaining segments are targets.
Obstruents form the focus of the complementarity. All segments except (some) obstruents have the potential to be targets in nasal vowel–consonant harmony and only obstruents are transparent. Walker proposes a treatment of the patterns that analyzes systems with transparent


obstruents as cases that correspond to the right endpoint of the hierarchy in (1), where nasalization transmits through all segment categories. Walker adduces typological evidence in support of conceptualizing transparent obstruents as on a par with targets. She observes that when obstruents are transparent, all other segment categories are targets, a generalization that would be expected if obstruents were targets in these systems, because they are lowest-ranked on the target scale. More generally, a survey of over 75 languages with nasal vowel–consonant harmony reveals that if a segment is “permeated” by nasal harmony, that is, if it is targeted or behaves as transparent, then all segments belonging to categories that are higher-ranked in the target hierarchy of (1) are also permeated. A pattern involving voiced stops is brought to bear on the claim that obstruent stops can be targets in nasal vowel–consonant harmony. The nasal harmony of Tuyuca, another Tucanoan language, has been characterized as showing a difference in blocking and transparency effects when it occurs across morphemes vs. within them. Like Tucano, in harmony within a morpheme, voiced stops alternate with nasals and voiceless obstruents are transparent to harmony. However, harmony from stem to suffix is blocked by fricatives and voiced and voiceless stops (Barnes 1996; Walker 2000a).18 Opaque voiced stops are realized as oral or nasal, depending on the nasality of the suffix to which they belong. (See Trigo 1988 and Walker 2000a on the separate phonological treatment of voiced/nasal velar stops.) Walker interprets the blocking of harmony by voiced stops across a morpheme boundary as evidence of their underlying obstruent status in Tuyuca (cf. Botma 2004); all suffixes that alternate in nasal harmony issuing from the stem therefore begin with a continuant sonorant or laryngeal.19 When voiced stops undergo harmony within a morpheme, they would then be an instance of voiced obstruent stops that are targets in nasal harmony. In this approach, feature spreading is analyzed as strictly local at the level of the segment (chapter 81: local assimilation). This implies that segments cannot be skipped in harmony; they must either participate in harmony or block it. As a consequence, it is assumed that a phonological representation is available in which a “transparent” obstruent is nasalized (see chapter 91: vowel harmony: opaque and transparent vowels). The model that Walker proposes is illustrated in (18), implemented using the concept of a sympathetic candidate (McCarthy 1999). A sympathetic candidate is a designated form to which the actual output is encouraged to be similar via the activity of a candidate-to-candidate correspondence relation between this form and the actual output. Arrows represent the existence of correspondence relations among representations, which mediate the enforcement of identity between related forms. Among the candidate outputs generated for an input with a [nasal] specification is one where the feature spreads to all segments in the morpheme, satisfying the constraint that drives harmony. This is the candidate that becomes designated as “sympathetic.” However, as [nasal] is not compatible with an obstruent stop, this candidate is not selected. Instead, a form is selected that is identical to the full harmony candidate except that the stop is

18. Tucano has also been analyzed in this way by Trigo (1988), but it has come to light that a suffix beginning with a labial voiced stop alternates in nasal harmony in this language (Piggott and van der Hulst 1997; Botma 2004), which indicates that voiced stops do not systematically block harmony from stem to suffix.
19. Whether laryngeals should be treated as sonorants is an open question.


oral. Because feature associations may not skip a segment, this candidate must have separate [nasal] specifications flanking the oral stop. The actual output is thus chosen not directly by the harmony-driving constraint but rather because of its similarity to a candidate that fares best with respect to that constraint.

(18) Input: /wfti/
     Sympathetic candidate with full harmony: [nas] (one span over all segments)
     Actual output with transparent [t]: [nas][nas] (two spans flanking the oral stop)
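The selection logic in (18) can be caricatured as follows (a drastic simplification of the sympathy mechanism under assumptions of my own: the toy GEN only varies nasality, input faithfulness and the full constraint set are ignored, and upper case stands in for nasalization; the names gen and select are hypothetical):

    from itertools import product

    OBSTRUENT_STOPS = set("ptk")

    def gen(word):
        """Toy GEN: every assignment of nasality to the word's segments."""
        for nasality in product([False, True], repeat=len(word)):
            yield tuple(zip(word, nasality))

    def select(word):
        # Sympathetic candidate: full harmony, the best satisfier of the
        # harmony-driving constraint (fatal, though, for the nasalized stop).
        flower = tuple((seg, True) for seg in word)
        # Actual output: among candidates with no nasalized obstruent stop,
        # the one most similar to the sympathetic candidate; this leaves two
        # [nasal] spans flanking the transparent stop.
        ok = [c for c in gen(word)
              if not any(nas and seg in OBSTRUENT_STOPS for seg, nas in c)]
        return max(ok, key=lambda c: sum(a == b for a, b in zip(c, flower)))

    winner = select("wati")  # cf. the input in (18)
    print("".join(seg.upper() if nas else seg for seg, nas in winner))  # 'WAtI'

Selection by segment-by-segment similarity here stands in for the candidate-to-candidate correspondence relation of the actual proposal, where identity is enforced by ranked constraints.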

This approach makes use of an abstract phonological representation in which obstruent stops are nasalized. Such representations do not occur within the actual output but they influence its selection. Another analysis that is similar in spirit posits the full harmony form as an intermediate level of representation that is generated as the product of a nasal spreading rule (Piggott 1988). A clean-up rule that prevents nasalized obstruents then causes the stop to be denasalized, to generate the derivational series: /Sati/ → Sã/h → [Sãth]. In summary, systems of nasal vowel–consonant harmony with transparent consonants have formed the center of discussion on several theoretical themes. One basic question is whether they should be considered the same “type” of nasal harmony as systems with opaque segments. Also at issue are the segmental representations involved, including the organization of [nasal] in the feature geometry, the level of structure at which [nasal] spreads, the locality of feature associations, and what kinds of abstract representations are involved. Questions about levels of representation have been touched upon, including whether there are distinct articulatory and perceptual representations and the possibility of intermediate or sympathetic forms. Finally, blocking and transparency effects in nasal vowel–consonant harmony have given rise to different perspectives on the harmony imperative and the sources that cause segments to block harmony or behave as transparent.

3 Nasal consonant harmony

In nasal consonant harmony systems, nasal harmony involves consonants only. Moreover, the participant consonants have been characterized as ones that are phonologically similar. These systems have been considered to differ from nasal vowel–consonant harmony in locality and the nature of participant segments, giving rise to proposals in which nasal consonant harmony involves a different harmony-driving imperative and/or different representations. A prototypical case of nasal consonant harmony is found in Kikongo, a Bantu language spoken in the Democratic Republic of the Congo (Bentley 1887; Dereau 1955; Ao 1991; Odden 1994; Piggott 1996; Rose and Walker 2004; see also chapter 77: long-distance assimilation of consonants). The nasal stop phonemes of


Kikongo are [m n]. Nasal consonant harmony causes voiced stops and /l/ to become nasal when following a prevocalic nasal stop at any distance in the stem. The stem constituent in Kikongo consists of the root and suffixes. Examples of alternations in the perfective active and applicative suffixes induced by nasal consonant harmony are shown in (19). The consonant in these suffixes is analyzed as /l/ underlyingly. In words where the conditions for nasal harmony are not met, /l/ is realized as [d] before [i]. Vowel quality alternations are due to vowel height harmony. Vowels and voiceless consonants are transparent to the nasal harmony; they remain oral when occurring between harmonizing consonants. The forms in (19) consist of stems, as indicated by the initial hyphen according to convention.

(19) a. Perfective active forms
        -suk-idi ‘wash’          -nik-ini ‘grind’
        -bud-idi ‘hit’           -sim-ini ‘prohibit’
        -bak-idi ‘catch’         -futumuk-ini ‘revive, rise’
        -sos-ele ‘search for’    -le(m-ene ‘shine’
     b. Applicative forms
        -sakid-il-a ‘congratulate for’    -nat-in-a ‘carry for’
        -to(t-il-a ‘harvest for’          -dumuk-is-in-a ‘cause to jump for’

In addition to inducing alternations in suffixes, nasal consonant harmony is considered to operate within roots, which do not show a voiced stop or [l] after a prevocalic nasal stop. Nasal stops that occur in an NC cluster do not trigger nasal consonant harmony (20a), nor do they prevent it from operating across them (20b). In addition, a voiced oral stop in an NC sequence does not undergo nasal harmony from a preceding prevocalic nasal.

(20) Perfective active
     a. -bantik-idi ‘begin’
        -kemb-ele ‘sweep’
        -biIg-idi ‘hunt’
        -tond-ele ‘love’
     b. -mant-ini ‘climb’
        -meIg-ini ‘hate’
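The descriptive generalization behind (19) and (20) can be stated as a small procedure (a sketch of the description only, not of any particular theoretical analysis; the ASCII inventory is reduced, the velar nasal is omitted, vowel height harmony is ignored, and the names has_prevocalic_nasal and perfective are my own):

    VOWELS = set("aeiou")
    NASALS = set("mn")

    def has_prevocalic_nasal(stem):
        """A nasal triggers harmony only when immediately prevocalic: the NC
        nasal of -bantik- does not count (20a), yet, since NC clusters also
        fail to block, the prevocalic /m/ of -mant- still triggers (20b)."""
        return any(c in NASALS and v in VOWELS for c, v in zip(stem, stem[1:]))

    def perfective(stem):
        """Perfective /-il-i/: suffixal /l/ surfaces as [n] under nasal
        consonant harmony, otherwise as [d] before [i]."""
        return f"-{stem}-i{'n' if has_prevocalic_nasal(stem) else 'd'}i"

    assert perfective("suk") == "-suk-idi"
    assert perfective("nik") == "-nik-ini"
    assert perfective("bantik") == "-bantik-idi"  # NC nasal does not trigger
    assert perfective("mant") == "-mant-ini"      # harmony across the NC cluster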

Setting aside NC clusters, the targets of nasal consonant harmony are frequently voiced stops and approximant consonants (/l/ is the only approximant consonant in Kikongo). In some cases, nasal harmony is restricted to consonants separated by no more than a vowel. The Bantu language Ndonga shows this pattern (Viljoen 1973; Rose and Walker 2004). In Ngbaka, a Niger-Congo language spoken in the Democratic Republic of the Congo, the lack of co-occurrence of certain nasals and prenasalized stops within a morpheme has been analyzed as the product of nasal consonant harmony (Hansson 2001; Rose and Walker 2004). Ngbaka contrasts nasal, prenasalized, voiced, and voiceless stops (Thomas 1963, 1970; Wescott 1965). Nasals may occur together with voiced and voiceless stops in a morpheme but not with a prenasalized stop that has the same place of articulation as the nasal (Mester 1988; Sagey 1990; van de Weijer 1994; chapter 29: secondary and double articulation).


Certain root co-occurrence restrictions (see chapter 86: morpheme structure constraints) on consonants in Ganda (Bantu), discussed by Katamba and Hyman (1991), have also been analyzed as the outcome of nasal consonant harmony (Hansson 2001; Rose and Walker 2004). The patterns in question involve nasals, voiced stops, and voiceless stops. Some of the voiced stops display approximant variants: [b/ß], [d/l], [Á/j]. Within a root, nasals do not usually occur with a voiced stop (or its approximant variant) that has the same place of articulation. This restriction is observed regardless of the order of the nasal and voiced oral consonant.20 In addition, the combination of a nasal and a voiceless stop with the same place of articulation is systematically absent if the nasal precedes the stop. In attested roots, identical nasals co-occur, as do oral voiced stops/approximants with the same place of articulation, as shown in (21a). Also attested are roots that combine a nasal and voiced consonant with different places of articulation (21b), and roots in which a voiceless stop precedes a nasal (21c).

(21) a. -mémèká ‘accuse, denounce’
        -nónà ‘fetch, go for’
        -bábùlá ‘smoke over fire to make supple’
        -gùgá ‘curry favor with’
     b. -bónèká ‘become visible’
        -màlà ‘finish’
     c. -táná ‘grow septic, fester’

In the harmony-based analysis of the Ganda pattern, nasal consonant harmony operates within a root among oral stops and nasals with the same place of articulation. For voiced stops the harmony is bidirectional, whereas for voiceless stops it is progressive only. Hansson’s (2001) treatment also takes into consideration restrictions on the co-occurrence of nasal stops and voiced prenasalized stops within a root in Ganda. With respect to non-contour stops, surveys of nasal consonant harmony in Hansson (2001) and Rose and Walker (2004) reveal the following implications: (i) patterns that target voiceless stops with the same place of articulation as the nasal trigger also target voiced stops with the same place of articulation, and (ii) patterns that target voiced stops with a different place of articulation from the nasal trigger also target voiced stops with the same place of articulation as the nasal. An interpretation that has been brought to these generalizations is that nasal consonant harmony favors targets that are similar to nasals (Walker 2000b; Hansson 2001; Rose and Walker 2004). These patterns are suggested to have a basis in speech planning (i.e. the organization and sequencing of abstract units) and its physical execution (i.e. the motor controls that carry out the “plan”). The similarity hypothesis finds support from speech error research. It is well established that the likelihood of a speech error between two phonemes increases with their phonological similarity. A series of speech error elicitation tasks conducted by Walker (2007) found that consonants that are more likely to interact in nasal consonant harmony are also more likely to participate in speech errors with speakers of English.
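For concreteness, the Ganda restrictions can be encoded as a root-admissibility check (a toy formulation of the descriptive generalization only, with a deliberately reduced inventory of labials and coronals; approximant variants, velars, and prenasalized stops are left out, and the name ganda_root_ok is hypothetical):

    PLACE = {"m": "labial", "b": "labial", "p": "labial",
             "n": "coronal", "d": "coronal", "t": "coronal"}
    NASALS = set("mn")
    VOICED_STOPS = set("bd")
    VOICELESS_STOPS = set("pt")

    def ganda_root_ok(consonants):
        """(i) A nasal and a homorganic voiced stop may not co-occur in either
        order; (ii) a nasal may not precede a homorganic voiceless stop."""
        for i, c1 in enumerate(consonants):
            for c2 in consonants[i + 1:]:
                if PLACE.get(c1) != PLACE.get(c2):
                    continue  # the restrictions hold only for homorganic pairs
                if ({c1, c2} & NASALS) and ({c1, c2} & VOICED_STOPS):
                    return False  # (i), order-insensitive
                if c1 in NASALS and c2 in VOICELESS_STOPS:
                    return False  # (ii), order-sensitive
        return True

    assert ganda_root_ok("mmk")     # -mémèká: identical nasals co-occur
    assert ganda_root_ok("ml")      # -màlà: different places of articulation
    assert ganda_root_ok("tn")      # -táná: voiceless stop BEFORE nasal is fine
    assert not ganda_root_ok("nd")  # nasal with homorganic voiced stop
    assert not ganda_root_ok("nt")  # nasal preceding homorganic voiceless stop

The order-insensitivity of (i) against the order-sensitivity of (ii) is exactly what the harmony-based analysis encodes as bidirectional harmony for voiced stops versus progressive-only harmony for voiceless stops.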

20 Ganda also shows a dispreference for particular pairs of voiced stops and nasals in a root when the voiced stop and nasal have a different place of articulation and the voiced stop follows the nasal (Katamba and Hyman 1991; Hansson 2001).


Prior to the extensive typological studies of consonant harmony by Hansson (2001) and Rose and Walker (2004), analyses of nasal consonant harmony were developed that involved nasal feature spreading at a node within the feature geometry (Ao 1991; Odden 1994; Hyman 1995) or at a suprasegmental level (Piggott 1996). The typological studies in question sharpened the characterization of the differences between nasal consonant harmony and nasal vowel–consonant harmony, leading the authors of those studies to analyze nasal consonant harmony as the product of a different harmony-driving mechanism and as involving different representations from those usually assumed for nasal vowel–consonant harmony.

Two of the chief differences between nasal consonant harmony and nasal vowel–consonant harmony involve locality and types of triggers and targets. Nasal consonant harmony targets segments that are phonologically similar to the nasal stop trigger, i.e. stops and approximant consonants. The harmonizing segments are usually non-adjacent, with at least a vowel intervening and sometimes longer transparent sequences. In contrast, in nasal vowel–consonant harmony, harmony affects a (near-)continuous sequence of segments, and vowels are never skipped. In the latter systems, favored targets follow the scale in (1), with vowels ranked at the top, a scaling suggested to have a basis in the segments' phonetic compatibility with nasalization or in maintaining distinct contrasts. Consonants that do not become nasalized in nasal vowel–consonant harmony most often block harmony, although in some systems, (some of) the obstruents behave as transparent.21

The role of phonological similarity and the capacity for action-at-a-distance are emphasized in the correspondence-driven approach to nasal consonant harmony (Walker 2000b; Hansson 2001; Rose and Walker 2004). In this account, the occurrence of high phonological similarity between consonants can spur a formal correspondence relation to be established between them. Corresponding segments are co-indexed with one another, as illustrated in (22). Nasal harmony is effected via the correspondence relation. Constraints for individual features, such as [nasal], are postulated that enforce identical specifications in corresponding segments, thus producing nasal consonant harmony, as in (22b). Because nasal assimilation is accomplished through the correspondence relation in this structure, the harmonizing segments are not required to share a single [nasal] specification, unlike the outcome of [nasal] spreading, which is usually assumed for nasal vowel–consonant harmony. A representation like that in (22b) is suggested to accommodate the potential for nasal consonant harmony to occur among non-adjacent segments.

(22)

a.  nᵢ a k - i lᵢ a
    |
  [nas]

b.  nᵢ a k - i nᵢ a
    |          |
  [nas]      [nas]
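To make the mechanics concrete, here is a minimal computational sketch of a correspondence-sensitive identity constraint of this kind. The segment inventory, the similarity classes, and all names are illustrative assumptions keyed to the schematic forms in (22), not an implementation from the literature cited.

```python
# Minimal sketch of correspondence-driven nasal consonant harmony
# (in the spirit of Walker 2000b; Hansson 2001; Rose and Walker 2004).
# The similarity classes and segment inventory are invented toy assumptions.

NASAL = {"n", "m"}
# Toy similarity class: voiced stops and approximants count as highly
# similar to nasals, so they enter into correspondence with them.
SIMILAR_TO_NASAL = NASAL | {"b", "d", "g", "l"}

def corresponding_pairs(segments):
    """Co-index all consonants similar enough to stand in correspondence."""
    cs = [(i, s) for i, s in enumerate(segments) if s in SIMILAR_TO_NASAL]
    return [(cs[i], cs[j]) for i in range(len(cs)) for j in range(i + 1, len(cs))]

def ident_cc_nasal_violations(segments):
    """Count corresponding pairs that disagree in [nasal]."""
    return sum(1 for (_, a), (_, b) in corresponding_pairs(segments)
               if (a in NASAL) != (b in NASAL))

# Schematic /nak-ila/: n and l correspond and disagree in [nasal] (22a);
# the harmonized candidate [nak-ina] satisfies the constraint (22b).
print(ident_cc_nasal_violations(list("nakila")))  # 1
print(ident_cc_nasal_violations(list("nakina")))  # 0
```

On these toy forms, the faithful mapping in (22a) incurs one violation of the identity constraint, while the harmonized output in (22b) incurs none, which is the effect the constraint ranking exploits.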

In the correspondence-based approach, patterns that show harmony only between consonants in adjacent syllables are analyzed using proximity-sensitive constraints governing corresponding segments. The neutrality of preconsonantal nasals in a Kikongo-type system has been attributed to their dissimilarity from the potential target oral consonants, either in terms of their role in syllable structure (the nasals in NC clusters are codas, whereas the oral consonants are onsets) or in terms of their release status (the nasals in NC clusters are unreleased, whereas the oral consonants are released). It is suggested that a voiced stop in an NC cluster does not undergo harmony because of avoidance of geminate nasals, which do not occur in Kikongo. In Ngbaka, prenasalized stops are considered to be singleton consonants – not NC clusters, as in Kikongo – so these issues do not arise. Whether the representation and patterning of nasal contours in Ganda fall in line with these treatments has yet to be closely considered. The correspondence approach to nasal consonant harmony has been applied to consonant harmony systems in general. The basis for that proposal is that other systems of consonant harmony also show effects of similarity and action-at-a-distance. See Hansson (2001), Rose and Walker (2004), and chapter 77: long-distance assimilation of consonants.

21 Some additional differences are that nasal consonant harmony never has opaque segments (Hansson 2001; Rose and Walker 2004), whereas many nasal vowel–consonant harmony systems do. Also, nasal consonant harmony does not appear to show sensitivity to metrical structure, such as stress and foot boundaries, and it does not extend across word boundaries (Hansson 2001), although these characteristics are attested in some patterns of nasal vowel–consonant harmony.

4 Directionality

This section turns to directionality in nasal harmony. Some systems can be considered bidirectional, with no apparent difference in the pattern of harmony in either direction. Nasal consonant harmony involving voiced stops in roots of Ganda is an example of this kind. Such patterns do not necessitate an overt statement of directionality. Also, root- or stem-controlled harmony where affixes in the domain of harmony occur only following or only preceding the root or stem can give the appearance of directional harmony but without requiring formal reference to a direction for harmony. However, other systems of nasal harmony show evidence of asymmetrical directionality, where harmony operates in only one direction or shows different patterns in its progressive vs. regressive operation.

A contrast in the direction of nasal vowel–consonant harmony with opaque segments is seen in the patterns of the Johore dialect of Malay, an Austronesian language of Malaysia, and Capanahua, a Panoan language spoken in Peru. Johore Malay shows a harmony from nasal stops that targets vowels, laryngeals and glides (Onn 1980). Liquids and obstruents block harmony. Examples in (23) show that the harmony is progressive only.

(23)

pənə̃ŋãh̃ãn 'central focus'
pəŋãw̃ãsan 'supervision'
pəmãndaŋãn 'scenery'
mãkan 'to eat'
baŋõn 'to rise'
mə̃nãw̃ãn 'to capture (active)'
mãj̃ãŋ 'stalk (palm)'
mə̃ratappi22 'to cause to cry'
mĩnõm 'to drink'
mãʔãp 'pardon'

22 Vowel nasalization in this word is assigned according to Onn's description and harmony rule. Because vowel nasalization is predictable in Johore Malay, Onn only marks it when demonstrating rule applications.


Like Johore Malay, Capanahua displays a nasal harmony that targets vocoids and laryngeals and is blocked by other segments (Loos 1969; Piggott 1992); however, the direction is regressive. The regressive direction cannot be predicted from the position of triggers in the syllable structure, as they occur in both syllable onsets and codas. Word-final nasals are enclosed in parentheses because they are deleted but still trigger harmony.23 (24)

h̃ãmawi 'step on it'
h̃ãmãʔĩna 'coming stepping'
kajatãnaiʔ24 'I went and jumped'
wɨrãnai 'I pushed it'
bĩmi 'fruit'
tʃipõŋki 'downriver'
kɨ̃ntʃap 'bowl'
bãw̃ĩ(n) 'catfish'
warã(n) 'squash'
põj̃ã(n) 'arm'
ʔĩnãmpã(n) 'I will learn'

23 Capanahua also manifests a bidirectional nasal harmony triggered by a nasal stop that is deleted preceding an oral continuant consonant. For discussion, see Loos (1969), Safir (1982), and Trigo (1988).
24 Vowel nasalization in this word and the next one is assigned according to Loos's description and rules.

An instance of directionality in nasal vowel–consonant harmony with transparent segments is found in Siriano, a Tucanoan language spoken in Colombia and Brazil (Bivin 1986). Suffixes in Siriano become nasalized following a nasal stem (excluding certain suffixes that are invariant in nasality). Examples of suffix alternations are given in (25). Underlying forms are as provided by Bivin. (25)

a. /wehe-gɨ/ [wehegɨ] 'when he is fishing' (to fish-3sg masc)
   /w̃ẽhẽ-gɨ/ [w̃ẽh̃ẽŋɨ̃] 'when he is killing' (to kill-3sg masc)
b. /igo-ɾe/ [ʔigoɾe] 'she (complement)'
   /ĩgɨ̃-ɾe/ [ʔĩŋɨ̃nẽ]25 'he (complement)'

25 Bivin (1986: 71) transcribes the suffix consonant here as [n], but in other transcriptions that he provides for flaps that have undergone nasal harmony in Siriano the consonant is a nasalized flap.

The data in (25) are compatible with nasal harmony where directionality is an epiphenomenon of root or stem control. However, a small group of suffixes harmonize with a following suffix rather than the root, as shown in (26). The suffixes that exhibit this behavior are /-ju/ (second hand information), /-deʔ/ (past nominalizer), /-bu/ (inceptive), and /-ku/ (probability).

(26)

a. /waʔ-ju-pɨ/ [waʔjupɨ] 'they say he left' (to go+evid:second hand+3sg masc)
   /waʔ-ju-ɾã/ [waʔj̃ũɾ̃ã] 'they say they left' (to go+evid:second hand+3pl animate)
   /w̃ẽhẽ-ju-pɨ/ [w̃ẽh̃ẽjupɨ] 'they say he killed' (to kill+evid:second hand+3sg masc)
b. /waʔ-bu-gɨ/ [waʔbugɨ] 'about to go (sg masc)' (to go+inceptive+sg masc)
   /waʔ-bu-ɾã/ [waʔmũɾ̃ã] 'about to go (pl)' (to go+inceptive+3pl animate)
   /w̃ẽhẽ-bu-gɨ/ [w̃ẽh̃ẽbugɨ] 'about to kill (sg masc)' (to kill+inceptive+sg masc)
c. /waʔ-deʔ-ɾo/ [waʔandeʔɾo] 'where he went' (to go+nominalizer+loc)
   /waʔ-deʔ-ɾã/ [waʔanẽʔɾ̃ã] 'the ones who went' (to go+nominalizer+3pl animate)
   /w̃ẽhẽ-deʔ-ɾo/ [w̃ẽh̃ẽndeɾo] 'where he killed' (to kill+nominalizer+loc)
d. /waʔ-ku-a/ [waʔakoa] 'it left' (to go+probability+inanimate)
   /waʔ-ku-bĩ/ [waʔakũmĩ] 'he left' (to go+probability+evid:3sg masc)
   /w̃ẽhẽ-ku-a/ [w̃ẽh̃ẽkua] 'it killed' (to kill+probability+inanimate)

Bivin notes that /-ju/ and /-ku/ are evidentials, which must appear with a person-number suffix. Although he does not have data to verify the facts for /-deʔ/ and /-bu/, he speculates that they too must be used with an additional suffix. We may wonder whether a stipulation for Siriano is needed that harmony with these suffixes is regressive or whether this directionality could be made to follow from morphological structure. Both possibilities have been considered. Bivin suggests that the suffixes in question form a separate lexical class. He treats regressive nasal harmony using a left spreading rule for [nasal] that applies to that lexical class. He also considers an approach that posits an internal word boundary at the left of these particular suffixes. This would block them from harmonizing with the root, as harmony occurs only within words. Bivin disprefers this account because he finds no evidence from other Tucanoan languages to support the presence of an internal word boundary, nor does he find evidence for an analogous occurrence of internal word boundaries elsewhere in Siriano. On the other hand, for Desano, a language closely related to Siriano, Kaye (1971) treats a similar directionality phenomenon as an epiphenomenon of morphological constituency. Like Siriano, Desano has a limited number of suffixes that derive their nasality from the following suffix rather than the preceding morpheme. Three of these suffixes are the same as those in Siriano (Bivin 1986). Kaye suggests that the suffixes targeted by regressive nasal harmony form a morphological constituent with the following suffix that is separate from the preceding stem.26 He pairs this assumption with a bidirectional nasal assimilation rule that applies cyclically to obtain differences in the direction of harmony. Siriano and Desano, then, are cases where directionality in nasal harmony can perhaps be reduced to the organization of morphological structure, but further study on this issue is needed.

26 Miller (1999) treats the regressive nasal harmony in Desano as lexicalized.

There is possible evidence of directionality in nasal consonant harmony. In the nasal consonant harmony of Kikongo, introduced in §3, harmony is progressive in the stem. The examples in (27) show apparent directionality; a voiced stop or consonantal approximant precedes a nasal in the stem but remains oral. (See also the discussion in chapter 77: long-distance assimilation of consonants.)

(27)

-dumukina 'jump for'
-bilumuka 'assemble in crowd'

Hansson (2001) speculates that progressive directionality in canonical Bantu nasal consonant harmony, like that of Kikongo, might be reducible to a system of harmony that is stem-controlled and that preserves the underlying oral/nasal quality of root-initial segments (e.g. using a faithfulness constraint specific to this position). However, this hypothesis has been questioned. Although the canonical Bantu root structure is CVC, Rose and Walker (2004) cite evidence that [bilum] is lexically stored as a whole. For the similar pattern of nasal consonant harmony in Yaka, another Bantu language, they point to evidence of stored forms with the sequences /CVlVN-/ and /CVbVN-/, which likewise do not show nasal harmony. Because the /l/ or /b/ is not root-initial in these cases, nor does it belong to a different cycle from the nasal, it would be expected to undergo nasal harmony that was not strictly progressive. Within a correspondence-based approach, Rose and Walker propose to analyze directional harmony in these patterns using a precedence-sensitive identity constraint for the feature [nasal] in corresponding segments.

In sum, there are systems of nasal vowel–consonant harmony and nasal consonant harmony that seem to display directionality effects that cannot be attributed to independent aspects of the system or structure. Differences in directionality in certain patterns of nasal vowel–consonant harmony with opaque segments present the strongest evidence for these effects. In some cases, certain researchers have suggested that morphological structure and/or prosodic position could obtain the effect of directional harmony, but there is not consensus on this explanation for the various patterns discussed above.

5 Conclusion

To conclude, at the heart of research on nasal harmony are patterns that fall into three descriptive categories: nasal vowel–consonant harmony with opaque segments, nasal vowel–consonant harmony with transparent segments, and nasal consonant harmony. Across languages, patterns belonging to the first category respect an implicational scale that governs favored targets. Whether nasal vowel–consonant harmony with transparent segments and systems with opaque segments share a common source remains in question. Studies bearing on this issue have generated diverse perspectives on the harmony imperatives, the levels of representation that are involved, and the nature of locality. Nasal consonant harmony presents differences from nasal vowel–consonant harmony in showing action-at-a-distance and in favoring harmony between segments that are phonologically similar. This has given rise to a correspondence-driven approach to nasal consonant harmony, situated in a general typology of consonant harmony. This approach is distinct from the treatment of nasal vowel–consonant harmony, which is most often assumed to involve spreading.

The study of nasal harmony can illuminate not only the nature of long-distance phonological assimilation but also themes in phonology that are more general in nature. Whereas in the last couple of decades the broad strokes of the typological characteristics of nasal harmony patterns have been reasonably well delineated, the details of many specific systems remain unknown. Future research could be fruitfully applied to developing more case studies. The resulting findings will doubtless in turn shed new light on the theoretical debates and the cross-linguistic characterization of nasal harmony, both alone and in the larger picture.

ACKNOWLEDGMENTS For comments on this chapter, I am grateful to Beth Hume, Marc van Oostendorp, and two anonymous reviewers.

REFERENCES

Ajíbóyè, Ọládiípọ̀. 2001. Nasalization in Mọ̀bà. University of British Columbia Working Papers in Linguistics 8. 1–18.
Ao, Benjamin. 1991. Kikongo nasal harmony and context-sensitive underspecification. Linguistic Inquiry 22. 193–196.
Archangeli, Diana & Douglas Pulleyblank. 2007. Harmony. In de Lacy (2007), 353–378.
Barnes, Janet. 1996. Autosegments with three-way contrasts in Tuyuca. International Journal of American Linguistics 62. 31–58.
Beckman, Jill N. 1999. Positional faithfulness: An optimality theoretic treatment of phonological asymmetries. New York: Garland.
Bendor-Samuel, David (ed.) 1971. Tupi studies I. Norman, OK: Summer Institute of Linguistics.
Bentley, W. Holman. 1887. Dictionary and grammar of the Kongo language. London: Baptist Missionary Society and Trübner.
Bivin, William E. 1986. The nasal harmonies of twelve South American languages. M.A. thesis, University of Texas at Arlington.
Boersma, Paul. 1998. Functional phonology: Formalizing the interactions between articulatory and perceptual drives. The Hague: Holland Academic Graphics.
Boersma, Paul. 2003. Nasal harmony in functional phonology. In van de Weijer et al. (2003), 3–35.
Botma, Bert. 2004. Phonological aspects of nasality: An element-based dependency approach. Ph.D. dissertation, University of Amsterdam.
Botma, Bert. 2009. Transparency in nasal harmony and the limits of reductionism. In Kuniya Nasukawa & Philip Backley (eds.) Strength relations in phonology, 79–111. Berlin & New York: Mouton de Gruyter.
Botma, Bert & Norval Smith. 2007. A dependency-based typology of nasalization and voicing phenomena. In Bettelou Los & Marjo van Koppen (eds.) Linguistics in the Netherlands 2007, 36–48. Amsterdam & Philadelphia: John Benjamins.
Clements, G. N. & Sylvester Osu. 2003. Ikwere nasal harmony in a typological perspective. In Patrick Sauzet & Anne Zribi-Hertz (eds.) Typologie des langues d'Afrique et universaux de la grammaire, vol. 2, 69–95. Paris: L'Harmattan.
Cohn, Abigail C. 1990. Phonetic and phonological rules of nasalization. Ph.D. dissertation, University of California, Los Angeles.
Cohn, Abigail C. 1993a. The status of nasalized continuants. In Huffman & Krakow (1993), 329–367.
Cohn, Abigail C. 1993b. A survey of the phonology of the feature [±nasal]. Working Papers of the Cornell Phonetics Laboratory 8. 141–203.


de Lacy, Paul (ed.) 2007. The Cambridge handbook of phonology. Cambridge: Cambridge University Press.
Dereau, Léon. 1955. Cours de kikongo. Namur: A. Wesmael-Charlier.
Flemming, Edward. 1993. The role of metrical structure in segmental rules. M.A. thesis, University of California, Los Angeles.
Flemming, Edward. 2004. Contrast and perceptual distinctiveness. In Bruce Hayes, Robert Kirchner & Donca Steriade (eds.) Phonetically based phonology, 232–276. Cambridge: Cambridge University Press.
Gerfen, Chip. 1999. Phonology and phonetics in Coatzospan Mixtec. Dordrecht: Kluwer.
Gerfen, Chip. 2001. Nasalized fricatives in Coatzospan Mixtec. International Journal of American Linguistics 67. 449–466.
Gnanadesikan, Amalia E. 1995. Markedness and faithfulness constraints in child phonology. Unpublished ms., University of Massachusetts, Amherst (ROA-67).
Gregores, Emma & Jorge A. Suárez. 1967. A description of colloquial Guaraní. The Hague: Mouton.
Halle, Morris & Jean-Roger Vergnaud. 1978. Metrical structures in phonology. Unpublished ms., MIT.
Hansson, Gunnar Ólafur. 2001. Theoretical and typological issues in consonant harmony. Ph.D. dissertation, University of California, Berkeley.
Harms, Phillip L. 1985. Epena Pedee (Saija): Nasalization. In Ruth M. Brend (ed.) From phonology to discourse: Studies in six Colombian languages, 13–18. Dallas: Summer Institute of Linguistics.
Harms, Phillip L. 1994. Epena Pedee syntax. Arlington: Summer Institute of Linguistics and University of Texas at Arlington.
Harrison, Carl H. & John M. Taylor. 1971. Nasalization in Kaiwá. In Bendor-Samuel (1971), 15–20.
Homer, Molly. 1998. The role of contrast in nasal harmony. Ph.D. dissertation, University of Illinois, Urbana-Champaign.
Huffman, Marie K. & Rena A. Krakow (eds.) 1993. Nasals, nasalization, and the velum. Orlando: Academic Press.
Hulst, Harry van der & Norval Smith. 1982. Prosodic domains and opaque segments in autosegmental theory. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 311–336. Dordrecht: Foris.
Hume, Elizabeth & David Odden. 1996. Reconsidering [consonantal]. Phonology 13. 345–376.
Hyman, Larry M. 1995. Nasal consonant harmony at a distance: The case of Yaka. Studies in African Linguistics 24. 5–30.
Kaiser, Eden. 2008. Nasal spreading in Paraguayan Guaraní: Introducing long-distance continuous spreading. Amerindia: Revue d'ethnolinguistique amérindienne 32. 283–300.
Katamba, Francis & Larry M. Hyman. 1991. Nasality and morpheme structure constraints in Luganda. In Francis Katamba (ed.) Lacustrine Bantu phonology, 175–211. Cologne: Institut für Afrikanistik, University of Cologne.
Kaye, Jonathan. 1971. Nasal harmony in Desano. Linguistic Inquiry 2. 37–56.
Kingston, John. 2007. The phonetics–phonology interface. In de Lacy (2007), 401–434.
Levi, Susannah V. 2005. Reconsidering the variable status of glottals in nasal harmony. Papers from the Annual Regional Meeting, Chicago Linguistic Society 41. 299–312.
Loos, Eugene Emil. 1969. The phonology of Capanahua and its grammatical basis. Norman, OK: Summer Institute of Linguistics.
McCarthy, John J. 1999. Sympathy and phonological opacity. Phonology 16. 331–399.
McGinn, Richard. 1979. Outline of Rejang syntax. Ph.D. dissertation, University of Hawaii.
Mester, Armin. 1988. Studies in tier structure. New York: Garland.
Miller, Marion. 1999. Desano grammar. Arlington: Summer Institute of Linguistics & University of Texas at Arlington.


Reduction

Natasha Warner

Figure 79.1 Waveform and spectrogram of a conversational speech token of but I was like, demonstrating numerous deletions, mergers, and changes to segments, as well as insertion of unexpected /r/-coloring, perhaps as a reflection of the flap of but. All spectrograms show a 0–5000 Hz range

That /bʌt aɪ wʌz laɪk/ can be produced as [+PZÚlH;] is surprising, but perhaps even more surprising is that when a native listener hears the utterance, it sounds like a normal pronunciation of the phrase. While one might not expect this realization, when one examines spontaneous conversation, such surprising reductions are rather common. The fact that a surface realization can depart so drastically and in variable ways from the underlying representation poses problems for phonological theory. The fact that listeners find it unproblematic brings up further questions regarding what a phonological underlying representation is, and how speakers and listeners use it.

There are three directions from which one might approach the topic of reduction: what sounds the speaker produces (acoustics of reduction), how the speaker produces them (reductions and changes in articulations), and how the speech is perceived.

Phoneticians and phonologists have long known that speakers sometimes produce reduced speech, at least in casual speech situations. [dʒiʔjeʔ] for did you eat yet? is often used as a plausible example. However, such examples are sometimes presented in phonetics and phonology classes as something that occurs when one is at home talking with one's spouse, perhaps when one is not fully awake, not as a normal phenomenon. Such speech has often been removed from the realm of phonology, and left for phonetic implementation. Yet phoneticians have often paid hardly more attention to reduced speech, considering it too uncontrolled to give good results. Ladefoged (2003) suggests avoiding connected speech such as storytelling when documenting a language, and advises sticking to word lists in controlled frame sentences.

Many of the chapters in the Companion summarize a debate about how phonological theories should handle a particular phenomenon. This one, instead, summarizes a debate about whether a phenomenon is even relevant. The default assumption may be that reduction is not linguistically relevant. Speech style (e.g. casual, formal) and speech rate seem like non-linguistic factors, outside the boundaries of the abstract system that makes up the grammar. Or perhaps speech reduction really does occur primarily when conversing with one's spouse, at home, when tired – perhaps it is the exception. However, there is also reason to think that reduced speech is anything but peripheral to the linguistic system. It may be in fact the normal, typical way for humans to communicate information. Furthermore, neither phonetic nor phonological theories were built to handle reduced speech, so reduced speech raises large theoretical questions.

First, I will discuss terminology. There are many overlapping terms falling along more than one dimension, such as reduced, conversational, connected, spontaneous, fast, casual, and natural speech. One could separate a speech rate dimension (fast–slow) out from the dimension of formality (casual–formal) (see also chapter 92: variability). Neither of these is exactly a dimension of "reduction": some speakers talk very quickly in both formal and informal settings, yet seem to maintain almost all of their consonantal articulations. However, reduction is probably more common in casual conversation and fast speech. Another dimension might be spontaneity. In my own understanding of the term, "spontaneous speech" refers to any speech in which the words are not chosen ahead of time. This excludes read speech, speech in a talk one has given repeatedly (e.g. politicians' campaign speeches), and speech that is explicitly prompted (e.g. shadowing tasks).
When the Oregon Graduate Institute (OGI)
collected its Japanese corpus (Muthusamy et al. 1992), for example, speakers were instructed to talk about themselves (e.g. hobbies, work) for a minute, which leads to spontaneous monologue speech. I take "connected" speech to be broader than "spontaneous speech," including anything where words form part of a longer utterance, perhaps even target words in a frame sentence. "Conversational" speech sets a stricter requirement, meaning speech that occurs while two or more speakers are conversing, whether by telephone or in person, and whether they know each other or not. Speech can be spontaneous and connected but not conversational, as in the OGI monologues. "Conversational" speech excludes any scripted speech, even reading a scripted conversation. "Casual" speech goes one step further, requiring that the speakers be comfortable in the setting, comfortable with the topics and comfortable with their interlocutors. A job interview or a discussion of governmental policy is usually conversational and spontaneous, but not casual. However, monologues about hobbies or friends can be casual without being conversational if the speakers are comfortable with the recording setting.

Three terms differ from the rest: fast, natural, and reduced speech. Speech rate, measurable in intended syllables per second, for example, does seem to be a separate but usually correlated dimension. For example, some parts of spontaneous conversational speech are very fast, while others may be very slow (perhaps if the speaker is tired or uninterested in the topic). However, one may expect that casual conversation would usually have a faster rate in intended syllables per second than most careful read speech. "Naturalness" defies a simple definition. One can refer to natural speech as the opposite of synthesized speech, so that even nonsense CV syllables can be natural if produced by a human. Alternatively, linguistic anthropologists may put very strict requirements on the setting of "natural speech," such as the speakers not being seated in a sound booth, or not having head-mounted microphones on, as these things might make the speakers self-conscious, and affect their speech. Thus I will not attempt to fit "natural speech" into the other dimensions.

I take "reduced speech" to refer to changes in the segments or suprasegmentals relative to what would be expected in a careful pronunciation of the same word or phrase. "Reduction" thus includes changes to sounds (chapter 66: lenition; e.g. [Ñ] or [Ø] for [g] in gonna; Figure 79.2), deletions of expected segments (chapter 68: deletion; e.g. [wjɚ] for we were; Figure 79.2), and shrinkage of contrast space (e.g. a smaller overall vowel space, or a specific centering like [w\] for when; chapter 2: contrast). Types will be discussed below. Clearly, however, not all alterations of segments are reduction: speech errors, dialectal differences, historical sound change (chapter 93: sound change), and obligatory (morpho-)phonological changes (e.g. /k/ becoming /s/ in electric/electricity; Chomsky and Halle 1968) are not reduced speech. "Reduction" refers to changes relative to a clearly pronounced surface form, not relative to the underlying form. Thus, spontaneous or conversational speech can be defined based on the setting in which it is produced, but I define reduced speech based on its acoustic or articulatory form. Notably, reduction does occur in relatively careful speech, although not as often as in spontaneous casual speech, so setting or style alone does not define it.
I do not define reduction based on it being easier to articulate, or requiring less movement of the articulators, because it is difficult to define objectively for each situation what “easier to articulate” would mean.


Figure 79.2 Waveform and spectrogram of a conversational token of We were gonna go out, demonstrating approximant realizations of the /g/ phonemes (marked with arrows)

2 Historical perspective

Scholars have long noticed that expected speech sounds are not always produced. Richter (1930) gave numerous examples of reduction in French, although from relatively careful speech. She notes severe reductions like the deletion of the entire syllable -ble in impossible, as well as single-segment deletions, assimilations, etc. Early scholars like Richter had to develop creative methods to detect reductions, since in context reductions sound notoriously unremarkable. Richter recorded speech on phonograph records, then played the records backwards while transcribing the sounds she heard. While this may have biased her results in some ways, it was a creative way to determine what kinds of reductions were present, without the benefit of spectrograms.

For several subsequent decades in phonetics and phonology, many scholars acknowledged that speech reduction exists and gave a few examples of reductions that might occur in conversation or in "sloppy" speech. However, they usually considered reduction phenomena as outside the area of interest, stating that one should analyze the full form of a word rather than the "elliptic," "slovenly," or "slurred" forms one finds in "rapid speech" or "familiar talk" (all terms from quotations in Johnson 2004). Johnson (2004) cites Hockett (1955) as giving examples [dɪdʒə] for did you? and [wʊdʒə] for would you?. Johnson also discusses how Chomsky and Halle's focus on competence rather than performance removed speech reduction from the field of data to be analyzed. Johnson (2004) provides an excellent historical summary.

Johnson does point out that European phonology (he emphasizes Stampe's Natural Phonology) gave more attention to reduction. For example, Dressler (1975) discusses implications of reduction for analysis of historical sound change, and advocates Natural Phonology as the best formal
phonological approach to reduction. Shockey (2003) presents numerous transcriptions of reductions, and discusses how various phonological theories, including Natural Phonology, would handle reduction. Zwicky (1972) and Bolozky (1977) also address how to generate variable reduced surface forms in phonology. However, the publications that do address reduction represent a minuscule proportion of the work in the field, and papers not explicitly on reduction generally only mention careful speech forms. The training which new members of the field receive can also give an impression of the field's stance on reduction: one graduate introductory phonology course in an American linguistics department in the 1990s included a few examples of minor reductions such as [mɪʃju] miss you and a statement that what we should be studying is the clear speech form of words, not reductions. Overall, the field of phonology is primarily characterized by work that stops at the careful speech surface level. Many of the phonological works that do address reduction, furthermore, discuss reductions of a single segment, rather than the more drastic deletions of multiple syllables found in conversational corpora.

Turning to phonetics, Johnson (2004) notes that phoneticians have discussed reductions. The earliest example he cites is Dalby (1986), who examined how often schwas (chapter 26: schwa) were deleted in English television talk-show speech and in a fast speech task in the lab. Richter (1930), mentioned above, gives an early example. Ladefoged et al. (1976) recorded speech styles ranging from conversational interviews to isolated word-list reading, and failed to find reduction of the vowel space. Koopmans-van Beinum (1980), however, found exactly that effect (in Dutch), in a wide variety of speech styles. A search of the Journal of the Acoustical Society of America's database, going back to 1929, does show early studies of conversational speech, although often for engineering purposes, e.g. earplug effectiveness (Kryter 1946). Still, the vast majority of phonetic research for many decades has been on carefully read speech. To take just one example, Löfqvist and Gracco's (1999) work using read sentences such as Say "ipa" again is more representative of the field than Dalby's (1986) work using television talk-show recordings.

This is not a fault of the field. In part, technological limitations hindered analysis of large amounts of spontaneous conversational speech. Furthermore, there are many interesting questions that are more appropriate to answer with carefully controlled speech than with open, uncontrolled, variable conversation.

To confirm objectively that research using spontaneous or non-careful speech is still the exception in phonetics, I examined every article in a recent volume of the Journal of Phonetics (vol. 36, 2008). This volume contains 36 research articles, including a special issue on phonetics of North American indigenous languages (six articles). Of the 36, a total of four articles use speech material occurring in a relatively natural setting. Notably, two of those are on infants' speech productions during play (isolated words and babbling). The other two are from the special issue on indigenous languages and use field recordings (interviews, monologues, and elicitation) as corpora. One additional indigenous languages article uses a sentence translation task.
Thus, in this volume of the journal, relatively natural speech settings seem to be used when it would be difficult or impossible to obtain data otherwise: with infants and with highly endangered or nonwritten languages. (One additional article on an African language uses a sentence repetition task rather than reading.) The remaining 32 articles use careful speech. The majority use target words or non-words in repetitive frame sentences of
the Now I say X again type. Thus, although the sources discussed below demonstrate that there is now a substantial body of research on spontaneous speech and reduction, such research represents a small percentage of work in the field.

Until recently, speech perception and spoken word recognition research has also largely avoided speech that might be reduced. Speech perception stimuli tend to be even more careful and less natural than the speech for acoustic phonetics: a common type of stimulus might be synthesized /ba da ga/ nonsense syllables, or perhaps isolated words with one segment gated out. One would be unlikely to use stimuli collected from natural conversation unless perception of reduction were the topic. Turning to psycholinguistics, much of Cutler and colleagues' work is about how listeners segment words out of connected speech. Still, Cutler points out (1998; see also Mehta and Cutler 1988) that one usually uses carefully pronounced stimuli with little context (e.g. apple embedded in vufapple for a word-spotting task) to study the segmentation of connected speech, reducing experimental variability. Thus, even work that sets out to examine how one deals with connected speech uses rather careful speech as the testing ground.

Recently, phonetic investigation of reduced speech has been expanding rapidly. Three studies quantify how much reduction takes place overall by comparing pronunciations in corpora of spontaneous speech to expected careful pronunciations. Shattuck-Hufnagel and Veilleux (2007) surprisingly emphasize how few of the expected acoustic landmarks are deleted in spontaneous speech (14 percent in their study), while Greenberg (1999) emphasizes how many expected segments are deleted (12.5 percent in his study). Johnson (2004) finds 20 percent of words having at least one segment deleted and 5–6 percent of words having at least a syllable deleted. Although one paper concludes that deletions are rather rare and the other two conclude that they are common, the difference in deletion rate is not large, and the methods differ (deletion of Stevens-style landmarks vs. deletion of segments, in a map task – where speakers viewing a map give directions to listeners – as opposed to conversation). Thus, whether deletion is frequent may be a question of whether the glass is half full or half empty: is 12.5–14 percent deletion a lot? Is it enough to make listening challenging? Work such as that by Raymond et al. (2006) examines deletion of particular segments, also showing substantial rates of deletion, even for word-medial consonants. Reduction is not solely about deletion: Greenberg finds 117 pronunciations for the word that, for example, an astounding demonstration of the variability that listeners encounter in normal conversation. Strik et al. (2010) show a similar result for Dutch. Shockey (2003) contributes transcriptions of reductions from many dialects of English. Ernestus and colleagues have conducted extensive research on reduction in Dutch (beginning with Ernestus 2000). Some of their studies locate all tokens of particular high-frequency words or suffixes in a corpus, such as eigenlijk 'actually' or its suffix -lijk, and analyze what affects their duration (Pluymaekers et al. 2005a, 2005b). Research on reduction has favored English and Dutch thus far, but does exist for other languages. Some examples are Tseng (2005) and Cheng and Xu (2008) on Mandarin, Furui and colleagues (e.g. Nakamura et al.
2007) and Maekawa and Kikuchi (2005) on Japanese, Kohler (2001) and colleagues on German, Engstrand and Krull (2001) for Swedish, Lennes et al. (2001) on Finnish, and Nicolaidis (2001) on Greek. The Nijmegen Speech Reduction Workshop (June 2008) included several talks on French, and Barry and Andreeva (2001) compare reduction phenomena in six languages. Furthermore, Keune et al. (2005)
find that degree of reduction can differ even for dialects, with more reduction in Belgian Dutch than in Netherlands Dutch. It is promising that there are data in an array of languages, although Europe still dominates. Another consideration about the past literature is that much of the spontaneous speech has been from the Map Task (Bard et al. 2001; Shattuck-Hufnagel and Veilleux 2007), monologues (Arai 1999), or other relatively careful speech, which may lead us to underestimate reduction. There has also been phonetic research on speech style in the opposite direction from reduced speech, specifically on the "clear" speech that speakers use to address hard-of-hearing or second-language listeners (Bradlow and Bent 2002; Smiljanić and Bradlow 2005, 2008).

Most phonetic research on reduction is on what is produced (articulatory or acoustic phonetics), but perception research is now increasing rapidly. Mehta and Cutler (1988) test perception of spontaneous vs. read speech, using a phoneme monitoring task, and find that the coherent intonation of planned speech facilitates processing, relative to the pauses and self-corrections of spontaneous speech. Koopmans-van Beinum (1980) finds that identification accuracy for Dutch vowels presented in isolation, cut out of the speech stream, is extremely low for casual speech. Arai (1999), on Japanese, and Ernestus et al. (2002), on Dutch, both investigate perception of severe reductions (e.g. syllable deletions, such as [eik] for Dutch [eixələk] 'actually'). Both Arai and Ernestus et al. find that listeners cannot retrieve the intended words well without context, but can with context. Shockey (2003), using a single conversational utterance, indicates that listeners usually misperceive reduced speech unless extensive conversational context is present.

Turning to perception of specific types of reduction, Mitterer and Ernestus (2006) find that listeners take patterns of where reduction is produced into account when recognizing reduced final /t/. Ranbom and Connine (2007) investigate recognition of English words with /nt/ reduced to nasal flap, as in gentle rhyming with kennel. They find that words which listeners hear more often with nasal flap are not as hard to recognize with the nasal flap as other words. However, the reduced form is still more difficult overall. Tucker (2007) finds that having heard fast, reduced speech in the preceding frame sentence can, to some extent, mitigate the reduced-speech difficulty. This suggests that listeners use information about reduction in the context to adjust their acoustic criteria for recognition of upcoming sounds. Isabelle Racine presented on recognition of French words with schwa deletion at the Nijmegen Speech Reduction Workshop. Oliver Niebuhr, also at that workshop, discussed listeners' use of lengthening in neighboring segments to recognize otherwise deleted segments. Warner et al. (2009b) test the contribution of various acoustic aspects of flap reduction to whether listeners hear a medial consonant in pairs such as needle–kneel. Past research has also examined processing of flapped vs. extremely careful /t d/ pronunciations, e.g. recognition of pretty with flap vs. [t] in American English (McLennan et al. 2003, 2005; Connine 2004; chapter 113: flapping in american english). However, since flap is the normal careful pronunciation, that work is more about the processing of phonological alternations than about reduction.

Speech reduction borders on other fields besides linguistics and psychology.
It is particularly relevant for engineering, for purposes of automatic speech recognition (ASR), because speakers may expect ASR systems to respond correctly to their requests even if they use reduced speech. Some of the works cited above relate particularly to ASR (Greenberg 1999; Strik et al. 2010; Nakamura et al. 2007).


Within the field of linguistics itself, but beyond phonetics and phonology, speech reduction is of particular interest for sociolinguistics.

To sum up the historical perspective on reduction, formal phonology has long excluded reduction from its domain. Phonetics has long included the topic, but as a very small proportion of all phonetic research. Psycholinguistics has addressed questions of connected speech, although usually by means of careful speech stimuli. The last few years, however, have shown an explosion of research on reduction, speech style, and large connected speech corpora.

Perhaps it is not surprising that conversational speech and reduced speech make up a small proportion of research, even in phonetics. The field uses an astounding range of speech carefulness, from nonsense sequences such as /ipa, ipu/, possibly with articulograph pellets glued to the speaker's tongue, to spontaneous conversation between close friends, or even natural interactions outside the lab (e.g. Hicks Kennard's 2006 work on male and female marine corps drill instructors' phonetics). The former will yield extremely stable data with little random variability, with obvious statistical advantages (increased power) when the question is not about more natural speech. However, once there is a background literature on a given topic, we may be able to move to more spontaneous speech. We are then likely to encounter more reductions, even if reduction is not the topic of the work.

3 Reduction phenomena

3.1 Duration

Shorter duration of segments is perhaps an obvious meaning of reduction (duration is literally reduced), but if the segments are acoustically otherwise unaltered, overall shorter durations might stem from fast speech rather than reduction. However, shorter duration is usually correlated with reduction in manner of articulation, so duration may provide a convenient way to measure reduction. For example, we find a good correlation between intervocalic stop duration and how approximant-like the “stop” is (Warner and Tucker 2007). Ernestus and colleagues have made extensive use of duration as a simple, one-dimensional indicator for reduction (e.g. Pluymaekers et al. 2010). Figure 79.3 shows a phrase taken from casual conversation, with reduction of several syllables to very short durations, and the same phrase in careful speech. Ernestus and colleagues often use a partially automated system (automatic speech recognition, given phonetic transcription as input) for duration measurement (Pluymaekers et al. 2010).
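To make this measurement idea concrete, here is a minimal sketch of a duration-based reduction index, assuming segment labels and times are already available (e.g. from forced alignment). The reference durations, the example token, and all names are invented for illustration and are not taken from the studies cited above.

```python
# Minimal sketch: segment duration as a one-dimensional reduction index.
# Assumes segment labels with start/end times (e.g. from forced alignment).
# The careful-speech reference durations below are invented for illustration.

CAREFUL_MEANS = {"g": 0.065, "ə": 0.055, "n": 0.060}  # seconds, hypothetical

def reduction_index(segments):
    """Mean ratio of observed duration to the careful-speech reference.

    Values well below 1.0 suggest temporal reduction. Speech rate and
    reduction are correlated but not identical, so this is only a rough
    indicator, as discussed in the text.
    """
    ratios = [(end - start) / CAREFUL_MEANS[label]
              for label, start, end in segments if label in CAREFUL_MEANS]
    return sum(ratios) / len(ratios)

# A hypothetical conversational token of gonna with invented alignment times.
gonna = [("g", 0.00, 0.03), ("ə", 0.03, 0.05), ("n", 0.05, 0.08), ("ə", 0.08, 0.11)]
print(round(reduction_index(gonna), 2))  # 0.47: strongly shortened
```

A fuller implementation would normalize for speech rate and segment identity, but even this crude ratio illustrates why duration is attractive as a convenient, one-dimensional correlate of reduction.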

3.2 Alterations to segments

Changes in the manner of articulation or voicing of a segment are extremely common in reduced speech. Warner and Tucker (2007), studying realizations of expected intervocalic stops and flaps, find that a great many tokens in all speech styles are actually produced as approximants (Figure 79.4a, even in read speech). Figure 79.4b shows an expected flap that is barely visible, and a second visible but extremely short flap. Figure 79.5 shows a Japanese example in which reduction causes devoicing rather than voicing (in an environment where vowel devoicing should be phonologically impossible). The entire final syllable of the utterance, /ru/, is effectively devoiced, although the /u/ could be argued to have 3–4 extremely low-amplitude pulses.

Figure 79.3 Waveforms and spectrograms of two tokens of chillin' in the spa. The highlighted portion corresponds to -in' in the. In all figures, phonetic symbols in parentheses indicate sounds for which there is some evidence, but either they have extremely low amplitude or their presence is in doubt. (a) Utterance taken from conversation (highlighted portion: 173 msecs). (b) Carefully produced utterance of the same words (highlighted portion: 435 msecs)

Figure 79.4 Waveforms and spectrograms showing reduced stops and flaps. (a) . . . artistic about (read speech from a list of phrases), with /k/ marked by an arrow. (b) . . . get out, er . . . (conversation), with arrows at expected flaps

Figure 79.5 Waveform and spectrogram of a Japanese partial utterance . . . to iu no de aru 'it's that it's the case that' (from a cassette tape accompanying a third-year language textbook, read speech by a professional speaker). The arrow marks a consonant and vowel that are devoiced where not phonologically expected

If manner of articulation (chapter 13: the stricture features) and voicing (chapter 69: final devoicing and final laryngeal neutralization) can change as part of reduction, one wonders whether place of articulation (chapter 22: consonantal place of articulation) can as well. One might hesitate to consider most assimilations (e.g. English /ɪn-/ in impossible; chapter 81: local assimilation) to be reductions. However, variable phonetic assimilations may be part of reduction: seve[m] plus for seven plus would happen less often in careful speech, even though the tongue tip still makes an unperceived alveolar gesture (Browman and Goldstein 1989). This type of example has influenced the field through Articulatory Phonology (chapter 5: the atoms of phonological representations), although it is not generally presented as being "reduced" speech.

3.3 Deletion and syllable count

In reduced speech, one almost always finds deletions of segments (Figures 79.1–79.3), as quantified by Greenberg (1999), Johnson (2004), Raymond et al. (2006) and others. Some scholars might propose that deletion is not a type of reduction, because reducing something does not mean removing it. However, it is impossible to separate deletions from other reduced speech phenomena. Even for a given segment, it can be difficult to say whether there is some trace of the segment present. Figure 79.6 shows several examples ranging from reduced to probably deleted (even in Figure 79.6c, one perceives some trace of a consonant for the expected flap, although it is not visible in the spectrogram). Furthermore, Browman and Goldstein (1989) review findings that, even when a segment appears to be fully deleted acoustically, there may still be a reduced-size articulatory gesture for it, as in the apparent deletion of the final /t/ in perfect memory. This is very similar to the seve[m] boys type of assimilation example above, and in Articulatory Phonology, the difference between deletion (perfect memory) and this assimilation is solely a matter of whether the following labial gesture overlaps a voiceless stop, rendering it inaudible, or a nasal, making it audible with a different place. Thus there are several reasons to consider apparent segmental deletions to be quite literally a type of reduction: reduction vs. deletion is a continuum (Figure 79.6), acoustic deletions are known to often have small residual gestures (Browman and Goldstein 1989), and gesturally there is no difference between some deletions and some assimilations.

Furthermore, deletion and reduced segments go together: in conversational or spontaneous speech, if we are finding many changes of manner or voicing in a recording, we will also be failing to find some expected segments at all. Often, reduced speech overlaps gestures and perceptual cues so much that one cannot say which segments have been deleted and which are present. One hears some trace of expected segments, but on close examination of the spectrogram, and despite close listening, one cannot be sure what sounds are present at all (Figure 79.7). In weekend were you in Figure 79.7, there is a voiceless palatal fricative for the /k/, and later a high F2 for the /j/ of you. Between those, there is a vocalic stretch, then a low F2 that suggests a /w/, and one hears substantial nasalization and some r-quality. This unclear stretch must be a reduction of /endwɚ/, and one can identify features such as nasalization, but it is very difficult to identify segments or transcribe it in order to determine what has been deleted. Any transcription of this stretch forces unjustified and artificial clarity onto the speech. Still, this utterance sounds completely normal and intelligible. Such examples make the issue of whether deletion and assimilation are part of reduction moot. Gestures and perceptual cues are highly overlapped, yet something remains of many of them. Both assimilation and deletion must be happening, but the speech is simply not clear enough to point to specific assimilations or deletions.

Figure 79.6 Waveforms and spectrograms of a variety of reductions of flap. Arrows mark the location, visible or expected, of the flap. All are from read speech ((a) from the reading of a story; (b) and (c) from word-list reading). (a) matters, with the flap slightly nasalized. (b) status, with the flap approximated. (c) capitalist, with the flap so reduced that no trace is visible, although there is some suggestion of a consonant perceptually

Figure 79.7 Waveform and spectrogram of . . . weekend were you, from conversational speech

As a result of deletions, syllable count often drops relative to careful speech. As discussed above, Johnson (2004) finds that 5–6 percent of words in a spontaneous English corpus have at least one syllable deleted, as determined from phonetic transcriptions. Arai (1999) studies perception of spontaneous Japanese speech, using the fact that the Japanese mora writing system allows one to evaluate perceived mora (or syllable) count. Arai (1999) presented a stretch of conversational speech, with varying amounts of context. Listeners were simply asked to write down what they had heard. Out of context, a stimulus which would have five moras in careful speech was perceived as containing an average of slightly more than two, but with even a little context, the same stretch was reported as constituting an average of nearly five moras. This study concurs with others that listeners cannot recognize reduced words or sounds well out of context, yet do so quite successfully in context. It also provides direct evidence that listeners perceive fewer moras or syllables for the same reduced acoustic material out of context.

3.4 Shrinkage of acoustic spaces

Literature on reduced speech often focuses on changes from one segment to another or deletions (e.g. Greenberg 1999; Johnson 2004; Shattuck-Hufnagel and Veilleux 2007). This is partly because of the method of transcribing a corpus, then comparing the transcription to dictionary listings for the words (Warner, forthcoming). However, some types of reduction do not result in a different transcription, as when the overall vowel space shrinks (Koopmans-van Beinum 1980). Many of the vowel tokens in Koopmans-van Beinum's work would probably be perceived as the full vowel phonemes (e.g. /i u a/, etc.) and not as /ə/, yet the vowel space shrinks with each step away from careful speech. Ladefoged et al. (1976) are unable to find this effect in Southern California English, but theirs is a relatively small study.

Other acoustic spaces may also compress, without causing changes in the transcribed segments. Berry (2009) shows reduction of the tonal space in spontaneous Mandarin speech (also demonstrating how reduction applies to suprasegmentals). Furui and colleagues (e.g. Nakamura et al. 2007) quantify reduction in spontaneous Japanese through a measure of spectral difference among phonemes, showing lesser spectral difference in reduced speech. This method is particularly useful for its ability to quantify reduction across all segment types, regardless of either underlying or surface manner of articulation.
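As an illustration of how such compression can be quantified, the following sketch computes the average pairwise distance between vowel category means in F1/F2 space, loosely in the spirit of such spectral-difference measures. All measurements and names are invented, and published studies use richer spectral representations than two formants.

```python
# Minimal sketch of quantifying acoustic-space shrinkage: average pairwise
# distance between category means, here over (F1, F2) in Hz. Loosely in the
# spirit of spectral-difference measures; all values below are invented.
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)

def category_means(tokens):
    """Mean (F1, F2) per vowel category from (vowel, F1, F2) tokens."""
    sums = {}
    for v, f1, f2 in tokens:
        s = sums.setdefault(v, [0.0, 0.0, 0])
        s[0] += f1; s[1] += f2; s[2] += 1
    return {v: (s[0] / s[2], s[1] / s[2]) for v, s in sums.items()}

def mean_pairwise_distance(tokens):
    """Smaller values indicate a more compressed (more reduced) space."""
    means = category_means(tokens)
    pairs = list(combinations(means.values(), 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# Invented point-vowel measurements for careful vs. casual speech.
careful = [("i", 300, 2300), ("a", 750, 1300), ("u", 320, 800)]
casual  = [("i", 380, 2000), ("a", 620, 1350), ("u", 400, 1100)]
print(mean_pairwise_distance(careful) > mean_pairwise_distance(casual))  # True
```

Because the measure operates on category means rather than on transcriptions, it registers exactly the kind of shrinkage that a transcription-based comparison would miss.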

4 The driving force behind reduction

What motivates reduction? Several motivations have appeared in the past literature, although some have primarily been proposed as explanations of other phenomena, and simply extend easily to reduction. The most obvious potential explanation may be "ease of articulation." In the 1990s, a trend developed that proposed speakers' desire for ease of articulation as an opposing principle to their wish for listeners to understand them, with these two wishes sometimes described as ranked constraints in Optimality Theory (reviewed and criticized by Hale and Reiss (2000); see also chapter 63: markedness and faithfulness constraints). While these constraints were not developed to describe spontaneous speech, their applicability is obvious. This would, however, be tantamount to saying that reduced speech is sloppy speech. Furthermore, what constitutes "ease" in articulation is neither clearly defined nor readily measurable.

Target undershoot is perhaps a more measurable version. Reduction in the size of gestures provides a convenient way to describe many reductions, such as expected stops being realized as approximants. Combining target undershoot and gestural overlap (discussed above), one could describe lowered syllable counts, as in beret realized as bray (Browman and Goldstein 1989). One would have to use gestural overlap and target undershoot considerably more heavily to model extreme reductions as in Figures 79.6c and 79.7, but this would be readily possible. However, the idea of reduced size and greater overlap of gestures does not provide a motivation for reduction, just a description. Task dynamic modeling, with its interaction among stiffness, targets and time, might do so, but gestures may be a better description of reduced speech than they are an explanation of what causes it. As far as the author is aware, the exact relationship of speech rate, stiffness, speech reduction, and speaker's choice of speech style remains for future research.

"Ease of articulation" and gestures focus on the speaker. One can alternatively focus on the listener. The OT account mentioned above suggests that listeners want clearer speech, and the perceptual literature does confirm that listeners find clearer speech easier to process (Ranbom and Connine 2007; Tucker 2007). However, listeners may not uniformly "want" unreduced speech. Speakers might accommodate the listener's specific informational needs in choosing when to reduce. This could be viewed as an overall motivation: overlap the segments (or gestures, or perceptual cues) during low-information portions of the signal, then use clearer speech for high-information portions, maximizing the speed and efficiency with which information is conveyed. Some results, such as Ernestus and colleagues' work on high-frequency words and suffixes (e.g. Dutch eigenlijk 'actually', mentioned above), support this view. In some items, they find greater reduction for higher-frequency words (chapter 90: frequency effects), for words that have already occurred in the conversation, and for words that are more predictable from surrounding words (Pluymaekers et al. 2005a, 2005b). All of these support the idea that speakers reduce where the information will not be too important, or difficult to retrieve, for the listener. Kuperman et al. (2007) find an opposite effect of predictability for a specific set of data, though. Greenberg (1999) finds that low-frequency words tend to be pronounced canonically, regardless of speech rate, but that high-frequency words show more reduction with faster speech rate. However, Bard et al. (2001) find quite surprising results by cleverly manipulating what the listener actually knows and what the speaker knows, suggesting that speakers decide how much to reduce based more on what the speaker him/herself knows than on what the listener knows. These results together suggest that information structure certainly has an effect, but not a straightforward one, on speech reduction. It seems safe to conclude that reduction is only partially motivated by how information can be conveyed efficiently.

Furthermore, reduction itself probably conveys information. Bradlow (2002) argues that CV co-articulation conveys information to the listener and is part of the speaker's intentional strategy, rather than being simply a necessary consequence of inability to move articulators instantaneously from one place to another. Ogasawara (2007) and Tucker (2007) both find that listeners make use of reduction or speech rate in the context to help them decide on the acoustic criteria for later sounds in the speech stream. Putting these separate ideas together with the commonsense notion that speakers are more likely to reduce when talking with a close friend than with a prospective employer, for example, it seems likely that part of the motivation for reduction is to convey something about the speech style to the listener, rather than to eliminate unnecessary gestures for the speaker.

Overall, there is no definitive explanation for what drives reduction. It seems very likely that articulatory factors (e.g. task dynamic stiffness, articulator movement rate), information structure (greater reduction where information is less important), and intentional use of reduction as a feature that conveys information in itself all contribute to how much reduction a given utterance contains. The possible interactions of these factors are too complex to be disentangled in a single experiment.

5

Representational consequences of reduction

5.1

Formal phonology

As discussed in the section on historical perspectives above, formal phonology (whether OT or rule-based) has largely ignored reduction. Assuming that language is divided into competence vs. performance, all reduction may be a performance phenomenon, outside the grammar. However, one must know how to reduce in order to be a native speaker of a language, particularly if anything about reduction is language-specific. Barry and Andreeva (2001) show cross-linguistic similarities in reduction types, but there has been little quantitative comparison of reduction across languages, and stress vs. syllable vs. mora rhythm at least are likely to cause language-specific differences in reduction. Keune et al. (2005) find more reduction in Belgian Dutch (Flemish) than Netherlands Dutch. To the extent that how to reduce is language-specific, it may have to be part of the grammar, although at the phonetic level (Pierrehumbert 1994). However, formal phonology rarely reaches a detailed enough surface level to reflect reductions, nor does it attempt to generate as many variants for each word as reduction produces. Formal phonology usually takes written broad transcriptions of single-word careful pronunciations as the data to be accounted for. If a particular token of we were as in Figure 79.2 sounds like were out of context but like we were in context, a formal phonological analysis is likely to lose that information before the analysis ever starts. Formal phonology examines an abstract version of how a word might be pronounced (carefully), not how it was pronounced on a particular occasion.

However, the 1990s saw a huge increase in the number of formal phonological models that integrate gradient, quantitative phenomena. Pierrehumbert (1994) lays out the reasons for addressing quantitative variability within the phonological competence. Flemming’s (1995) work includes constraints that specify by how many Hertz two vowels’ formants must differ. Warner (2002) addresses (but argues against) the possibility of modeling low-level phonetically variable acoustic events such as epenthetic stops (e.g. [k] in young[k]ster) within Optimality Theory. Nagy and Reynolds (1997) suggest ranking constraints variably to obtain multiple possible outcomes (see also chapter 92: variability). Boersma’s version of OT (Boersma and Hayes 2001) adds noise to constraint rankings, so that how an underlying form is pronounced on a particular occasion can vary in a specific distribution. None of these works focuses on reduced speech, but the overall development of including gradience provides a mechanism for modeling the variability of reduction in formal phonology.

Using Boersma and Hayes’s (2001) approach, one could easily model deletions (e.g. [wÌ] for we were; Warner et al. 2009a) by ranking the deletion-preventing constraint Max only slightly higher than the markedness constraints that work against realization of various segments or sequences (e.g. *Lab, *Cor, *NY, NoCoda, etc.; Kager 1999). Markedness constraints are unviolated if the relevant segments are deleted, and Max is unviolated if the underlying segments are maintained. Thus, if Max were ranked just slightly above a collection of many markedness constraints, the random noise which Boersma and Hayes’s system adds to rankings would sometimes place Max lower than some of the markedness constraints, making a form with deletions optimal. Since random noise is added to the constraint rankings each time a speaker produces a form, Max would be demoted beneath a different collection of markedness constraints on different productions, modeling the variability of which deletions occur in a given token. Furthermore, by varying how much random noise is added to constraint rankings, one could model varying amounts of deletion. Perhaps spontaneous, casual speech involves adding more noise to constraint rankings at evaluation time than careful speech does, supplying a direct way to model a speaker’s choice of speech styles and degree of reduction.
By ranking the Ident constraints that prevent changes to manner of articulation and voicing appropriately relative to the markedness constraints, one could perhaps model reduction of stops to approximants and other such changes.
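
This noisy-ranking mechanism is easy to make concrete. The following is a minimal sketch, not taken from any of the works cited: the constraint names, ranking values, violation counts, and noise settings are all invented for illustration, and only the general procedure (stored ranking values plus evaluation-time Gaussian noise, followed by strict domination) follows Boersma and Hayes (2001).

import random

# Each constraint has a ranking value; Max sits only slightly above the
# markedness constraints, as in the scenario sketched in the text.
GRAMMAR = {"Max": 100.0, "*Lab": 98.0, "*Cor": 98.0, "NoCoda": 98.0}

# Hypothetical candidates for a reduced function word, with violation counts.
CANDIDATES = {
    "full form":    {"Max": 0, "*Lab": 1, "*Cor": 2, "NoCoda": 1},
    "reduced form": {"Max": 3, "*Lab": 0, "*Cor": 1, "NoCoda": 0},
}

def evaluate(noise_sd=2.0):
    """Pick the optimal candidate after perturbing each ranking value."""
    # Evaluation-time noise; a larger noise_sd models a more casual style.
    noisy = {c: v + random.gauss(0, noise_sd) for c, v in GRAMMAR.items()}
    order = sorted(noisy, key=noisy.get, reverse=True)  # highest-ranked first
    # Strict domination: compare violation vectors in ranked order.
    return min(CANDIDATES, key=lambda cand: [CANDIDATES[cand][c] for c in order])

for sd in (0.5, 2.0, 6.0):
    outcomes = [evaluate(sd) for _ in range(10000)]
    print(f"noise sd {sd}: reduced on {outcomes.count('reduced form') / 100:.1f}% of productions")

Running the sketch shows the intended pattern: with little noise, Max almost always dominates and the full form wins; as the noise grows, Max is increasingly often demoted below some markedness constraint, and the reduced form surfaces on a growing share of productions.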


If reduced speech becomes part of “the business” of phonology, it could be modeled in OT in the way tentatively suggested here, or in some other way, or it could be modeled in a rule-based formalism. It would require far more detailed surface forms than are typically encoded, variable rules or variably ranked constraints (Boersma and Hayes 2001), and perhaps acoustically detailed rules or constraints (e.g. “overlap the gestures by at least x msecs or percent”). It is an open question in the field of psycholinguistics and spoken word recognition whether words have a single underlying representation in the lexicon or a wide variety of lexical listings (discussed below). Greenberg (1999) finds that the word that occurs with 117 distinct pronunciations in his spontaneous speech corpus, of which the most common is [Ïæ], accounting for just 11 percent of all tokens. Is there just one abstract underlying representation stored for that, from which 117 different forms are derived? (See chapter 1: underlying representations for more discussion.) How to write phonological rules or constraints that can produce the wide array of surface forms from a single underlying representation would be a mind-boggling problem. How can one change /Ïæt/ into any of 117 forms, without allowing all words to become simply [H]? One might need rules or constraints that effectively map any vowel onto any central vowel quality, any consonant onto an approximant, and any segment onto null. While it is not true that anything is possible in reduced speech, so many different things are possible that any formal phonological system might severely overgenerate, if set up to generate attested reduced forms.
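
The combinatorial side of this overgeneration worry can be made concrete with a toy calculation. In the sketch below, every segment of a three-segment form may surface faithfully, reduce to a weaker variant, or delete; the segment labels and variant names are invented for illustration, not a transcription of any English word.

from itertools import product

# Toy illustration of overgeneration: if every segment may surface
# faithfully, reduce, or delete, the number of predicted surface forms
# grows as 3^n for n segments.
LENITED = {"D": "D-approx", "ae": "mid-central-V", "t": "flap"}

def predicted_forms(segments):
    options = [(seg, LENITED[seg], None) for seg in segments]  # faithful / reduced / deleted
    forms = set()
    for choice in product(*options):
        forms.add(" ".join(s for s in choice if s is not None) or "(null)")
    return forms

forms = predicted_forms(["D", "ae", "t"])
print(len(forms))  # 27 logically possible outputs from just three segments

Even this crude three-way choice yields 27 outputs for a three-segment word; with longer words, more variant types, and gradient degrees of reduction, an unconstrained system quickly predicts far more forms than the attested (already large) set.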

5.2

Articulatory Phonology

Articulatory Phonology (Browman and Goldstein 1989 and many other publications) is better equipped to handle reduction and variability than many theories, as discussed above. Articulatory Phonology allows for reduction in the size or temporal span of gestures, and allows overlap, readily accommodating most or perhaps all reductions. It does require that all gestures present in the underlying representation (which consists of a gestural score), and only those gestures, be present in the surface form. This means that it is not literally possible for a gesture to be deleted. However, the theory does not prevent reducing a gesture until it is not measurably different from deletion. There is ample evidence that speakers often make articulatory gestures even if there is no audible acoustic consequence (e.g. the seve[m] plus and perfe[k] memory examples discussed above, with some tongue tip gesture maintained). This supports the idea that speakers would reduce the size of gestures in reduced speech whether listeners hear them or not. Heavily overlapping gestures might remove most sudden acoustic changes in the word, leaving one long vocalic stretch with minor acoustic variation throughout and no clear segments, as in portions of Figures 79.3a, 79.4b, and 79.7. The fact that Articulatory Phonology incorporates time as a continuous scale (gesture duration, rather than simply linear order of segments or features) is an important factor in its success with speech variability. Articulatory Phonology may seem a perfect theory for describing reduced speech. However, this may be because it is actually more adept at describing phonetic implementations than at describing most abstract phonological alternations. With its prohibition on adding or removing gestures from the underlying representation, it is not meant to account for abstract morphophonemic alternations. (As one example, Navajo has certain suffix combinations in which all segmental material of one suffix deletes, but a high tone is added (Young and Morgan 1987). This is clearly not a matter of altering underlying gestures.) Instead, the theory shows its strength in areas that traditional formal phonology might have relegated to phonetic implementation, such as casual or fast speech reduction. It is clear, though, that inclusion of speech reduction would have no representational consequences for Articulatory Phonology: the use of gestural scores as representations works extremely well for reduced speech. Articulatory Phonology is simply based in the gradience and variability of real speech, which includes reduced speech, whereas other phonological theories are not.
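
To make the representational point concrete, a gestural score can be sketched as a small data structure: gestures with articulators, constriction targets, and timing, where “reduction” rescales targets and timing rather than deleting symbols. The field names and numbers below are purely illustrative and are not Browman and Goldstein’s actual task-dynamic parameters.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Gesture:
    articulator: str      # e.g. "tongue tip", "lips"
    constriction: float   # target constriction degree (1.0 = full closure)
    start_ms: float
    duration_ms: float

def casual_rendering(score, magnitude=0.6, compression=0.7):
    """Same gestures as the underlying score, but smaller and more
    overlapped: no gesture is ever added or removed, only rescaled."""
    return [replace(g,
                    constriction=g.constriction * magnitude,  # target undershoot
                    start_ms=g.start_ms * compression)        # earlier starts -> overlap
            for g in score]

# A schematic score for a CVC syllable
careful = [
    Gesture("tongue tip", 1.0, 0.0, 80.0),     # initial stop closure
    Gesture("tongue body", 0.3, 60.0, 120.0),  # vowel
    Gesture("lips", 1.0, 160.0, 80.0),         # final stop closure
]
for g in casual_rendering(careful):
    print(g)

The point of the sketch is that the casual form and the careful form contain exactly the same gestures; only continuous parameters differ, which is why gestural scores accommodate reduction without any change to the representation itself.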

5.3

Abstractionist theories of spoken word recognition

Moving beyond models of phonology, we can ask what representational consequences reduced speech has for spoken word recognition models. Greenberg (1999) finds that in data from the Switchboard corpus, the word and has 87 distinct pronunciations, including [æn, en, Hn, ænt, an, Hm, q]. He also finds that the most common pronunciation of them is [Hm]. How does a listener get from the possible surface forms back to the lexical entry and without creating massive confusion with ant, on, them, hand, a, I’m, etc.? The problem is more extreme for high-frequency function words, but is not limited to them. Models of spoken word recognition such as TRACE, SHORTLIST, etc. traditionally assume that each word has a single underlying form (e.g. Norris et al. 2000), although listing multiple forms is sometimes adopted (Spinelli et al. 2003). Listeners would recognize [q] as [ænd], despite their dissimilarity, through a combination of finding the closest segmental match in the lexicon and weighting high-frequency words. For example, listeners might recognize [>s] as this despite the poor segmental match, because this has higher frequency than hiss. However, many high-frequency words reduce to similar forms: this, just, and is can probably all be realized as something like [qs], and realizations of and overlapping with ant, on, them, and a above also demonstrate this. Thus best match plus frequency may not solve the problem, and neither would multiple lexical listings. For somewhat less ambiguous forms such as [wiçk] weekend (Figure 79.7), listeners could try to apply a rule-like conversion to get from the surface string back to the single lexical representation. However, the problem would be just as for formal phonology in reverse: rules would have to allow for insertion and alteration of almost any segments. Since reduction creates such varied forms, it might be impossible to systematically derive a single invariant underlying form by working backwards from them. Another approach is to list every possible form of each word in the lexicon, or at least several forms. Thus, the word that might have at least 117 underlying representations, or at least some substantial number from which the rest of the possible surface forms can be derived. The exact number is dependent on the narrowness of the transcription system, but the effect on the lexicon is the same: the number of forms would multiply greatly. This solution is a drastic departure from the traditional view of what a lexicon contains, and assumes that speakers and listeners do very little abstraction. Taking the logic of multiple underlying listings further, the forms listed need not be limited to those that receive differing transcriptions in a phonetics lab. Why
should that be limited to the 117 forms Greenberg (1999) found for it? Perhaps it should have a separate listing for every acoustically distinct pattern that can be realized for the word. This leads us to whole-word-based exemplar models of spoken word recognition (Goldinger 1998). In such a theory, every incoming speech token is measured on various acoustic characteristics, and placed into a “covering map” on all the relevant acoustic dimensions (Johnson 1997), with this information saved about each token a listener hears. Exemplar models were not developed to account for reduced speech. However, they already use an unusually detailed version of lexical representation, which can be viewed as the word category plus all the acoustic information about exemplars of that word heard in the past. An exemplar model would also save information about the speech style in which the listener heard a particular token. For example, if a listener hears a highly reduced token of and in fast, casual speech that was realized just as [q], and successfully recognizes it, the acoustic properties of this token and the fact that it was a token of and will be stored, and the fact that it occurred in casual, fast speech would also be stored. This might help the listener to avoid recognizing the acoustic pattern [q] as and in slow formal speech. It remains to be explicitly tested though whether an exemplar model of spoken word recognition would do any better than other models at identifying reduced speech tokens. Whether token-specific information is saved or not, identifying [qs] as the word this vs. just, for example, would present a challenge to any model. If past exemplars of these two words happen to fall into acoustically somewhat distinct clusters, an exemplar model might succeed (but other models would as well). However, if information about long-term speech rate across the utterance and syntactic and semantic context are more important than acoustic differences within the word, then an exemplar model might have no advantage. To sum up the issue of whether reduced speech affects our understanding of what constitutes a lexical representation, this depends on the degree of phonetic detail included in a theory’s representations. Theories that include considerable detail in lexical representations or memory (e.g. Articulatory Phonology, exemplar models) can readily accommodate reduced speech without a change to what constitutes a representation, although this does not guarantee that these models would succeed in generating or recognizing the correct forms. Theories with exclusively abstract lexical representations may require moderate or large numbers of separate underlying representations for each word in order to accommodate the variety of reduced surface forms that occur in normal speech.
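
The exemplar idea can be sketched in a few lines. The toy model below follows the general logic of Goldinger (1998) and Johnson (1997) but none of their actual parameters: each stored token carries an acoustic vector and the speech style it occurred in, and recognition sums similarity over stored exemplars, optionally weighting tokens whose style matches the current context.

import math

# Stored exemplars: (word label, acoustic measurements, speech style).
# The two acoustic dimensions and all values are invented for illustration.
MEMORY = [
    ("and", (0.9, 0.2), "careful"),
    ("and", (0.3, 0.1), "casual"),    # a heavily reduced token
    ("ant", (0.8, 0.7), "careful"),
    ("them", (0.4, 0.6), "casual"),
]

def similarity(x, y, sensitivity=4.0):
    """Exponentially decaying similarity over acoustic distance."""
    return math.exp(-sensitivity * math.dist(x, y))

def recognize(token, style=None, style_weight=2.0):
    """Sum similarity to each word's exemplars; exemplars heard in the same
    speech style as the current context count extra."""
    scores = {}
    for word, vec, ex_style in MEMORY:
        w = style_weight if style is not None and ex_style == style else 1.0
        scores[word] = scores.get(word, 0.0) + w * similarity(token, vec)
    return max(scores, key=scores.get)

# A token acoustically close to the stored reduced 'and':
print(recognize((0.32, 0.12), style="casual"))  # -> and

In this setup, storing the style of past tokens is what lets the reduced acoustic pattern count as evidence for and specifically in casual contexts, which is the mechanism suggested above; whether such a model actually outperforms abstractionist models on reduced speech remains, as noted, untested.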

6

Conclusions

At the outset, I suggested that the controversy over reduced speech is not one of how phonological theories should handle a particular phenomenon, but rather one of whether phonetic or phonological theories even should attempt to handle the phenomenon. It is clear that reduction exists, is extremely common, and is not peripheral to the system. But is reduced speech of any interest to phonetics or phonology? We return now to the possible answers to this question. Within applied work, there is a reason to include reduced speech: if we would like speech technology systems (speech synthesis or automatic speech recognition (ASR)) to make use of what we know about language, we should know as much
as possible about the everyday connected speech which humans use. While speakers may speak somewhat clearly to ASR systems in some applications, ASR systems have to recognize a great deal of reduction. The acoustic findings on reduction show that even relatively careful speech contains quite a number of reductions (Warner and Tucker 2007), so ASR systems are unlikely to escape the task of recognizing reduced forms. Another application for findings about reduction is language teaching. Students acquiring a second language in a classroom usually hear careful speech. Many of us have had the experience of arriving in the country of our L2, only to find that we cannot understand much at all. While that problem has many sources, pronunciation variability from casual speech reduction is likely to be one of them. (See also Shockey 2003 on the importance of reduced speech for both of these applications.) The second reason for considering reduction to be of interest to phonetics and phonology is theoretical. While no theory needs to account for every phenomenon, it seems reasonable for theories of phonology to be able to reach detailed surface forms that actually occur in corpora, unless the theory is intended to stay entirely at a level of abstraction. If the general purpose of phonology is to relate what speakers know about words to how they pronounce them, one might want the theory to be equipped to discuss and represent attested forms; if obtained from a corpus, these forms will include reduction. Some phonological theories can represent reduced forms, at least as transcriptions and perhaps with acoustic values. The modifications to these theories that allow them to model reduction usually were not introduced for that purpose, but the necessary formal mechanisms are now in place. For a theory of phonetics to rule out reduced speech as being outside its area of interest would be surprising. Phonetic theories clearly “are responsible for” speech as it is produced and perceived. Thus, theories of gestural coordination, of segment perception, of speaker normalization, of phoneme distinctiveness, etc. should be adaptable for reduced speech. Fortunately, many phonetic theories are less sensitive to representational issues than formal phonological theories, and Articulatory Phonology already provides a thoroughly implemented theory that can represent and model reduction, as discussed above. Models of segment perception (whether exemplar or more abstract; cf. Smits et al. 2006) can also potentially accommodate the detail of reduced segments, although these models have not generally been tested on reductions. One might argue that reduced speech should be considered during the development of all phonetic theories, even if reduction is not the primary interest, to see whether the theory would adapt to the speech that speakers and listeners use daily. Even if connected speech data would be too variable to test a theory on, it should at least be able to apply to more natural speech. For example, the stimuli used by Smits et al. (2006) are synthesized non-speech noises, because the exact controlled distribution of synthesized acoustic characteristics allows them to contrast several models’ predictions. However, one could test each model on how it distinguishes the reduced realizations of flap in Figures 79.4b and 79.6 from /Ï/ or from a vowel–vowel sequence, and the mechanism behind each theory could, in principle, work for reductions. 
When one spends a lot of time looking at reduced, spontaneous speech, the difference between that and careful speech can seem so pervasive that one begins to wonder why there is so much research on abstract, careful forms. The exact
details of, for example, locus equations (phonetic theory, involving the relationship between formant frequencies near a consonant and at vowel mid-point) or sonority sequencing (phonological theory) might be obscured or obliterated by the deletions, overlaps, mergers, and alterations of reduction. How can one determine a reliable locus equation for a place of articulation when the vowels can shift toward schwa, be deleted, or merge into a neighboring vowel, and the consonant might be deleted, or might surface with unexpected acoustic characteristics? How can one determine the sonority sequencing requirements when the number of syllables is unstable (two on the surface, five underlyingly, as in Arai 1999, or two vs. four in Figure 79.1), and manner and voicing of each consonant varies (e.g. Figure 79.2, where everything has become nasals and/or approximants)? The fields of phonetics and phonology have both invested considerable effort into working out the details of theories based on careful speech forms, with an assumption that those forms are representative. Spontaneous speech might make these detailed theories of careful speech seem pointless. Instead of asking whether theories should accommodate spontaneous speech reduction, one might ask whether theories should accommodate the forms of careful speech. However, it is clear that native speakers’ judgments of syllable structure are tapping into some real property of language, and that experimental findings on locus equations are as well. The same could be said of other phenomena in phonetics and phonology: spontaneous speech obscures the phenomenon, yet the phenomenon is clearly a real part of language. Also, it is easy to focus on the extreme reductions in spontaneous speech, but even casual conversation contains clearly articulated focused words as well. Furthermore, when one does not yet know much about a topic the field is starting to explore, it is certainly advisable to work with speech that is as controlled as possible. Phonetic and phonological studies that consider exclusively careful speech patterns do tell us about a real type of human language. However, what they tell us about is probably not the most common form of human language our auditory processing systems encounter in daily life, inside or outside the classroom, the lab, or our homes. Even when hearing professional newscasting, we are likely to encounter far more reduction than most phonetics experiments or phonological data consider. There are arguments for both perspectives: that reduced speech is a specific facet of phonetic implementation irrelevant to formal phonology and to most topics of phonetics, and also that it is important to test and model reduced speech in phonological and phonetic theories. What is clear, though, is that reduced speech is a normal part of our daily experience as speakers and hearers, not a rare or marginal phenomenon.

REFERENCES

Arai, Takayuki. 1999. A case study of spontaneous speech in Japanese. In John J. Ohala, Yoko Hasegawa, Manjari Ohala, Daniel Granville & Ashlee Bailey (eds.) Proceedings of the 14th International Congress of Phonetic Sciences, 615–618. Berkeley: Department of Linguistics, University of California, Berkeley.
Bard, Ellen Gurman, Catherine Sotillo, M. Louise Kelly & Matthew P. Aylett. 2001. Taking the hit: Leaving some lexical competition to be resolved post-lexically. Language and Cognitive Processes 16. 731–737.
Barry, William & Bistra Andreeva. 2001. Cross-language similarities and differences in spontaneous speech patterns. Journal of the International Phonetic Association 31. 51–66.
Berry, Jeff. 2009. Tone space reduction in Mandarin Chinese. Unpublished ms., University of Arizona.
Boersma, Paul & Bruce Hayes. 2001. Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry 32. 45–86.
Bolozky, Shmuel. 1977. Fast speech as a function of tempo in Natural Generative Phonology. Journal of Linguistics 13. 217–238.
Bradlow, Ann R. 2002. Confluent talker- and listener-related forces in clear speech production. In Carlos Gussenhoven & Natasha Warner (eds.) Laboratory phonology 7, 241–273. Berlin & New York: Mouton de Gruyter.
Bradlow, Ann R. & Tessa Bent. 2002. The clear speech effect for non-native listeners. Journal of the Acoustical Society of America 112. 272–284.
Browman, Catherine P. & Louis Goldstein. 1989. Articulatory gestures as phonological units. Phonology 6. 201–251.
Cheng, Chierh & Yi Xu. 2008. When and how disyllables are contracted into monosyllables in Taiwan Mandarin? Journal of the Acoustical Society of America 123. 3864.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Connine, Cynthia M. 2004. It’s not what you hear but how often you hear it: On the neglected role of phonological variant frequency in auditory word recognition. Psychonomic Bulletin and Review 11. 1084–1089.
Cutler, Anne. 1998. The recognition of spoken words with variable representation. In Proceedings of the ESCA Workshop on Sound Patterns of Spontaneous Speech, 83–92. Aix-en-Provence: ESCA.
Dalby, Johnathan. 1986. Phonetic structure of fast speech in American English. Ph.D. dissertation, Indiana University.
Dressler, Wolfgang U. 1975. Methodisches zu Allegroregeln. In Wolfgang U. Dressler & F. V. Mareš (eds.) Phonologica 1972, 219–234. Munich: Fink.
Engstrand, Olle & Diana Krull. 2001. Simplification of phonotactic structures in unscripted Swedish. Journal of the International Phonetic Association 31. 41–50.
Ernestus, Mirjam. 2000. Voice assimilation and segment reduction in casual Dutch: A corpus-based study of the phonology–phonetics interface. Ph.D. dissertation, University of Utrecht.
Ernestus, Mirjam, R. Harald Baayen & Rob Schreuder. 2002. The recognition of reduced word forms. Brain and Language 81. 162–173.
Flemming, Edward. 1995. Auditory representations in phonology. Ph.D. dissertation, University of California, Los Angeles.
Goldinger, Stephen D. 1998. Echoes of echoes? An episodic theory of lexical access. Psychological Review 105. 251–279.
Greenberg, Steven. 1999. Speaking in shorthand: A syllable-centric perspective for understanding pronunciation variation. Speech Communication 29. 159–176.
Hale, Mark & Charles Reiss. 2000. “Substance abuse” and “dysfunctionalism”: Current trends in phonology. Linguistic Inquiry 31. 157–169.
Hicks Kennard, Catherine. 2006. Gender and command: A sociophonetic analysis of female and male drill instructors in the United States Marine Corps. Ph.D. dissertation, University of Arizona.
Hockett, Charles F. 1955. A manual of phonology. Baltimore: Waverly Press.
Johnson, Keith. 1997. Speech perception without speaker normalization: An exemplar model. In Keith Johnson & John W. Mullennix (eds.) Talker variability in speech processing, 145–165. San Diego: Academic Press.
Johnson, Keith. 2004. Massive reduction in conversational American English. In Kyoko Yoneyama & Kikuo Maekawa (eds.) Spontaneous speech: Data and analysis, 29–54. Tokyo: National Institute for Japanese Language.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Keune, Karen, Mirjam Ernestus, Roeland van Hout & R. Harald Baayen. 2005. Social, geographical, and register variation in Dutch: From written “mogelijk” to spoken “mok.” Corpus Linguistics and Linguistic Theory 2. 183–223.
Kohler, Klaus J. 2001. Articulatory dynamics of vowels and consonants in speech communication. Journal of the International Phonetic Association 31. 1–16.
Koopmans-van Beinum, Florian J. 1980. Vowel contrast reduction: An acoustic and perceptual study of Dutch vowels in various speech conditions. Amsterdam: Academische Pers.
Kryter, Karl D. 1946. The effect of plugging the ears on the intelligibility of speech in noise. Journal of the Acoustical Society of America 18. 249.
Kuperman, Victor, Mark Pluymaekers, Mirjam Ernestus & R. Harald Baayen. 2007. Morphological predictability and acoustic duration of interfixes in Dutch compounds. Journal of the Acoustical Society of America 121. 2261–2271.
Ladefoged, Peter. 2003. Phonetic data analysis: An introduction to fieldwork and instrumental techniques. Malden, MA & Oxford: Wiley-Blackwell.
Ladefoged, Peter, Iris Kameny & William Brackenridge. 1976. Acoustic effects of style of speech. Journal of the Acoustical Society of America 59. 228.
Lennes, Mietta, Nina Alarotu & Martti Vainio. 2001. Is the phonetic quality of unaccented words unpredictable? An example from spontaneous Finnish. Journal of the International Phonetic Association 31. 127–138.
Löfqvist, Anders & Vincent L. Gracco. 1999. Interarticulator programming in VCV sequences: Lip and tongue movements. Journal of the Acoustical Society of America 105. 1864–1876.
Maekawa, Kikuo & Hideaki Kikuchi. 2005. Corpus-based analysis of vowel devoicing in spontaneous Japanese: An interim report. In Jeroen van de Weijer, Kensuke Nanjo & Tetsuo Nishihara (eds.) Japanese voicing, 205–228. Berlin & New York: Mouton de Gruyter.
McLennan, Conor T., Paul A. Luce & Jan Charles-Luce. 2003. Representation of lexical form. Journal of Experimental Psychology: Learning, Memory, and Cognition 29. 539–553.
McLennan, Conor T., Paul A. Luce & Jan Charles-Luce. 2005. Representation of lexical form: Evidence from studies of sublexical ambiguity. Journal of Experimental Psychology: Human Perception and Performance 31. 1308–1314.
Mehta, Gita & Anne Cutler. 1988. Detection of target phonemes in spontaneous and read speech. Language and Speech 31. 135–156.
Mitterer, Holger & Mirjam Ernestus. 2006. Listeners recover /t/s that speakers reduce: Evidence from /t/-lenition in Dutch. Journal of Phonetics 34. 73–103.
Muthusamy, Yeshwant K., Ronald A. Cole & Beatrice T. Oshika. 1992. The OGI multilanguage telephone speech corpus. In Proceedings of the 1992 International Conference on Spoken Language Processing, 895–898.
Nagy, Naomi & Bill Reynolds. 1997. Optimality Theory and variable word-final deletion in Faetar. Language Variation and Change 9. 37–55.
Nakamura, Masanobu, Koji Iwano & Sadaoki Furui. 2007. The effect of spectral space reduction in spontaneous speech on recognition performances. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2007), vol. 4, 473–476.
Nicolaidis, Katerina. 2001. An electropalatographic study of Greek spontaneous speech. Journal of the International Phonetic Association 31. 67–85.
Norris, Dennis, James M. McQueen & Anne Cutler. 2000. Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences 23. 299–370.
Ogasawara, Naomi. 2007. Processing of speech variability: Vowel reduction in Japanese. Ph.D. dissertation, University of Arizona.
Pierrehumbert, Janet B. 1994. Knowledge of variation. Papers from the Annual Regional Meeting, Chicago Linguistic Society 30. 232–256.
Pluymaekers, Mark, Mirjam Ernestus & R. Harald Baayen. 2005a. Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America 118. 2561–2569.
Pluymaekers, Mark, Mirjam Ernestus & R. Harald Baayen. 2005b. Articulatory planning is continuous and sensitive to informational redundancy. Phonetica 62. 146–159.
Pluymaekers, Mark, Mirjam Ernestus, R. Harald Baayen & Geert Booij. 2010. Morphological effects on fine phonetic detail: The case of Dutch -igheid. In Cécile Fougeron, Barbara Kühnert, Mariapaola D’Imperio & Nathalie Vallée (eds.) Laboratory phonology 10, 511–532. Berlin & New York: Mouton de Gruyter.
Ranbom, Larissa J. & Cynthia M. Connine. 2007. Lexical representation of phonological variation in spoken word recognition. Journal of Memory and Language 57. 273–298.
Raymond, William D., Robin Dautricourt & Elizabeth Hume. 2006. Word-internal /t,d/ deletion in spontaneous speech: Modeling the effects of extra-linguistic, lexical, and phonological factors. Language Variation and Change 18. 55–97.
Richter, Elise. 1930. Beobachtungen über Anglitt und Abglitt an Sprachkurven und umgekehrt laufenden Phonogrammplatten. In Paul Menzerath (ed.) Berichte über die I. Tagung der Internationalen Gesellschaft für experimentelle Phonetik, 87–90. Bonn: Scheur.
Shattuck-Hufnagel, Stephanie & Nanette M. Veilleux. 2007. Robustness of acoustic landmarks in spontaneously-spoken American English. In Jürgen Trouvain & William J. Barry (eds.) Proceedings of the 16th International Congress of Phonetic Sciences, 925–928. Saarbrücken: Saarland University.
Shockey, Linda. 2003. Sound patterns of spoken English. Cambridge, MA & Oxford: Blackwell.
Smiljanić, Rajka & Ann R. Bradlow. 2005. Production and perception of clear speech in Croatian and English. Journal of the Acoustical Society of America 118. 1677–1688.
Smiljanić, Rajka & Ann R. Bradlow. 2008. Clear speech intelligibility and accentedness ratings for native and non-native talkers and listeners. Journal of the Acoustical Society of America 123. 3883.
Smits, Roel, Joan Sereno & Allard Jongman. 2006. Categorization of sounds. Journal of Experimental Psychology: Human Perception and Performance 32. 733–754.
Spinelli, Elsa, James M. McQueen & Anne Cutler. 2003. Processing resyllabified words in French. Journal of Memory and Language 48. 233–254.
Strik, Helmer, Micha Hulsbosch & Catia Cucchiarini. 2010. Analyzing and identifying multiword expressions in spoken language. Language Resources and Evaluation 44. 41–58.
Tseng, Shu-Chuan. 2005. Syllable contractions in a Mandarin conversational dialogue corpus. International Journal of Corpus Linguistics 10. 63–83.
Tucker, Benjamin V. 2007. Spoken word recognition of the reduced American English flap. Ph.D. dissertation, University of Arizona.
Warner, Natasha. 2002. The phonology of epenthetic stops: Implications for the phonetics–phonology interface in Optimality Theory. Linguistics 40. 1–27.
Warner, Natasha. Forthcoming. Methods for studying spontaneous speech. In Abigail C. Cohn, Cécile Fougeron & Marie K. Huffman (eds.) Handbook of laboratory phonology. Oxford: Oxford University Press.
Warner, Natasha & Benjamin V. Tucker. 2007. Categorical and gradient variability in intervocalic stops. Paper presented at the 81st Annual Meeting of the Linguistic Society of America, Anaheim.
Warner, Natasha, Dan Brenner, Anna Woods, Benjamin V. Tucker & Mirjam Ernestus. 2009a. Were we or are we? Perception of reduced function words in spontaneous conversations. Journal of the Acoustical Society of America 125. 2655.
Warner, Natasha, Amy Fountain & Benjamin V. Tucker. 2009b. Cues to perception of reduced flaps. Journal of the Acoustical Society of America 125. 3317–3327.
Young, Robert & William Morgan. 1987. The Navajo language: A grammar and dictionary. Albuquerque: University of New Mexico Press.
Zwicky, Arnold M. 1972. On casual speech. Papers from the Annual Regional Meeting, Chicago Linguistic Society 8. 607–615.

80

Mergers and Neutralization

Alan C. L. Yu

1

Introduction

The notions of mergers and neutralization presuppose the concept of contrast. Two sounds are phonologically contrastive if they are in opposition with each other, i.e. if they are capable of differentiating the lexical meanings of two words in a particular language. The plosives [p] and [ph], for example, are in opposition in Cantonese (Yue Chinese) (e.g. [pa(T] ‘father’ vs. [pha(T] ‘to lay down’) but [b] and [p] are not. Contrast is not restricted to pairs of segments; classes of segments contrast as well. The aspiration opposition between [p] and [ph] finds analogs in other pairs of segments ([t] ~ [t h], [k] ~ [kh], [kw] ~ [kwh]). When a phonological opposition is suspended, neutralization or merger obtains. For example, Cantonese has no aspiration opposition between plain and aspirated plosives in syllable-final position; all syllable-final plosives are voiceless and unreleased (e.g. [t ha(p¬™] ‘pagoda’, [pa(t¬™] ‘eight’, [kD(k¬™] ‘corner, horn’). The terms merger and neutralization are often employed in complementary contexts; merger often characterizes a diachronic and neutralization a synchronic collapse of contrast. The diachronic–synchronic divide between merger and neutralization is more apparent than real, however; the two notions are the two faces of the same coin. The notion of merger is often applied in the context where a contrast reduction leaves no trace of the contrast in the synchronic system; a context-free contrast reduction is the clearest example of this. Neutralization applies to context-dependent contrast reduction; traces of a contrast remain in some contexts, but not in others. Certain varieties of English, for example, merge the voiceless labial-velar fricative /R/ with its voiced counterpart /w/ (Minkova 2004). Thus the words whine and wine are homophonous; no remnant of this /R/ ~ /w/ contrast is evidenced in the grammar of speakers of these dialects. In certain dialects of Cantonese (most prevalently in Guangzhou, Hong Kong, and Macao; Bauer and Benedict 1997), the distinction between plain and labial velars is not maintained before the back rounded vowel /D/. The collapse of the plain vs. labial velars distinction is referred to as a matter of neutralization because the contrast remains before vowels that are not /D/ (e.g. [kôn≠] ‘tight’ vs. [kwôn≠] ‘boil’). These instances of contrast reduction in English and Cantonese transpire diachronically, but one results in a merger (i.e. the /R/ ~ /w/ merger) and the other in neutralization (i.e. /k(h)/ ~ /kw(h)/ neutralization). In this chapter, I shall
collectively refer to mergers and neutralization in terms of contrast reduction. I shall further assume that the term neutralization refers to contrast reduction that results in alternation, while the term merger will refer to any reduction of contrast, both synchronically and diachronically. Thus, in the case of the /k(h)/ ~ /kw(h)/ contrast in Cantonese, /k(h)/ and /kw(h)/ merge before /D/ diachronically. The outcome of this merger is the neutralization of /k(h)/ and /kw(h)/ before /D/. This chapter begins with a review of the range of contrast reduction (§2). §3 surveys several theories that attempt to explain the sources of contrast reduction. §4 concludes with a discussion of the challenges to a purely phonological conception of contrast reduction.

2

Typology of contrast reduction

Contrast reduction manifests itself in three different ways: structure-preserving reduction, structure-building reduction, and free variation.1 Structure-preserving reduction characterizes scenarios where two or more distinct sounds have, after the reduction, a form that is physically similar to that of one of the sounds appearing in the position of differentiation (e.g. /k(h)/ ~ /kw(h)/ neutralization; cf. Kiparsky 1985; chapter 76: structure preservation: the resilience of distinctive information). Formally, a reduction of contrast m is structure-preserving if and only if m turns two (or more) distinct sounds into only one of the two sounds, to the exclusion of the other. The merger of /R/ and /w/ is structure-preserving, since the result of the merger leaves /w/ as the surviving sound. Regressive assimilation of voicing is another instance of structure-preserving contrast reduction. For example, in Dutch, the distinction between voiced and voiceless plosives is suspended preconsonantally (Ernestus and Baayen 2003). However, the result of neutralization differs depending on the nature of the following consonant. For example, before a voiced plosive, the /t/ ~ /d/ contrast in verwijten [verseitHn] ‘reproach-inf’ and verwijden [verseidHn] ‘widen-inf’ neutralizes toward /d/ (verwijt bijna [verseid beina(] ‘reproach almost’ vs. verwijd bijna [verseid beina(] ‘widen almost’). However, before a nasal, neutralization is toward /t/ (verwijt niet [verseit nit] ‘reproach not’ vs. verwijd niet [verseit nit] ‘widen not’). Contrast reduction is structure-building when the outcome of contrast reduction is a sound intermediate between the normal realization of the two phonemes. Final-consonant voicing neutralization in Cantonese is a case in point. Stops in syllable-final position are unreleased, and thus phonetically non-contrastive in terms of aspiration (chapter 69: final devoicing and final laryngeal neutralization). Another celebrated case of structure-building reduction is flapping in English. The /t/ ~ /d/ contrast in English is suspended intervocalically where the coronal in question is immediately followed by an unstressed vowel (e.g. heed [’hi(d] vs. heat [’hi(t], but ladder [’læ7Ì] vs. latter [’læ7Ì]) (chapter 113: flapping in american english).2

1 Unless noted otherwise, I shall abstract away from the issue of context-sensitivity in what follows.
2 The definition of neutralization adopted here differs from Kiparsky’s (1976: 169) formulation of a neutralizing rule, which states that a rule of the form A → B / XC __ DY is neutralizing iff there are strings of the form CBD in the input to the rule. Certain structure-building neutralizing rules, such as flapping in English, are not considered neutralizing from Kiparsky’s perspective, since the product of the rule is not phonemic in the language.


When contrast reduction leads to a form varying between two or more variants, this is referred to as free variation (see chapter 92: variability). For a large number of Cantonese speakers, syllable-initial [n] is in free variation with [l] (Bauer and Benedict 1997). Thus, words like [nej≠] ‘you’ and [na(nÆ] ‘difficult’ are often pronounced with initial [l], thus merging with [lej≠] ‘Li (surname)’ and [la(nÆ] ‘orchid’, respectively. The rate of [n] vs. [l] usage varies according to age and gender of the speaker, as well as the register of speaking (e.g. read speech vs. conversational speech).

2.1

Positions of contrast reduction

Contrast (chapter 2: contrast) is often restricted to certain positions within the word: the syllable peak (rather than the margin; chapter 33: syllable-internal structure), the onset (rather than the coda; chapter 55: onsets), the stem (rather than the affix; chapter 104: root–affix asymmetries), the stressed syllable (chapter 40: the foot), or the edge of the morphological domains (chapter 50: tonal alignment). Washo (Hokan), for example, only allows voiceless liquids and nasals in onset position (Jacobsen 1964). Isthmus Zapotec (Oto-manguean) contrasts glottalized and modal-voiced vowels, but only in stressed positions (Bueno-Holle 2009). Hausa (Chadic) has a five-vowel system (/i e a o u/), with a long–short distinction which is reliably distinguished only in final position (Steriade 1994). Ngalakan (Australian) has a five-vowel system (/i e a o u/), but mid vowels in Ngalakan are restricted to the edges of roots (Baker 1999: 72–73); if there is only one mid vowel in a root, it must appear in an edgemost syllable (i.e. initial /ceraÍa/ ‘woman’s ceremony’ or final /curuwe-/ ‘rush’). If there is more than one mid vowel, they must occur in contiguous syllables (/caworo/ ‘patrilineal clan’) or every vowel in the root must be a mid vowel (/koweleI?(-mi+)/ ‘beckon to’). !Xóõ (Bushman) contrasts consonants with clicks and consonants without click accompaniment, but only in initial syllables (Traill 1985). In Etung (Bantu), falling and rising tones (HL, H↓H, LH) are restricted to the final syllable of phonological words, but there is no restriction on the occurrence of level tones (Edmondson and Bendor-Samuel 1966). In Lushootseed (Central Salishan), glottalized consonants are only found in roots and lexical suffixes; grammatical suffixes never have glottalized consonants (Urbanczyk 1996: 46). Contrast restrictions might also differ across word types. For example, in a cross-linguistic survey of 32 languages having 26 consonants or more, Willerman (1994) found that pronouns made significantly less use of the palato-alveolar, retroflex, uvular, and pharyngeal places than other places of articulation and of fewer laterals, affricates, trills, clicks, ejectives, and aspirated segments (see also chapter 102: category-specific effects for differences between nouns and verbs). Loci of contrast reduction are not always characterizable in structural terms. Steriade (1994) observes that languages with a retroflexion contrast in the apicals (e.g. /t/ vs. /Í/) often neutralize the contrast in initial or postconsonantal positions, but allow the contrast in post-vocalic position (chapter 46: positional effects in consonant clusters). The position of retroflexion neutralization is difficult to capture in prosodic terms, since post-vocalic position can be either within or across a prosodic domain (e.g. the coda of a syllable and a syllable onset in intervocalic position). Obstruents in Lithuanian contrast in terms of voicing (Senn 1966; Steriade 1997). However, the voicing contrast is supported only before sonorants
(skobnis ‘table’; bãdmetys ‘year of famine’) and not elsewhere. Voicing is neutralized word-finally (daug [dauk] ‘much’; kàd [kat] ‘that’) and in pre-obstruent position (dèg-ti [kt] ‘burn-inf’, míelas draugas [zd] ‘dear friend’).

2.2

Common triggers and targets of contrast reduction

Languages with contrast reduction often exhibit striking parallelism in the direction of merger and neutralization. Non-assimilatory neutralization of laryngeal contrasts in word-final and preconsonantal positions is often structure-preserving; the preserved segments are generally voiceless. Neutralization toward voiced or ejective is rare, if not non-existent.3 Reduction of vocalic contrasts in unstressed positions is commonplace across the world’s languages (chapter 26: schwa). The vast majority of such reductions involve the neutralization of vowel nasalization, quantity, or height. Nasal and oral vowels, for example, are often only contrastive in stressed syllables (e.g. Copala Trique (Hollenbach 1977); Guaraní (Beckman 1998: 158)). Contrasts in vocalic quantity are frequently neutralized toward the short variant in unstressed syllables. Kolami, for example, only contrasts long and short vowels in initial syllables, which are always stressed (Emeneau 1961: 6–7). Quantity contrasts may also neutralize toward the long variant under certain circumstances. For example, a vowel following a consonant–glide sequence must be long (/ak-a/ ‘ask!’ vs. /kw-a(k-a/ ‘to ask’; Myers and Hansen 2005: 318) in Rwanda (Bantu), which has a contrast in vowel length ([gusi(ßa] ‘to be absent’ vs. [gusißa] ‘to erase’; Kimenyi 1979: 1). Reduction in vowel height in unstressed position often favors one of two outcomes: the unstressed vowel may become either [a] or [H]. In Belarusian, for example, mid vowels /e o/ reduce to [a] ([’no:i] ‘legs’ vs. [na’:a] ‘leg’; [’reki] ‘rivers’ vs. [ra’ka] ‘river’; Crosswhite 2004: 192); thus the five vowels found in stressed syllables, /i e a o u/, are reduced to three, [i a u], in the unstressed syllables. The seven-vowel system in Central Eastern Catalan (/i e e a D o u/) is only evident in stressed syllables; in unstressed syllables, only three vowel qualities, [i H u], are allowed; underlying /e e a/ become [H], while /u o D/ become [u], as shown in (1). Vocalic contrast reductions along other featural dimensions are rare and are often secondary to height neutralization in the same system (Barnes 2002).

(1)

Central Eastern Catalan (Barnes 2002: 37)

’riw     ‘river’           ri’wet     ‘river (dim)’
’new     ‘snow’            nH’wetH    ‘snow (dim)’
’mel     ‘honey’           mH’letH    ‘honey (dim)’
’palH    ‘shovel’          pH’etH     ‘shovel (dim)’
’rDÏH    ‘wheel’           ru’ÏetH    ‘wheel (dim)’
’monH    ‘monkey (fem)’    mu’netH    ‘monkey (fem dim)’
’ku7H    ‘cure’            ku’7etH    ‘cure (dim)’

The targets of assimilatory neutralization show cross-linguistic similarities as well (Cho 1990; Ohala 1990; Jun 1995; Steriade 2001; de Lacy 2002, 2006). For example, obstruents are often voiced after nasals (Pater 1999). Nasals in turn frequently assimilate to the place of articulation of the following consonant, as illustrated by the examples from Yoruba (Niger-Congo) in (2).

3 Yu (2004) reports a case of final neutralization toward the voiced series in Lezgian (North Caucasian); the neutralization is restricted only to monosyllabic nouns, however. The default direction of neutralization in final position is toward the voiceless aspirated series.

(2)

Yoruba nasal assimilation (Pulleyblank 1995: 5)

a. bá    3bá      ‘overtake’
   fF    ≥fF      ‘break’
b. tà    Utà      ‘sell’
   sj    Usj      ‘sleep’
c. jó    ∞jó      ‘dance’
   je    ∞je      ‘eat’
d. kD    ØkD      ‘write’
   wí    Øwí      ‘say’
e. gbF   ØmgbF    ‘hear, understand’
   kpa   Ømkpa    ‘kill’

Among obstruents, coronals are most susceptible to place assimilation. In Korean, for example, morpheme-final coronals assimilate to dorsals or labials (3a). Morpheme-final labials assimilate to dorsals (3b), but no assimilation is observed when the following consonant is coronal. Dorsals are inert; they assimilate neither to a following labial nor to a following coronal (3c) (chapter 22: consonantal place of articulation).4

(3)

Korean place assimilation (Hume 2003: 7–8)5

a. /mit+ko/      [mikk’o]                  ‘believe and’
   /mith+pota/   [mipp’ota]                ‘more than the bottom’
b. /ip+ko/       [ikk’o]                   ‘wear and’
   /nop+ta/      [nopt’a]    *[nott’a]     ‘high’
c. /nok+ta/      [nokt’a]    *[nott’a]     ‘melt’
   /kuk+pota/    [kukp’ota]  *[kupp’ota]   ‘more than soup’

3

Theories of contrast reduction

Early discussions of contrast reduction focused on how to characterize the outcome of context-specific contrast reduction. That is, how would a theory of phonemics capture the fact that the contrast between two or more sounds in some positions of a word or a syllable is not maintained in other positions (chapter 11: the phoneme)? The main analytic puzzle neutralization presents to structuralist phonemics concerns the violation of the bi-uniqueness condition (i.e. of one-to-one mapping between allophones and phonemes). The Prague School resolves this indeterminacy by positing archiphonemes in contexts of neutralization (Trubetzkoy 1939); archiphonemes are units that represent the common features of phonemes whose contrastive property is neutralized in specific contexts. In Yoruba, for example, a preconsonantal nasal would be treated as an archiphoneme, N (e.g. [3bá] /∑bá/ ‘overtake’). The archiphonemic treatment of neutralization anticipates the underspecification treatment of neutralized segments made possible by the reconceptualization of the phonemes as sets of distinctive features. In an underspecification model (chapter 7: feature specification and underspecification), a preconsonantal nasal in Yoruba, for example, would be specified for the feature [+nasal], while the surface realization of this underspecified nasal would be specified contextually. In addition to the issue of representation, theories of neutralization also attempt to explain the causes for neutralization. That is, why do cross-linguistic parallelisms abound in cases of contrast reduction? Two main approaches have been advanced: structure-based and cue-based. This section reviews how these two approaches conceptualize the problem of contrast reduction and what mechanisms account for the observed typological tendencies.

4 See Silverman (2010) for a thorough review of neutralizing processes in Korean.
5 C′ indicates a tense consonant.

3.1

Licensing and markedness

Structure-based approaches maintain that certain prosodic or structural positions disfavor the maintenance of phonological contrasts. The phonological grammar may either prohibit a contrast in a given structural position in terms of a filter constraint (4) or impose a licensing condition which specifies how a phonological contrast must be configured in order to be realized in a given position within the word (5) (see also chapter 46: positional effects in consonant clusters).

(4)

Positional neutralization: Filter/negative version (Steriade 1995: 120)

*αF in x, where x is defined prosodically or morphologically.

(5)

Positional neutralization: licensing/positive version (Steriade 1995: 121)6

αF must be licensed in x, where x is defined prosodically or morphologically.

Codas in Pali, for example, must be the first half of a geminate structure (6a) or nasal (6b). Coda nasals must be placeless, or homorganic with the following stop.

(6)

Pali cluster simplification (Zec 1995: 157)

a. sup+ta    sutta    ‘to sleep’
   tap+ta    tatta    ‘to shine’
   caj+ta    catta    ‘give out’
b. dam+ta    danta    ‘to tame’
   vam+ta    vanta    ‘to investigate’

Coda constraints such as those in (7) prevent illicit codas. (7a) states that “if there is a syllable-final consonant which is singly linked, its melody cannot be [−nasal]”; (7b) states that “if there is a syllable-final consonant which is singly linked, its melody must be [+nasal].” 6

Within Optimality Theory, two types of constraints have been posited to account for positional asymmetries in the realization of segmental features. See chapter 46: positional effects in consonant clusters for discussion.

Alan C. L. Yu

7 (7)

Codas in Pali (following Itô 1986) a.

b.

*C]q [−nas]

C]q [+nas]

Geminates, where the melody is doubly linked both to the coda of one syllable and to the onset of the following syllable (chapter 37: geminates), violate neither (7a) nor (7b), because the melody is not uniquely linked to a [nasal] feature. (8)

Root] [−nas]

Root [−nas]

The same approach can be applied to the fact that codas in Pali are either placeless, as in the case of nasal codas, or homorganic with the following stop (9). (9)

Coda place in Pali (following Itô 1989: 224) *C]q Place

A coda consonant can be specified for place as long as the Place node is not uniquely linked to the coda consonant. If a coda nasal cannot share Place with another segment, it will remain placeless. (10)

Root]q Root [+nas]

[Place]

The restrictiveness of potential triggers and targets of neutralization have provided fruitful venues for discovering the organization of features at the phonological level. There have been many proposals for the organization of features into a hierarchical set structure within Autosegmental Phonology (see McCarthy 1988 and Clements and Hume 1995 for overviews of proposals in feature geometry; see also chapter 27: the organization of features). By assuming that the different features for place of articulation are hierarchically linked to a Place node, nasal place assimilation in Yoruba can be elegantly and economically modeled in terms of the spreading of the place node (11). (11)

Place assimilation in a feature-geometric organization (Pulleyblank 1995: 9) [+nas] Root tier Place tier

Mergers and Neutralization

8

Within this type of feature-geometric framework, non-assimilatory contrast reductions are generally treated as a matter of delinking of branches of a feature free. In the Kelantan dialect of Malay (Austronesian), for example, /p t k/ neutralize to [?], and /s f/ become [h] (12). (12)

Kelantan Malay place neutralization (Teoh 1988)

/ikat/      ikaʔ      'tie'
/dakap/     dakaʔ     'embrace'
/səsak/     səsaʔ     'crowded'
/hampas/    hapah     'husk'

Debuccalization to [ʔ] and [h] can be viewed as delinking of the Place node (13). The fact that /p t k/ debuccalize to [ʔ], but /s/ to [h], can be attributed to the fact that non-place features of the underlying segment (e.g. [continuant]) are left intact.

(13)

Formalization of /s/ → [h] and /p t k/ → [ʔ]: the segment's Root node retains [−voice, ±cont] and its other non-place features, while the association line to the Place node is severed (marked with =).

Adopting the framework of Optimality Theory (OT; Prince and Smolensky 1993), which determines the contrastive status of a feature F via the interaction of a constraint that requires the preservation of F and constraints on the rest of the system (Kirchner 1997), Lombardi (2001b) analyzes place neutralization such as (12) in terms of the interaction between consonantal place faithfulness and a family of universally ranked place markedness constraints ((14); cf. Prince and Smolensky 1993; Smolensky 1993; see also chapter 63: markedness and faithfulness constraints). Unlike the position-specific constraints in (4) and (5), this family of markedness constraints captures the idea that pharyngeals, including /ʔ h/ (McCarthy 1994), are less marked than coronals in general, irrespective of position. The tableau in (15) illustrates a markedness-based treatment of coda place neutralization.

(14)

Place hierarchy (Lombardi 2001b: 29)
*Dors/*Lab >> *Cor >> *Phar

(15)

Place neutralization in Kelantan Malay (Lombardi 2001b: 31)7

      /ikat/     Max   Dep   *Dors/*Lab   *Cor   *Phar   Max(Place)
      a. ikat                    *         *!
      b. ikati          *!       *         *
      c. ika      *!             *                            *
   ☞  d. ikaʔ                    *                  *         *

7 The CodaCons constraint, which bans any Place feature in coda consonants, is omitted from this tableau, because it is not directly relevant in the present evaluation.


The position-specificity of place neutralization is captured by the universal ranking of Ident(OnsPlace) >> Max(Place). Ident(OnsPlace) requires that an onset consonant have the Place of its input correspondent, while Max(Place) requires that an underlying Place feature have an output correspondent. Assuming that Ident(OnsPlace), which preserves underlying place features in the onset only, always outranks the place hierarchy in (14), a place distinction in coda position is neutralized due to the dominance of the markedness constraints in (14) over Max(Place). Markedness violations in coda position cannot be resolved by deleting the offending coda, due to the high ranking of Max, which penalizes deletion, nor can they be resolved by the addition of a final vowel, due to the high ranking of Dep, which penalizes epenthesis. Place distinctions neutralize toward [ʔ], since *Phar, which penalizes [ʔ], among other things, is ranked lower than the other place markedness constraints; the candidate with a [ʔ] coda (15d) is thus preferred over the fully faithful candidate (15a), which has a coronal coda.
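The logic of the evaluation in (15) can be sketched procedurally: candidates are compared on their violation profiles, and the highest-ranked constraint on which they differ decides. The snippet below is a schematic illustration only, with violation counts hand-coded from the tableau rather than computed from the forms themselves.

```python
# Sketch of the OT evaluation in (15). Choosing the candidate whose
# violation vector is lexicographically smallest under the ranking is
# equivalent to letting the first differentiating constraint decide.

RANKING = ["Max", "Dep", "*Dors/*Lab", "*Cor", "*Phar", "Max(Place)"]

VIOLATIONS = {  # hand-coded from tableau (15)
    "ikat":  {"Max": 0, "Dep": 0, "*Dors/*Lab": 1, "*Cor": 1, "*Phar": 0, "Max(Place)": 0},
    "ikati": {"Max": 0, "Dep": 1, "*Dors/*Lab": 1, "*Cor": 1, "*Phar": 0, "Max(Place)": 0},
    "ika":   {"Max": 1, "Dep": 0, "*Dors/*Lab": 1, "*Cor": 0, "*Phar": 0, "Max(Place)": 0},
    "ikaʔ":  {"Max": 0, "Dep": 0, "*Dors/*Lab": 1, "*Cor": 0, "*Phar": 1, "Max(Place)": 1},
}

def optimal(violations, ranking):
    # Lexicographic comparison over the ranked violation vectors.
    return min(violations, key=lambda cand: [violations[cand][c] for c in ranking])

print(optimal(VIOLATIONS, RANKING))  # ikaʔ
```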

3.2 Richness of cues and contrast maintenance

As the last case study illustrates, the notion of markedness is often invoked to account for the directionality of contrast reduction (chapter 4: markedness). Laryngeal neutralization in coda position is said to favor voicelessness, because laryngeal features such as [voice] and [constricted glottis] are more marked than voicelessness (Lombardi 1991; chapter 69: final devoicing and final laryngeal neutralization). Similarly, the fact that vowels in unstressed position often neutralize to schwa is often attributed to the unmarkedness of schwa (chapter 26: schwa). The definition of markedness is a matter of great debate, however (see chapter 4: markedness). In an effort to provide an objective basis for markedness, some scholars have proposed to ground the notion in speakers' partial understanding of the physical conditions under which speech is produced and perceived. This phonetically based notion of markedness has led to the development of a cue-based approach to contrast reduction (see Hayes et al. 2004 and references therein). The basic assumption of cue-based approaches is that a contrast is suspended in positions where the relevant contrast-supporting cues are diminished; a contrast in such cue-impoverished environments may be maintained only at the cost of additional articulatory maneuvers. A contrast is licensed in positions that are rich in perceptual cues that maximize the contrast's perceptibility.

Alveolars and retroflexes, for example, are most easily distinguished by their VC transition profiles. Positions where the VC transition is impoverished or non-existent, such as word-initial and post-consonantal positions, tend to be loci where the alveolar ~ retroflex contrast is eliminated (Steriade 1994). For example, the Australian language Bunuba contrasts apical alveolars and retroflexes word-medially (e.g. /biɖi/ 'thigh' vs. /widigi/ 'stick insect'), but only apical alveolars are found word-initially (Rumsey 2000). The only exception to this restriction is when a subsequent syllable contains a retroflex [ɖ ɳ ɭ]; in such instances (e.g. in the words for 'short' and 'heart'), long-distance retroflexion is assumed to be what licenses the presence of retroflexion word-initially (Hamann 2003). Even when VC transitions are present, however, retroflexes are often avoided in the environment of /i/. For example, the retroflex fricative and affricate series in several Chinese dialects are in complementary distribution with the alveo-palatals: before a high front vowel, only alveo-palatals are found, while the retroflexes occur elsewhere (Yip 1996). Hamann (2003) explains this avoidance of retroflexes in the environment of /i/ as a result of the articulatory incompatibility between the production of these segments: the flat tongue middle and retracted tongue back configuration of retroflexion cannot be combined with the high tongue middle and fronted tongue back necessary for front vowels.

Languages often restrict the distribution of contour tones to phonemic long vowels (e.g. Somali and Navajo), stressed syllables (e.g. Xhosa and Jemez), and word-final positions (Zhang 2001, 2002). While it is difficult to characterize these positions in structural or prosodic terms in a unifying way, they have in common rhyme durations that are long, sonorous, and high in intensity. This fact has led some researchers to hypothesize a long sonorous rhyme duration as the unifying factor for privileged contour tone licensers (Gordon 1999, 2001; Zhang 2001). Obstruents are often voiced after a nasal, resulting in voicing neutralization (Luyia (Niger-Congo) /N + p t k ts c/ → [mb nd ŋg nz ɲɟ]; Herbert 1986: 236). Hayes and Stivers (1995) attribute the preference for post-nasal voicing to the effects of "velar pumping," which arises from vertical motion of a closed velum, and of "nasal leak," the leakage of air through a nearly closed velar port during the coarticulatory period between oral and nasal segments.

Structure-based accounts have difficulties with languages, such as Lithuanian, which license laryngeal contrasts in pre-sonorant position, regardless of whether the following sonorant is tautosyllabic or heterosyllabic (see also Ancient Greek and Sanskrit; Steriade 1997). From a cue-based perspective, the reduction of laryngeal contrasts in preconsonantal and final positions follows from the fact that many of the relevant cues for the perception of voicing (closure voicing, closure duration, duration of the preceding vowel, F0 and F1 values in preceding and following vowels, VOT values, burst duration, and amplitude) are endangered in those positions (see also chapter 8: sonorants). The more impoverished the available perceptual cues are, the less sustainable the laryngeal contrast is. Thus, word-initial preconsonantal position is least hospitable to a contrast in voicing, while inter-sonorant position is most hospitable to its realization.

Formally, a cue-based account of contrast reduction may be modeled as the interaction between constraints on contrast maintenance and markedness constraints induced from phonetic knowledge (Steriade 1997; Hayes 1999). Steriade (1997), for example, models [voice] neutralization in terms of the interaction between the constraint Preserve[voice], which demands faithfulness to input voice values, and a fixed hierarchy of *Voice constraints, aligned to a voice perceptibility scale (16).

(16)

Mergers and Neutralization

10

are in complementary distribution with the alveo-palatals: before a high front vowel, only alveo-palatals are found, while the retroflexes occur elsewhere (Yip 1996). Hamann (2003) explains this avoidance of retroflexes in the environment /i/ as a result of the articulatory incompatibility between the production of these segments; a flat tongue middle and retracted tongue back configuration for retroflexion cannot be combined with the high tongue middle and fronted tongue back necessary for front vowels. Languages often restrict the distribution of contour tones to phonemic long vowels (e.g. Somali and Navajo), stressed syllables (e.g. Xhosa and Jemez), and word-final positions (Zhang 2001, 2002). While it is difficult to characterize these positions in structural or prosodic terms in a unifying way, they have in common rhyme durations that are long, sonorous, and high in intensity. This fact has led some researchers to hypothesize a long sonorous rhyme duration as the unifying factor for privileged contour tone licensers (Gordon 1999, 2001; Zhang 2001). Obstruents are often voiced after a nasal, resulting in voicing neutralization (Luyia (Niger- Congo) /N + p t k ts c/ → [mb nd Ig nz Jj]; Herbert 1986: 236). Hayes and Stivers (1995) attribute the preference for post-nasal voicing to the effects of “velar pumping,” which arises from vertical motion of a closed velum, and of “nasal leak,” the leakage of air through a nearly closed velar port during the coarticulatory period between oral and nasal segments. Structure-based accounts have difficulties accounting for languages, such as Lithuanian, which licenses laryngeal contrasts in pre-sonorant position, regardless of whether the following sonorant is tautosyllabic or heterosyllabic (see also Ancient Greek and Sanskrit; Steriade 1997). From a cue-based perspective, the reduction of laryngeal contrasts in preconsonantal and final positions follows from the fact that many of the relevant cues for the perception of voicing (closure voicing, closure duration, duration of preceding vowel, F0 and F1 values in preceding and following vowels, VOT values, burst duration, and amplitude) are endangered in those positions (see also chapter 8: sonorants). The more impoverished the available perceptual cues are, the less sustainable the laryngeal contrast is. Thus, word-initial preconsonantal position is least hospitable to a contrast in voicing, while inter-sonorant position is most ideal for voicing realization. Formally, a cue-based account of contrast reduction may be modeled as the interaction between constraints on contrast maintenance and markedness constraints induced from phonetic knowledge (Steriade 1997; Hayes 1999). Steriade (1997), for example, models [voice] neutralization in terms of the interaction between the constraint Preserve[voice], which demands faithfulness to input voice values, and a fixed hierarchy of *Voice constraints, aligned to a voice perceptibility scale (16). (16)

Scale of obstruent voicing perceptibility according to context (Steriade 1997: 11)8

V __ [+son] > V __ # > V __ [−son] > [−son] __ [−son], [−son] __ #, # __ [−son]

8 The > symbol in (16) indicates that voicing is more perceptible in the context to its left than in the context to its right.

A language with voicing licensed only before sonorants would have the following ranking:

(17)

Voice licensed before sonorants (Steriade 1997: 12)
*Voice / [−son] __ [−son], [−son] __ #, # __ [−son] >> *Voice / V __ [−son] >> *Voice / V __ # >> Preserve[voice] >> *Voice / V __ [+son]
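Because the *Voice hierarchy is fixed, the factorial typology predicted by (17) reduces to the question of where Preserve[voice] is interpolated into it: voicing survives in exactly those contexts whose *Voice constraint Preserve[voice] dominates. A hypothetical sketch of this prediction (the function and its parameter are my own, for illustration):

```python
# Fixed *Voice hierarchy projected from (16); a language is characterized
# by how many *Voice constraints (from the least perceptible end of the
# scale) outrank Preserve[voice]. Context labels follow (16).

CONTEXTS = [                                   # most perceptible first
    "V __ [+son]",
    "V __ #",
    "V __ [-son]",
    "[-son] __ [-son], [-son] __ #, # __ [-son]",
]

def licensed_contexts(n_above_preserve):
    """Contexts licensing [voice], given how many *Voice constraints
    outrank Preserve[voice]."""
    return CONTEXTS[:len(CONTEXTS) - n_above_preserve]

# The ranking in (17): three *Voice constraints dominate Preserve[voice].
print(licensed_contexts(3))  # ['V __ [+son]']

# Licensed sets are always prefixes of the perceptibility order, so a
# contrast in the worst context entails contrasts in all better ones --
# the implicational prediction that Khasi (18) will be seen to falsify.
print(licensed_contexts(0))  # all four contexts
```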

Given that the ranking of constraints projected from a phonetically grounded perceptibility scale has been argued to be universal (chapter 98: speech perception and phonology), such a model makes strong predictions about the typology of laryngeal neutralization patterns. For example, it predicts that a language with a voicing contrast in word-initial preconsonantal position must also allow a voicing contrast in word-initial, intervocalic, and word-final positions. The Mon-Khmer language Khasi, spoken in the Assam province of India, shows that such a strong prediction does not obtain. As illustrated in (18), Khasi contrasts voiced and voiceless plosives in word-initial preconsonantal position.

(18)

Voicing contrast in initial clusters in Khasi (Henderson 1992: 62)

bti             'to lead by the hand'    pdot     'throat'
bthi            'sticky'                 pdeng    'middle'
dkar            'tortoise'               tbian    'floor'
dkhar           'plainsman'              tba      'to feel'
dpei            'ashes'                  pjah     'cold'
bshad [bʃaːt]   'civet'                  bdi      'twenty'

Using evidence from a Frøkjær-Jensen combined oscilloscope and mingograph, Henderson (1992) confirmed the voicing contrast in word-initial preconsonantal position and ruled out the possibility of a svarabhakti vowel between the two stops. What is of interest here is the fact that in syllable-final position there is no distinction between voiced and voiceless stops; final stops are unreleased and frequently accompanied by simultaneous glottal constriction (Henderson 1967: 567). Since a voicing contrast is allowed word-initially before another obstruent, a highly impoverished environment for the maintenance of a voicing contrast, a cue-based approach that maintains the universality of voicing perceptibility necessarily predicts that a voicing contrast should also be maintained in less impoverished environments, such as post-vocalic word-final position.

It is worth noting that counterexamples of this sort do not undermine the validity of a cue-based approach to contrast reduction per se, since the assumption of the universality of cue perceptibility is logically independent of the claim that cue maintenance is the driving force behind contrast maintenance and reduction (see Hume and Johnson 2001b for discussion of the language-specificity of speech perception).

Some cue-based theorists eschew the notion of markedness at the level of the individual segment or feature, and favor instead a contrast maximization account. Dubbed "Dispersion Theory" (Flemming 1995, 1996; Ní Chiosáin and Padgett 2001; Padgett 2003) after Lindblom's (1986, 1990) Theory of Adaptive Dispersion, such a theory of contrast maintains that the selection of a phonological contrast is subject to three functional goals (see Martinet 1952, 1955, 1964 for early formulations of these functional ideas; cf. Silverman 1996, 2004, 2006):

(19)

a. Maximize the distinctiveness of contrasts.
b. Minimize articulatory effort.
c. Maximize the number of contrasts.


From this perspective, the dispreference for a sound x is conceptualized as a dispreference for the sub-maximally distinct contrasts between x and other sounds in the particular sound system. As schematized in the ranking in (20), a contrast is formally neutralized in some context if it cannot be realized with a distinctiveness of d without violating *Effort, an effort-minimization constraint penalizing some articulation.

(20)

MinDist=d, *Effort >> MaximizeContrasts

In Belarusian, for example, a five-vowel inventory /i e a o u/ is observed in stressed syllables. In unstressed syllables, /e a o/ reduce to [a] or [ɐ], depending on the position of the vowel relative to the stressed syllable (Barnes 2002: 65). Flemming (2004) argues that this type of vowel reduction is motivated by difficulties in producing distinct F1 contrasts in unstressed positions. Specifically, increasing difficulty in producing a low vowel as a result of vowel duration shortening in unstressed positions leads to the raising of short low vowels; the smaller range of the F1 dimension for distinguishing F1 contrasts then leads to the selection of a smaller number of contrasts. Flemming captures this intuition in terms of the ranking in (21).

(21)

Unstressed vowels are short, *ShortLowV, MinDist=F1:3 >> MaximizeContrasts >> MinDist=F1:4

The constraint Unstressed vowels are short requires unstressed vowels to be shorter than stressed ones. This constraint will be omitted in the subsequent discussion, since it is assumed to be undominated, so that no vowel systems violating it will be permitted in the present context. *ShortLowV (abbreviated *a) is an effort-minimization constraint that penalizes short low vowels. The MinDist=Y:X constraints are satisfied by contrasting sounds that differ by at least X distance on the Y dimension. The highest-ranking MinDist constraint that outranks MaximizeContrasts sets the threshold distance, and the optimal inventory is the one that packs the most contrasting vowels onto the relevant dimension (here F1) without any pair being closer than this threshold. With the relative positioning of vowels on the F1 dimension stated in (22), Belarusian's three-way vowel height distinction in stressed syllables is predicted in (23). Since the present evaluation concerns only distinctions in vowel height, the back counterparts of vowels in the inventory candidate set are left out for ease of reference. The tableau in (23) shows that a four-way height distinction is suboptimal (23c), because its vowels are not distinct enough according to the constraint MinDist=F1:3. Reducing the height inventory too much (23a) results in excessive contrast reduction, thus incurring more MaximizeContrasts violations relative to the optimal inventory set (23b).

(22)

Relative positions of vowels on the F1 dimension (7 = lowest, 1 = highest):

F1:   7    6    5    4    3    2    1
      a    ɐ    ɛ    e    ə    ɪ    i

(23)

Belarusian: vowels in stressed syllables

                       *a   MinDist=F1:3   MaximizeContrasts   MinDist=F1:4
      a. ˈi ˈa                                  ✓✓!
   ☞  b. ˈi ˈe ˈa                               ✓✓✓                 **
      c. ˈi ˈe ˈɛ ˈa         *!                 ✓✓✓                ****

In unstressed syllables, the constraint *ShortLowV (*a) becomes applicable. It rules out the candidate vowel inventory [i e a], because of the presence of [a]. The three-way height distinction cannot be maintained even if the low vowel [a] is avoided, the distance between [e] and [ɐ] being insufficient, due to the high-ranking MinDist=F1:3 constraint. The winning candidate has only two vowel heights, which fares worse on MaximizeContrasts, but satisfies the higher-ranked minimum distance requirements.

(24)

Belarusian: vowels in unstressed syllables

                  *a   MinDist=F1:3   MinDist=F1:4   MaximizeContrasts
   ☞  a. i ɐ                                               ✓✓
      b. i e ɐ          *!                **               ✓✓✓
      c. i e a    *!                      **               ✓✓✓
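The inventory comparisons in (23) and (24) can be replicated mechanically. The script below is a simplified sketch of the dispersion logic, not Flemming's own implementation: the F1 values follow the scale reconstructed in (22), and the function names are my own. It enumerates inventories, discards those whose closest pair falls below the dominant MinDist threshold, and then maximizes the number of contrasts.

```python
from itertools import combinations

# F1 positions from (22): 7 = lowest vowel, 1 = highest.
F1 = {"a": 7, "ɐ": 6, "ɛ": 5, "e": 4, "ə": 3, "ɪ": 2, "i": 1}

def closest_pair(inventory):
    return min(abs(F1[x] - F1[y]) for x, y in combinations(inventory, 2))

def best_inventory(threshold, ban_low=False):
    """Largest inventory whose closest pair is at least `threshold` apart
    on F1 (the dominant MinDist constraint); `ban_low` models the effort
    constraint *ShortLowV of (21). Ties go to the widest spacing (the
    lower-ranked MinDist constraint)."""
    pool = [v for v in F1 if not (ban_low and v == "a")]
    candidates = [c for n in range(2, len(pool) + 1)
                  for c in combinations(pool, n)
                  if closest_pair(c) >= threshold]
    return max(candidates, key=lambda c: (len(c), closest_pair(c)))

print(best_inventory(threshold=3))                # ('a', 'e', 'i'): stressed, cf. (23)
print(best_inventory(threshold=3, ban_low=True))  # ('ɐ', 'i'): unstressed, cf. (24)
```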

Within Dispersion Theory, the objects of analysis are systems of oppositions. The notion of contrast reduction is thus given genuine expression in such an analysis. Whereas most other approaches view mergers and neutralization as the results of the application of constraints or rules that prevent the expression of individual segments or features, Dispersion Theory holds that mergers and neutralization follow from the number of oppositions a language makes available in different contexts. It should be noted that, because of its insistence on looking at systems of contrast from the perspective of the language as a whole, Dispersion Theory raises questions regarding how phonological derivation is implemented in such a model (Boersma 1998: 361; but see Ní Chiosáin and Padgett 2001 and Padgett 2003 for a response to this problem).

This section has reviewed major theories of contrast reduction, showing that proposals range from completely structure-dependent accounts to theories that embrace the full phonetic substance of sound patterns. The debate over what a proper theory of contrast reduction looks like, however, might ultimately rest on resolving a more fundamental question – does synchronic contrast reduction truly exist? This is the topic of the next section.

4 Do real synchronic mergers and neutralization exist?

Until recently, most theories of phonology have assumed some form of lexical minimality (the minimization of lexically stored information; Chomsky and Halle 1968: 381; Steriade 1995: 114; see also chapter 1: underlying representations) and feature economy (the minimization of the ratio of features to segments in an "alphabet"; Clements 2003; see also chapter 17: distinctive features). In early generative phonology, for example, the underlying alphabet is the minimal sound set needed to express surface differences between distinct morphemes; at the level of the underlying representation, no allophonic variants are present. Theories differ in the number of levels of representation allowed (e.g. Lexical Phonology and Morphology (Kiparsky 1982, 1985; Mohanan 1982) recognizes three levels of representation: underlying, lexical, and phonetic) and the degree of minimality assumed at each level. Common to these early theories of phonology, however, is the premise that, out of the vast sea of phonetic signals, only a small subset of phonetic properties are contrastive in a given language (Sapir 1933; Trubetzkoy 1939; Jakobson et al. 1952; Hockett 1955; Chomsky and Halle 1968; Kiparsky 1982, 1985). Contrast is encoded in terms of a difference between +Fi and −Fi for some finite set of features Fi, and contrast reduction corresponds to the elimination of this difference (i.e. the outcome of such a reduction is either +Fi, −Fi, or null).

Non-distinctive phonetic properties are treated in one of two ways. To begin with, features that do not distinguish lexical items may be underspecified in the lexical entries (e.g. Archangeli 1988; Pulleyblank 1995; Steriade 1995; Clements 2003; chapter 7: feature specification and underspecification). The feature [voice] in sonorants, for example, is non-contrastive, and thus redundant, in languages such as English, which do not distinguish between voiced and voiceless sonorants (chapter 8: sonorants; chapter 13: the stricture features). Sonorants are underspecified for voicing, i.e. sonorants bear no value for the feature [voice]. Such an assumption of non-contrastive feature underspecification has important theoretical consequences for the treatment of transparency effects in the phonology of the feature [voice]. For example, as seen earlier, nasals do not induce regressive voicing assimilation in Dutch, but voiced obstruents do, suggesting that only voiced obstruents are underlyingly specified for the feature [voice]. The other treatment of non-distinctive phonetic properties is to exclude them from the feature pool altogether. For example, vowels are longer before voiced stops than before voiceless ones in American English (bat [bæt] vs. bad [bæːd]). Peterson and Lehiste (1960) suggest that the ratio of vocoid duration before voiceless consonants to that before voiced consonants in American English is 2 : 3. Such a difference in vocoid duration, which covaries with the voicing of the following consonant, is generally dismissed as the effect of automatic phonetics, and thus assumed to play no role in any phonological analysis; features such as [slightly long] would not be part of the universe of phonological features.9

As Labov et al. (1991: 38) point out, the assumptions that "contrasts were discrete and binary, that there was no such thing as a small difference in sound, that production and perception were symmetrical, and that introspections were reliable" have received increased scrutiny in recent years (chapter 89: gradience and categoricality in phonological theory). For example, Dispersion Theory's admission of phonological constraints that regulate features along scalar dimensions, rather than in terms of binary oppositions, already foreshadows the move away from a discrete and binary notion of contrasts (e.g. MinDist=Y:X constraints evaluate distances along some phonetic dimension, such as F1). Mounting evidence for near mergers and incomplete neutralization raises further questions about the validity of the abovementioned assumptions. This is the topic of the next section.

9 While these sub-featural cues might not be distinctive, they may nonetheless have enhancing functions (Stevens and Keyser 1989; Stevens et al. 1986; Keyser and Stevens 2001).

4.1 Near mergers and incomplete neutralization

Near merger describes the situation where speakers consistently report that two classes of sounds are the same, yet consistently differentiate them in production at a better than chance level. Labov et al. (1972: ch. 6), for example, report that speakers in New York City differentiate words such as source and sauce in production, but report no distinction between them in perception. Similar near mergers have been reported in other varieties of English (e.g. fool and full in Albuquerque (Di Paolo 1988); too vs. toe and beer vs. bear in Norwich (Trudgill 1974); line vs. loin in Essex (Labov 1971; Nunberg 1980); meat vs. mate in Belfast (Milroy and Harris 1980; Harris 1985)). Near mergers are not restricted to segmental contrasts. Yu (2007b), for example, demonstrates that derived mid-rising tones in Cantonese show a small but statistically significant difference in F0 from underived mid-rising tones.

Similar to near mergers, incomplete neutralization refers to reports of small but consistent phonetic differences between segments that are supposedly neutralized in certain environments. Flapping is often cited as a neutralizing phonological alternation in American English; underlying /t/ and /d/ surface as dental flaps or taps when followed by an unstressed vowel (chapter 113: flapping in american english). Word-final and preconsonantal obstruent devoicing is another classic example of a neutralizing sound pattern (chapter 69: final devoicing and final laryngeal neutralization). Incomplete neutralization has been reported outside the domain of obstruent voicing as well. In Eastern Andalusian Spanish, for example, the combined effect of word-internal coda aspiration and the gemination of the consonant following the aspirated coda leads to potential neutralization (e.g. [kahtːa] for both /kasta/ 'caste' and /kapta/ 's/he captures'). Gerfen (2002), however, reports that aspirating an /s/ results in a longer duration of aspiration, while aspirating a /p/ or /k/ results in longer medial consonant gemination (see also Gerfen and Hall 2001). Bishop (2007) found that listeners make use of the length of the consonant following aspiration as a cue for making phonemic decisions regarding the nature of the underlying coda.

In many languages, an epenthetic stop can occur within nasal–fricative or heterorganic nasal–stop clusters (e.g. English dreamt [dɹemt] ~ [dɹempt]; prince [pɹɪns] ~ [pɹɪnts]). Several studies have found that such epenthetic stops are phonetically different from underlying stops in the same environment. Fourakis and Port (1986), for example, found that underlying /t/ in words like prints [pɹɪnts] is significantly longer, and the neighboring nasal significantly shorter, than epenthetic [t] in words like prince. Dinnsen (1985), citing Rudin (1980), reports that long vowels deriving from underlying /VgV/ sequences in Turkish are 13 percent longer than the underlying long vowel /Vː/. Simonet et al. (2008) report that the so-called /r/ ~ /l/ neutralization in post-nuclear position in Puerto Rican Spanish (e.g. /ˈarma/ → [ˈalma] 'weapon' vs. /ˈalma/ → [ˈalma] 'soul') is incomplete.


Based on measurements of the duration of vowel + liquid sequences and examination of formant values and trajectories, Simonet et al. (2008) conclude that, while post-nuclear /r/ is similar to post-nuclear /l/, there nonetheless exist systematic durational and spectral differences, suggesting that the two liquids have not completely merged.

Since traditional theories of the phonetics–phonology interface assume that phonological representations in the lexicon are categorical, contrastive elements, and since the phonetic implementation component computes the degree and timing of articulatory gestures, which are gradient and variable, the discovery of near mergers and incomplete neutralization presents a curious conundrum. For a given underlying distinction +F and −F, how can an output −F that corresponds to an underlying +F display a systematically different surface phonetic realization from an output −F that corresponds to an underlying −F, when information flow is supposed to be strictly unidirectional? In such a model, no articulatory plan can look backward to phonological encoding, nor can phonological encoding look back to the lexical level. Nor can lexical information influence phonetic implementation directly, bypassing the level of phonological encoding. On this view, the categorical form of a lexeme wholly determines the phonetic outcome; phonetic variations on the surface are considered artifacts of the context or performance-induced anomalies.

In light of such conceptual difficulties, many have sought to explain away the observed sub-phonemic phonetic differences as a consequence of orthographic influence or as variation in speaking style. For example, it has been found that the less the experimental design emphasizes the role of orthography, the smaller the durational effects (Fourakis and Iverson 1984; Jassem and Richter 1989). Port and Crawford (1989) found that discriminant analysis to classify productions by underlying final voicing was most successful (78 percent correct) when speakers dictated the words, but least successful (55 percent correct) when target words were embedded in sentences that did not draw attention to the minimal pairs (whether read or repeated orally).

But not all cases of near mergers and incomplete neutralization can be attributed to performance factors. Warner et al. (2004), for example, found sub-phonemic durational differences in the case of final devoicing in Dutch, even when possible orthographic influence was controlled for as a confound. Yu (2007b) found incomplete merger of underived and morphologically derived mid-rising tones in Cantonese, a language whose orthography does not indicate tone. Further support for the existence of a suspended contrast comes from the fact that speakers appear to have some access to subtle phonetic differences. As noted earlier, Bishop (2007) found that Andalusian Spanish speakers can make use of subtle closure duration differences to recover underlying coda consonants. In the case of final devoicing in Dutch, listeners not only can perceive durational differences (Warner et al. 2004), they even use these sub-phonemic distinctions to hypothesize which past tense allomorph nonce forms would take (Ernestus and Baayen 2003; chapter 99: phonologically conditioned allomorph selection).
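The discriminant-analysis approach of Port and Crawford can be illustrated with a toy replication: fit a linear discriminant to duration cues and ask how often underlying voicing is recovered. The data below are synthetic, chosen only to mimic a small, consistent durational difference; they are not Port and Crawford's measurements.

```python
# Toy illustration of classifying productions by underlying final voicing
# from duration cues (cf. Port & Crawford 1989). Synthetic data: /d/-final
# tokens get slightly longer vowels and shorter closures than /t/-final ones.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 100
# Columns: vowel duration (ms), closure duration (ms).
voiced    = np.column_stack([rng.normal(165, 15, n), rng.normal(70, 10, n)])
voiceless = np.column_stack([rng.normal(155, 15, n), rng.normal(78, 10, n)])
X = np.vstack([voiced, voiceless])
y = np.array([1] * n + [0] * n)  # 1 = underlying /d/, 0 = underlying /t/

# Mean cross-validated accuracy: well above chance but far from perfect,
# the statistical signature of an incompletely neutralized contrast.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"classification accuracy: {acc:.2f}")
```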

4.2 Approaches to sub-phonemic phonetic differences

Sub-phonemic distinctions have been analyzed as the result of paradigm uniformity among morphologically related neighbors (e.g. phonetic analogy; Steriade 2000; Yu 2007a; chapter 83: paradigms; chapter 87: neighborhood effects).


Steriade (2000), for example, argues that grammars prefer words within a paradigm to be uniform.10 Steriade extends this paradigm uniformity preference to the phonetic level. French, for example, has an optional schwa deletion which creates ostensibly homophonous strings (e.g. bas retrouvé [baʁətʁuve] 'stocking found again' → bas r'trouvé [baʁtʁuve] vs. bar trouvé [baʁtʁuve] 'bar found'). Various studies have shown that the consonant to the left of the deleted schwa's syllable maintains phonetic qualities that would only be expected if the schwa were still present (Rialland 1986; Fougeron and Steriade 1997). Steriade (2000) interprets such unexpected phonetic differences as the results of phonetic analogy: forms with schwa deletion are influenced phonetically by the corresponding schwa-full forms (e.g. /r/ in bas r'trouvé [baʁtʁuve] takes on onset-like articulation from the /r/ in the related phrase bas retrouvé [baʁətʁuve]).

Van Oostendorp (2008) argues that incomplete neutralization in final devoicing can be captured within a Containment model of OT in terms of a turbid representation of phonological outputs (Goldrick 2001). Output structures are characterized in terms of two types of relations: a Projection relation, an abstract structural relationship holding between a segment and a feature (represented by ↑ in (25)), and a Pronunciation relation, an output relationship that holds between the feature and the segment and describes the output realization of a structure (represented by ↓ in (25)). On this conception, a three-way distinction obtains between segments that are underlyingly voiceless (i.e. they lack the feature [voice]), segments that are underlyingly voiced and pronounced voiced, and segments that are underlyingly voiced but are not realized as voiced on the surface (25).

(25)

A three-way voicing distinction using turbidity theory

a.  tat             b.  tad             c.  tat
                        ↑↓                  ↑
                      [voice]             [voice]

The selection of a representation like (25c) would be determined by the interaction between markedness constraints that disfavor coda voicing and the constraint Reciprocity(X, F), which holds that if a segment X entertains a projection relation with a feature F, then F must entertain a pronunciation relation with the segment X. Because of their structural differences, (25a)–(25c) will show different surface phonetic realizations.

These phonological approaches assume that cases of incomplete neutralization are in fact complete at the phonological level and that the output segment is phonologically unvoiced. The sub-phonemic differences observed would either be due to analogical influences from related forms that retain voicing or to covert structural differences among outputs. Is a complete neutralization interpretation of incomplete neutralization a necessity, or even desirable? The answer to this question hinges on the conception of the phonetics–phonology interface and, specifically, on the nature of allophony. What should be considered extrinsic allophones (i.e. allophones that are phonologically governed), and what should be considered intrinsic (i.e. those introduced by phonetic variability; Wang and Fillmore 1961; Ladefoged 1971; Tatham 1971)?


Must extrinsic allophones be governed by changes in discrete distinctive feature values, or can extrinsic allophones be gradient? The next section offers an alternative interpretation of near mergers and incomplete neutralization, which appeals to the notion of a covert contrast.

10 A paradigm is defined here as "a set of words sharing a morpheme, e.g. {bomb, bomb-ing, bomb-ard, . . . }, or a set of phrases sharing a word, e.g. {bomb, the bomb, . . . }" (Steriade 2000).

4.3 Sub-phonemic distinctions as covert contrasts

Near mergers and incomplete neutralization are problematic from the point of view of the model of the interface between phonetics and phonology sketched above, because, if the phonetic implementation component accounts only for variation due to biomechanical and aerodynamic factors, it is anomalous, to say the least, that speakers of a language with [voice] neutralization vary the realization of the neutralized sounds in accordance with the feature value of their non-neutralized counterparts. That model of the phonetics–phonology interface is arguably simplistic, however. Kingston and Diehl (1994) articulate a model of the phonetics–phonology interface that affords the phonological component greater control over the range of variability in the phonetic implementation of contrasts. Elastoinertial, biomechanical, aerodynamic, psychoacoustic, and perceptual constraints delimit what a speaker (or listener) can do, but not what they must do. Within this conception of the phonetics–phonology interface, a phonemic contrast is taken to be "any difference in the feature content or arrangement of an utterance's phonological representation which may convey a difference in semantic interpretation" and allophones are "any phonetic variant of a distinctive feature specification or arrangement of such specification that occurs in a particular context" (1994: 420, fn. 2).

To illustrate this framework more concretely, consider Kingston and Diehl's summary of the phonetic variants of English stops contrasting for [voice] (see also Silverman 2004). Table 80.1 illustrates the fact that the contrastive feature [+voice] in English shows great variability in its phonetic realization. In word-initial position, for example, [+voice] stops are often realized as voiceless unaspirated stops, even when the preceding word ends in a vowel (Caisse 1982; Docherty 1989). Kingston and Diehl (1994) interpret such data as showing that speakers choose between two active articulations in producing initial [+voice] stops in English: delay glottal closure until the stop release, or close the glottis but expand the oral cavity to overcome the difficulty of initiating voicing.

Such controlled variation is made possible by the fact that there are typically multiple, auditorily independent correlates that serve as distinct bases for a minimal phonological distinction. As noted in Stevens and Blumstein (1981), [+voice] consonants are characterized by the "presence of low-frequency spectral energy or periodicity over a time interval of 20 to 30 msecs in the vicinity of the acoustic discontinuity that precedes or follows the consonantal constriction interval" (1981: 29). This low-frequency property, as Kingston and Diehl (1994) call it, has multiple supporting subproperties, such as voicing during the consonant constriction interval, a low F1 near the constriction interval, and a low F0 in the same region, as well as enhancing properties such as the duration ratio between a consonant and its preceding vowel. These properties do not all surface in all positions. Crucially, while [+voice] stops do not show prevoicing in word-initial position, the [voice] contrast is nonetheless maintained, because [−voice] stops tend to have longer VOT, stronger burst energy, and higher F1 and F0 following the consonant constriction interval.


Table 80.1 Summary of the phonetic variants of English stops that contrast for [voice]

                                       [+voice]                    [−voice]

Utterance-initial or pre-tonic         short lag VOT               long lag VOT
                                       F1 lower                    F1 higher
                                       F0 lower                    F0 higher
                                       weaker burst                stronger burst

Intervocalic or post-tonic             closure voicing             no closure voicing
                                       short closure               longer closure
                                       longer preceding vowel      shorter preceding vowel
                                       F1 lower                    F1 higher
                                       F0 lower                    F0 higher

Utterance-final and post-vocalic       longer preceding vowel      shorter preceding vowel
                                       closure voicing possible    no closure voicing
                                       short closure               longer closure
                                       F1 lower                    F1 higher
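Table 80.1 can be read as a claim about cue redundancy: in each position several auditorily independent dimensions co-vary with [voice], so the contrast can survive the loss of any single correlate. A schematic rendering of that reading (the data structure below is my own simplification of the table):

```python
# The cue dimensions of Table 80.1 on which English [+voice] and
# [-voice] stops differ, by position (a simplification of the table).
CUES = {
    "utterance-initial/pre-tonic": ["VOT", "burst energy", "F1", "F0"],
    "intervocalic/post-tonic": ["closure voicing", "closure duration",
                                "preceding vowel duration", "F1", "F0"],
    "utterance-final/post-vocalic": ["preceding vowel duration",
                                     "closure voicing", "closure duration",
                                     "F1"],
}

def remaining_cues(position, unavailable):
    """Cues still distinguishing [voice] when some correlates are lost."""
    return [c for c in CUES[position] if c not in unavailable]

# Final position without closure voicing: the contrast can still ride on
# preceding vowel duration, closure duration, and F1.
print(remaining_cues("utterance-final/post-vocalic", {"closure voicing"}))
```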

From this perspective on the phonetics–phonology interface, the sub-phonemic differences observed in near mergers and incomplete neutralization are no different in kind from those observed between allophones appearing in different phonetic contexts. As noted in Steriade (1997), the percept of voicing hinges on a multitude of acoustic cues: burst amplitude, closure duration, voicing during the closure period, voice onset time, and vowel onset and offset. Phonetic cues that support a [voice] contrast in word-final position are intrinsically impoverished relative to the cues available in word-initial and word-medial positions. Nonetheless, many languages maintain the contrast in word-final position because there remain sufficient cues to differentiate the underlying phonological contrast. (See chapter 113: flapping in american english for additional evidence bearing on this interpretation.)

The interpretation of near mergers and incomplete neutralization advocated here suggests that traditional methods of introspection and field elicitation may not be adequate for detecting covert contrast (chapter 96: experimental approaches in theoretical phonology). Self-introspection faces inherent problems of analyst bias and thus should not be taken as a definitive source of information; the phonetician's ears are, after all, human ears. Commutation tests are essentially armchair psycholinguistic tasks that require language consultants to perform a same–different task, with minimal control for potential confounds. Subject responses are inherently probabilistic; analysts insisting on dichotomizing a continuous function will find confident responses when the samples have a wide separation in the sample space, while samples that straddle regions of great overlap, as in the case of near mergers and incomplete neutralization, will elicit more ambiguous responses. Contrasts not detected by linguists using traditional methods of elicitation may nonetheless be detected by native speakers, as demonstrated in the laboratory studies reviewed above.
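The point about probabilistic responses can be put schematically: if the probability of a "different" response grows smoothly with the acoustic separation of two samples, dichotomizing it will look categorical for well-separated pairs and unstable for near-merged ones. A hypothetical sketch (the logistic form and its parameters are invented purely for illustration):

```python
import math

# Hypothetical psychometric function for a same-different judgment:
# response probability rises smoothly with acoustic separation (in
# arbitrary just-noticeable-difference units).
def p_different(separation, slope=1.2):
    return 1 / (1 + math.exp(-slope * separation))

print(round(p_different(4.0), 2))  # well-separated pair: confident "different"
print(round(p_different(0.5), 2))  # near-merged pair: close to chance
```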

4.4 Covert contrasts as systems in transition

The existence of covert contrast is readily understandable from the perspective of sound change and the phonologization of phonetic variation (chapter 93: sound change). In his seminal work on phonologization, Hyman (1976) conceptualizes the emergence of phonemic tonal distinctions as a three-stage process. At Stage 1, a language displays physiologically based, consonantal voicing-induced pitch perturbations on the neighboring vowel. A language reaches Stage 2 when pitch perturbation becomes exaggerated to such an extent that the pitch variation cannot be attributed entirely to the physiological properties of the preceding consonant's voicing (e.g. *[pa] > [pá] and *[ba] > [bà]). The transition from Stage 1 to Stage 2 – when an intrinsic, thus unintended, variation in pitch associated with consonantal realization becomes an extrinsic feature of the vowel – is phonologization. A language reaches Stage 3 when the voicing distinction is lost completely, and the pitch distinction on vowels becomes the sole feature that signals a meaning difference between words. That is, the language has undergone the phonemicization of tone (i.e. */pa/ > [pá] and */ba/ > [pà]).

From the perspective of this model of sound change, covert contrast represents a language at Stage 2, possibly in transition to Stage 3. That is, the old contrast (e.g. obstruent voicing) has not completely disappeared (i.e. been neutralized), but the new contrast (e.g. a tonal distinction) has not fully emerged either. A language at Stage 2 is in principle unstable. As Hyman points out, "accompanying every phonologization is a potential dephonologization" (Hyman 1976: 410). The emergence of a tonal distinction as a result of the phonologization of intrinsic pitch perturbation of obstruent voicing entails the eventual destruction (i.e. neutralization) of the original voicing contrast.

The evolution of the covariation between vowel duration and consonant voicing provides an instructive example of phonologization and its connection to the emergence of covert contrast. As reviewed in Solé (2007), languages differ in the amount of control speakers have over the maintenance of this sub-phonemic duration difference. Solé (2007) found that English speakers actively maintain durational differences before voiced and voiceless stops, regardless of speaking rate, while speakers of Catalan and Arabic do not exhibit similar control over such sub-phonemic duration differences. Her findings suggest that English has already partially phonologized the effect of consonant voicing on vowel duration, while Catalan and Arabic have not. Recall that one commonly observed feature of the incomplete neutralization of final devoicing is a vowel duration difference. Following Hyman's dictum that the phonologization of one feature carries the seeds of the destruction of another, the phonologization of a sub-phonemic vowel duration difference entails an eventual loss of the voicing contrast in the following stops.

The reasons why such a correlation exists are still a matter of debate. Two factors are noteworthy in this context. First, the longer vowel before voiced stops and the shorter vowel before voiceless stops are, strictly speaking, in complementary distribution. Likewise, post-vocalic voiced and voiceless stops are also in complementary distribution, since they do not appear in the same context. This type of analytic ambiguity (i.e. between vowel duration and consonantal voicing) is typical of a language undergoing phonologization. Second, research on auditory category learning has shown that listeners are not only sensitive to the distributional information of category cues, but also acquire unidimensional contrasts more readily than multidimensional ones (Goudbeek 2006; Clayards 2008; Clayards et al. 2008; Goudbeek et al. 2008).


Such results suggest that, all else being equal, listeners will rely more heavily on a single cue for category identification even when multiple cues are available in the signal. For example, as voicing during stop closure becomes less prominent as a feature of voiced stops in final position, vowel length becomes the more reliable contrastive feature. When voicing in closure ceases to be a feature of final obstruents altogether, a contrast in vowel length is expected to emerge.
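One way to think about this cue trade-off is in terms of the relative reliability of each dimension: as the distributions of closure voicing for underlying /t/ and /d/ converge, vowel duration becomes the better single predictor, and a unidimensionally biased listener will shift to it. A schematic sketch (all numbers are invented to illustrate the trade-off, not measured data):

```python
# Schematic cue reliability: separation between /t/- and /d/-final token
# distributions on each dimension, measured as d' (mean difference over
# a pooled standard deviation). Numbers are invented for illustration.

def d_prime(mean_t, mean_d, sd):
    return abs(mean_d - mean_t) / sd

# Early Stage 2: closure voicing is still robust; it is the better cue.
print(d_prime(mean_t=5, mean_d=45, sd=15))     # closure voicing (% voiced): ~2.7
print(d_prime(mean_t=150, mean_d=165, sd=12))  # vowel duration (ms): ~1.3

# Late Stage 2: closure voicing erodes; vowel duration is now the better
# cue, so the contrast is expected to re-lodge on vowel length.
print(d_prime(mean_t=5, mean_d=12, sd=15))     # closure voicing: ~0.5
print(d_prime(mean_t=145, mean_d=175, sd=12))  # vowel duration: ~2.5
```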

Friulian, a Romance language spoken in northeastern Italy, provides an instructive example of this type of cue trade-off in phonologization. In Friulian, vowel length (chapter 20: the representation of vowel length) is only distinctive in a stressed word-final syllable closed by a single consonant.

(26)

Vowel length distinction in Friulian (Baroni and Vanelli 2000: 16)

a.  [ˈlaːt]    'gone (masc)'               b.  [ˈlat]   'milk'
    [ˈbruːt]   'brother, mother-in-law'        [ˈbrut]  'ugly'
    [fiˈniːt]  'finished (masc)'               [ˈfrit]  'fried (masc)'
    [ˈpaːs]    'peace'                         [ˈpas]   'step'
    [ˈfuːk]    'fire'                          [ˈtɔk]   'piece'

Stressed vowels are always phonologically long before [r] ([ˈlaːrk] 'large (masc)') and always short when they are not in the last syllable of a word ([kanˈtade] 'sung (fem)'), when they occur in a final open syllable ([kuˈsi] 'so'), and when they are in a final syllable closed by a consonant cluster, nasal, or affricate ([ˈgust] 'taste', [ˈmaŋ] 'hand', [ˈbratʃ] 'arm'). Of particular relevance here is the fact that vowel length in word-final syllables before obstruents is predictable: the stressed vowel is long if the following consonant is realized as voiced in intervocalic position (27a); if the following consonant is voiceless intervocalically, the stressed vowel is short (27b).

(27)

Vowel length and consonant voicing (Baroni and Vanelli 2000: 17)

a.  [ˈlaːt]    'gone (masc)'      [ˈlade]      'gone (fem)'
    [fiˈniːt]  'finished (masc)'  [fiˈnide]    'finished (fem)'
    [ˈpeːs]    'snow'             [peˈza]      'to snow'
    [ˈfuːk]    'fire'             [fogoˈlaːr]  'fireplace'

b.  [ˈlat]     'milk'             [laˈte]      'to breastfeed'
    [ˈpas]     'pass'             [paˈsa]      'to pass'
    [paˈtaf]   'slap'             [pataˈfa]    'to slap'
    [ˈtɔk]     'piece'            [tuˈkut]     'little piece'

Based on acoustic evidence, Baroni and Vanelli (2000) establish that the long vowels are more than twice as long as the short ones and that word-final obstruents are indeed voiceless (i.e. there is no voicing during closure). While final [t] corresponding to medial [d] is significantly shorter than final [t] corresponding to medial [t], this difference is only observed after certain vowel qualities. Their findings suggest that, while Friulian final obstruent devoicing is incomplete (i.e. there remains some difference between underlying /d/ and underlying /t/ in final position), this difference is mainly carried by the closure duration of the obstruent, and only in very restricted contexts. On the other hand, a full-blown vowel quantity difference has emerged in its place.


The salience of this vowel length contrast is exemplified by the behavior of vowel length in loanword adaptation. Friulian does not preserve the consonantal length contrast in borrowed Italian words. However, longer vowels before single consonants in Italian are treated as long in Friulian if they occur in word-final position: [impjeˈgaːt] 'clerk (masc)', from Italian [impjeˈgaːto]. When such long vowels occur in word-internal position, they become short and the following obstruent is voiced ([impjeˈgade] 'clerk (fem)'). When borrowed short vowels occur in word-internal position, the obstruents remain voiceless (e.g. [aˈfit]/[afiˈtut] 'rent/little rent'; Italian [aˈfːitːo]). This loanword evidence suggests that Friulian has restructured its system into one with a limited vowel length contrast; voicing variation has become secondary to the vowel length difference.

The possibility of a covert contrast in purported cases of neutralization raises questions about the existence of genuine instances of neutralization. Kim and Jongman (1996), for example, report that coda neutralization in Korean (where word-final coronal obstruents such as /t tʰ s/ are all phonetically realized as [t]) is complete (chapter 111: laryngeal contrast in korean). Based on both production and perceptual data, they conclude that complete neutralization is observed despite the fact that Korean orthography distinguishes between the different underlying consonants. The difference between genuine neutralization and covert contrast might also be related to the nature of the evidence supporting the claim of neutralization. Evidence for neutralization may come from distributional information alone (e.g. laryngeal neutralization in coda position in Cantonese) or may additionally be supported by morphological means (e.g. obstruent devoicing in Dutch). All reported cases of incomplete neutralization pertain to morphologically sensitive neutralization.

In sum, the question of how pervasive covert contrasts and complete neutralization are is ultimately an empirical one. More systematic phonetic and psycholinguistic investigations are needed to answer this fundamental question in contrast reduction research.

REFERENCES

Archangeli, Diana. 1988. Aspects of underspecification theory. Phonology 5. 183–207.
Baker, Brett. 1999. Word structure in Ngalakgan. Ph.D. dissertation, University of Sydney.
Barnes, Jonathan. 2002. Positional neutralization: A phonologization approach to typological patterns. Ph.D. dissertation, University of California, Berkeley.
Baroni, Marco & Laura Vanelli. 2000. The relations between vowel length and consonantal voicing in Friulian. In Lori Repetti (ed.) Phonological theory and the dialects of Italy, 13–44. Amsterdam & Philadelphia: John Benjamins.
Bauer, Robert S. & Paul K. Benedict. 1997. Modern Cantonese phonology. Berlin & New York: Mouton de Gruyter.
Beckman, Jill N. 1998. Positional faithfulness. Ph.D. dissertation, University of Massachusetts, Amherst.
Bishop, Jason. 2007. Incomplete neutralization in Eastern Andalusian Spanish: Perceptual consequences of durational differences involved in s-aspiration. In Trouvain & Barry (2007), 1765–1768.
Boersma, Paul. 1998. Functional phonology: Formalizing the interactions between articulatory and perceptual drives. The Hague: Holland Academic Graphics.
Bueno-Holle, Juan. 2009. Isthmus Zapotec tone and stress. M.A. thesis, University of Chicago.


Caisse, Michelle. 1982. Cross-linguistic differences in fundamental frequency perturbation induced by voiceless unaspirated stops. M.A. thesis, University of California, Berkeley.
Cho, Young-Mee Yu. 1990. Parameters of consonantal assimilation. Ph.D. dissertation, Stanford University.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clayards, Meghan. 2008. The ideal listener: Making optimal use of acoustic-phonetic cues for word recognition. Ph.D. dissertation, University of Rochester.
Clayards, Meghan, Michael K. Tanenhaus, Richard N. Aslin & Robert A. Jacobs. 2008. Perception of speech reflects optimal use of probabilistic speech cues. Cognition 108. 804–809.
Clements, G. N. 2003. Feature economy in sound systems. Phonology 20. 287–333.
Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In Goldsmith (1995), 245–306.
Crosswhite, Katherine. 2004. Vowel reduction. In Hayes et al. (2004), 191–231.
de Lacy, Paul. 2002. The formal expression of markedness. Ph.D. dissertation, University of Massachusetts, Amherst.
de Lacy, Paul. 2006. Markedness: Reduction and preservation in phonology. Cambridge: Cambridge University Press.
Dinnsen, Daniel A. 1985. A re-examination of phonological neutralization. Journal of Linguistics 21. 265–279.
Di Paolo, Marianne. 1988. Pronunciation and categorization in sound change. In Kathleen Ferrara, Becky Brown, Keith Walters & John Baugh (eds.) Linguistic change and contact: Proceedings of the 16th Annual Conference on New Ways of Analyzing Variation in Language, 84–92. Austin: University of Texas.
Docherty, Gerard J. 1989. An experimental phonetic study of the timing of voicing in English obstruents. Ph.D. dissertation, University of Edinburgh.
Durand, Jacques & Francis Katamba (eds.) 1995. Frontiers of phonology: Atoms, structures, derivations. London & New York: Longman.
Edmondson, Thomas & John T. Bendor-Samuel. 1966. Tone patterns of Etung. Journal of African Languages 5. 1–6.
Emeneau, M. B. 1961. Kolami: A Dravidian language. Annamalainagar: Annamalai University.
Ernestus, Mirjam & R. Harald Baayen. 2003. Predicting the unpredictable: Interpreting neutralized segments in Dutch. Language 79. 5–38.
Flemming, Edward. 1995. Auditory representations in phonology. Ph.D. dissertation, University of California, Los Angeles. Published 2002, London & New York: Routledge.
Flemming, Edward. 1996. Evidence for constraints on contrast: The dispersion theory of contrast. UCLA Working Papers in Phonology 1. 86–106.
Flemming, Edward. 2004. Contrast and perceptual distinctiveness. In Hayes et al. (2004), 232–276.
Fougeron, Cécile & Donca Steriade. 1997. Does deletion of French schwa lead to neutralization of lexical distinctions? Proceedings of the 5th European Conference on Speech Communication and Technology, University of Patras, vol. 7, 943–946.
Fourakis, Marios & Gregory K. Iverson. 1984. On the "incomplete neutralization" of German final obstruents. Phonetica 41. 140–149.
Fourakis, Marios & Robert F. Port. 1986. Stop epenthesis in English. Journal of Phonetics 14. 197–221.
Gerfen, Chip. 2002. Andalusian codas. Probus 14. 247–277.
Gerfen, Chip & Kathleen Currie Hall. 2001. Coda aspiration and incomplete neutralisation in Eastern Andalusian Spanish. Unpublished ms., University of North Carolina, Chapel Hill.


Goldrick, Matthew. 2001. Turbid output representations and the unity of opacity. Papers from the Annual Meeting of the North East Linguistic Society 30(1). 231–245.
Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell.
Gordon, Matthew. 1999. Syllable weight: Phonetics, phonology, and typology. Ph.D. dissertation, University of California, Los Angeles.
Gordon, Matthew. 2001. A typology of contour tone restrictions. Studies in Language 25. 405–444.
Goudbeek, Martijn. 2006. The acquisition of auditory categories. Ph.D. dissertation, Radboud University Nijmegen.
Goudbeek, Martijn, Anne Cutler & Roel Smits. 2008. Supervised and unsupervised learning of multidimensionally varying non-native speech categories. Speech Communication 50. 109–125.
Hamann, Silke. 2003. The phonetics and phonology of retroflexes. Ph.D. dissertation, University of Utrecht.
Harris, John. 1985. Phonological variation and change: Studies in Hiberno-English. Cambridge: Cambridge University Press.
Hayes, Bruce. 1999. Phonetically driven phonology: The role of Optimality Theory and inductive grounding. In Michael Darnell, Edith Moravcsik, Frederick Newmeyer, Michael Noonan & Kathleen Wheatley (eds.) Functionalism and formalism in linguistics, vol. 1: General papers, 243–285. Amsterdam & Philadelphia: John Benjamins.
Hayes, Bruce & Tanya Stivers. 1995. The phonetics of post-nasal voicing. Unpublished ms., University of California, Los Angeles.
Hayes, Bruce, Robert Kirchner & Donca Steriade (eds.) 2004. Phonetically based phonology. Cambridge: Cambridge University Press.
Henderson, Eugénie J. A. 1967. Vowel length and vowel quality in Khasi. Bulletin of the School of Oriental and African Studies 30. 564–588.
Henderson, Eugénie J. A. 1992. Khasi clusters and Greenberg's universals. Mon-Khmer Studies 18–19. 61–66.
Herbert, Robert K. 1986. Language universals, markedness theory, and natural phonetic processes. Berlin: Mouton de Gruyter.
Hockett, Charles F. 1955. A manual of phonology. Baltimore: Waverly Press.
Hollenbach, Barbara. 1977. Phonetic vs. phonemic correspondence in two Trique dialects. In W. R. Merrifield (ed.) Studies in Otomanguean phonology, 35–67. Dallas: Summer Institute of Linguistics.
Hume, Elizabeth. 2003. Language specific markedness: The case of place of articulation. Studies in Phonetics, Phonology and Morphology 9. 295–310.
Hume, Elizabeth & Keith Johnson (eds.) 2001a. The role of speech perception in phonology. San Diego: Academic Press.
Hume, Elizabeth & Keith Johnson. 2001b. A model of the interplay of speech perception and phonology. In Hume & Johnson (2001a), 3–26.
Hyman, Larry M. 1976. Phonologization. In Alphonse Juilland (ed.) Linguistic studies offered to Joseph Greenberg, vol. 2, 407–418. Saratoga, CA: Anma Libri.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Ph.D. dissertation, University of Massachusetts, Amherst. Published 1988, New York: Garland.
Itô, Junko. 1989. A prosodic theory of epenthesis. Natural Language and Linguistic Theory 7. 217–259.
Jacobsen, William H., Jr. 1964. A grammar of the Washo language. Ph.D. dissertation, University of California, Berkeley.
Jakobson, Roman, C. Gunnar M. Fant & Morris Halle. 1952. Preliminaries to speech analysis: The distinctive features and their correlates. Cambridge, MA: MIT Press.
Jassem, Wiktor & Lutoslawa Richter. 1989. Neutralization of voicing in Polish obstruents. Journal of Phonetics 17. 317–325.


Jun, Jongho. 1995. Perceptual and articulatory factors in place assimilation: An Optimality Theoretic approach. Ph.D. dissertation, University of California, Los Angeles.
Keyser, Samuel J. & Kenneth N. Stevens. 2001. Enhancement revisited. In Michael Kenstowicz (ed.) Ken Hale: A life in language, 271–291. Cambridge, MA: MIT Press.
Kim, Hyunsoon & Allard Jongman. 1996. Acoustic and perceptual evidence for complete neutralization of manner of articulation in Korean. Journal of Phonetics 24. 295–312.
Kimenyi, Alexandre. 1979. Studies in Kinyarwanda and Bantu phonology. Carbondale: Linguistic Research Inc.
Kingston, John & Randy L. Diehl. 1994. Phonetic knowledge. Language 70. 419–454.
Kiparsky, Paul. 1976. Abstractness, opacity, and global rules. In Andreas Koutsoudas (ed.) The application and ordering of grammatical rules, 160–186. The Hague: Mouton.
Kiparsky, Paul. 1982. From Cyclic Phonology to Lexical Phonology. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part I, 131–175. Dordrecht: Foris.
Kiparsky, Paul. 1985. Some consequences of Lexical Phonology. Phonology Yearbook 2. 85–138.
Kirchner, Robert. 1997. Contrastiveness and faithfulness. Phonology 14. 83–111.
Labov, William. 1971. Methodology. In William O. Dingwall (ed.) A survey of linguistic science, 412–497. College Park: University of Maryland Linguistics Program.
Labov, William, Mark Karen & Corey Miller. 1991. Near-mergers and the suspension of phonemic contrast. Language Variation and Change 3. 33–74.
Labov, William, Malcah Yaeger & Richard Steiner. 1972. A quantitative study of sound change in progress. Philadelphia: US Regional Survey.
Ladefoged, Peter. 1971. Preliminaries to linguistic phonetics. Chicago: Chicago University Press.
Lindblom, Björn. 1986. Phonetic universals in vowel systems. In John J. Ohala & Jeri J. Jaeger (eds.) Experimental phonology, 13–44. Orlando: Academic Press.
Lindblom, Björn. 1990. Explaining phonetic variation: A sketch of the H&H theory. In W. J. Hardcastle & A. Marchal (eds.) Speech production and speech modeling, 403–439. Dordrecht: Kluwer.
Lombardi, Linda. 1991. Laryngeal features and laryngeal neutralization. Ph.D. dissertation, University of Massachusetts, Amherst.
Lombardi, Linda (ed.) 2001a. Segmental phonology in Optimality Theory: Constraints and representations. Cambridge: Cambridge University Press.
Lombardi, Linda. 2001b. Why Place and Voice are different: Constraint-specific alternations in Optimality Theory. In Lombardi (2001a), 13–45.
Martinet, André. 1952. Function, structure, and sound change. Word 8. 1–32.
Martinet, André. 1955. Économie des changements phonétiques. Berne: Francke.
Martinet, André. 1964. Elements of general linguistics. Chicago: University of Chicago Press.
McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108.
McCarthy, John J. 1994. The phonetics and phonology of Semitic pharyngeals. In Patricia Keating (ed.) Phonological structure and phonetic form: Papers in laboratory phonology III, 191–233. Cambridge: Cambridge University Press.
Milroy, James & John Harris. 1980. When is a merger not a merger? The MEAT/MATE problem in a present-day English vernacular. English World-Wide 1. 199–210.
Minkova, Donka. 2004. Philology, linguistics, and the history of [hw]~[w]. In Anne Curzan & Kimberly Emmons (eds.) Studies in the history of the English language II: Unfolding conversations, 7–46. Berlin & New York: Mouton de Gruyter.
Mohanan, K. P. 1982. Lexical Phonology. Ph.D. dissertation, MIT. Distributed by Indiana University Linguistics Club.
Myers, Scott & Benjamin B. Hansen. 2005. The origin of vowel-length neutralisation in vocoid sequences: Evidence from Finnish speakers. Phonology 22. 317–344.
Ní Chiosáin, Máire & Jaye Padgett. 2001. Markedness, segment realization, and locality in spreading. In Lombardi (2001a), 118–156.


Nunberg, Geoffrey. 1980. A falsely reported merger in eighteenth-century English: A study in diachronic variation. In William Labov (ed.) Locating language in time and space, 221–250. New York: Academic Press.
Ohala, John J. 1990. The phonetics and phonology of aspects of assimilation. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 258–275. Cambridge: Cambridge University Press.
Oostendorp, Marc van. 2008. Incomplete devoicing in formal phonology. Lingua 118. 1362–1374.
Padgett, Jaye. 2003. Contrast and post-velar fronting in Russian. Natural Language and Linguistic Theory 21. 39–87.
Pater, Joe. 1999. Austronesian nasal substitution and other NC̥ effects. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 310–343. Cambridge: Cambridge University Press.
Peterson, Gordon E. & Ilse Lehiste. 1960. Duration of syllable nuclei in English. Journal of the Acoustical Society of America 32. 693–703.
Port, Robert F. & Penny Crawford. 1989. Incomplete neutralization and pragmatics in German. Journal of Phonetics 17. 257–282.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Pulleyblank, Douglas. 1995. Feature geometry and underspecification. In Durand & Katamba (1995), 3–33.
Rialland, Annie. 1986. Schwa et syllabes en Français. In W. Leo Wetzels & Engin Sezer (eds.) Studies in compensatory lengthening, 187–226. Dordrecht: Foris.
Rudin, C. 1980. Phonetic evidence for a phonological rule: G-deletion in Turkish. Research in Phonetics 1. 217–232.
Rumsey, Alan. 2000. Bunuba. In R. M. W. Dixon & Barry J. Blake (eds.) Handbook of Australian languages, vol. 5, 35–152. Oxford: Oxford University Press.
Sapir, Edward. 1933. La réalité psychologique des phonèmes. Journal de Psychologie Normale et Pathologique 30. 247–265. English version published 1949 as The psychological reality of phonemes in David G. Mandelbaum (ed.) Selected writings of Edward Sapir in language, culture, and personality, 46–60. Berkeley: University of California Press.
Senn, Alfred. 1966. Handbuch der litauischen Sprache. Heidelberg: Carl Winter.
Silverman, Daniel. 1996. Phonology at the interface of phonetics and morphology: Root-final laryngeals in Chong, Korean, and Sanskrit. Journal of East Asian Linguistics 5. 301–322.
Silverman, Daniel. 2004. On the phonetic and cognitive nature of alveolar stop allophony in American English. Cognitive Linguistics 15. 69–93.
Silverman, Daniel. 2006. A critical introduction to phonology: Of sound, mind, and body. London & New York: Continuum.
Silverman, Daniel. 2010. Neutralization and anti-homophony in Korean. Journal of Linguistics 46. 453–482.
Simonet, Miquel, Marcos Rohena-Madrazo & Mercedes Paz. 2008. Preliminary evidence for incomplete neutralization of coda liquids in Puerto Rican Spanish. In Laura Colantoni & Jeffrey Steele (eds.) Selected Proceedings of the 3rd Conference on Laboratory Approaches to Spanish Phonology, 72–86. Somerville, MA: Cascadilla Press.
Smolensky, Paul. 1993. Harmony, markedness, and phonological activity. Paper presented at the Rutgers Optimality Workshop 1, Rutgers University (ROA-87).
Solé, Maria-Josep. 2007. Controlled and mechanical properties in speech: A review of the literature. In Maria-Josep Solé, Patrice Speeter Beddor & Manjari Ohala (eds.) Experimental approaches to phonology, 302–321. Oxford: Oxford University Press.
Steriade, Donca. 1994. Positional neutralization and the expression of contrast. Unpublished ms., University of California, Los Angeles.


Steriade, Donca. 1995. Underspecification and markedness. In Goldsmith (1995), 114–174.
Steriade, Donca. 1997. Phonetics in phonology: The case of laryngeal neutralization. Unpublished ms., University of California, Los Angeles.
Steriade, Donca. 2000. Paradigm uniformity and the phonetics–phonology boundary. In Michael B. Broe & Janet B. Pierrehumbert (eds.) Papers in laboratory phonology V: Acquisition and the lexicon, 313–334. Cambridge: Cambridge University Press.
Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual account. In Hume & Johnson (2001a), 219–250.
Stevens, Kenneth N. & Sheila Blumstein. 1981. The search for invariant acoustic correlates of phonetic features. In Peter D. Eimas & Joanne L. Miller (eds.) Perspectives on the study of speech, 1–38. Mahwah, NJ: Lawrence Erlbaum.
Stevens, Kenneth N. & Samuel J. Keyser. 1989. Primary features and their enhancement in consonants. Language 65. 81–106.
Stevens, Kenneth N., Samuel J. Keyser & Haruko Kawasaki. 1986. Toward a phonetic and phonological theory of redundant features. In Joseph S. Perkell & Dennis H. Klatt (eds.) Invariance and variability in speech processes, 426–449. Hillsdale, NJ: Lawrence Erlbaum.
Tatham, Mark A. A. 1971. Classifying allophones. Language and Speech 14. 140–145.
Teoh, Boon Seong. 1988. Aspects of Malay phonology revisited: A non-linear approach. Ph.D. dissertation, University of Illinois.
Traill, Anthony. 1985. Phonetic and phonological studies of !Xóõ Bushman. Hamburg: Buske.
Trouvain, Jürgen & William J. Barry (eds.) 2007. Proceedings of the 16th International Congress of Phonetic Sciences. Saarbrücken: Saarland University.
Trubetzkoy, Nikolai S. 1939. Grundzüge der Phonologie. Göttingen: Vandenhoeck & Ruprecht. Translated 1969 by Christiane A. M. Baltaxe as Principles of phonology. Berkeley & Los Angeles: University of California Press.
Trudgill, Peter. 1974. The social differentiation of English in Norwich. Cambridge: Cambridge University Press.
Urbanczyk, Suzanne. 1996. Patterns of reduplication in Lushootseed. Ph.D. dissertation, University of Massachusetts, Amherst.
Wang, William S.-Y. & Charles Fillmore. 1961. Intrinsic cues and consonant perception. Journal of Speech and Hearing Research 4. 130–136.
Warner, Natasha, Allard Jongman, Joan A. Sereno & Rachèl Kemps. 2004. Incomplete neutralization and other sub-phonemic durational differences in production and perception: Evidence from Dutch. Journal of Phonetics 32. 251–276.
Willerman, Raquel. 1994. The phonetics of pronouns: Articulatory bases of markedness. Ph.D. dissertation, University of Texas, Austin.
Yip, Moira. 1996. Lexical optimization in languages without alternations. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, vol. 2, 757–788. Salford: ESRI.
Yu, Alan C. L. 2004. Explaining final obstruent voicing in Lezgian: Phonetics and history. Language 80. 73–97.
Yu, Alan C. L. 2007a. Tonal phonetic analogy. In Trouvain & Barry (2007), 1749–1752.
Yu, Alan C. L. 2007b. Understanding near mergers: The case of morphological tone in Cantonese. Phonology 24. 187–214.
Zec, Draga. 1995. The role of moraic structure in the distribution of segments within syllables. In Durand & Katamba (1995), 149–179.
Zhang, Jie. 2001. The effects of duration and sonority on contour tone distribution: Typological survey and formal analysis. Ph.D. dissertation, University of California, Los Angeles.
Zhang, Jie. 2002. The effects of duration and sonority on contour tone distribution: Typological survey and formal analysis. New York: Routledge.

81 Local Assimilation

Elizabeth C. Zsiga

1 Overview

Local assimilation is a phonological alternation in which two sounds that are adjacent become more similar. Its opposite is dissimilation, an alternation in which two sounds that are similar become more different (see chapter 60: dissimilation). Local assimilation can also be contrasted with long-distance assimilation (harmony – see also chapter 91: vowel harmony: opaque and transparent vowels; chapter 77: long-distance assimilation of consonants; chapter 78: nasal harmony; chapter 118: turkish vowel harmony; chapter 123: hungarian vowel harmony), in which sounds that are not immediately string-adjacent influence one another, and with coalescence (see Casali 1996; Pater 1999), in which two adjacent sounds merge into a single segment that shares properties of both.

Local assimilation can be illustrated by different forms of the English negative prefix /ɪn-/, as in (1). Examples in (1a) and (1b) illustrate common place assimilation: the basic form of the nasal consonant is /n/ (1a), but it assimilates to the place of articulation of a following stop (1b). When the prefix precedes a labiodental fricative (1c), assimilation of /n/ is optional, as it is with other prefixes or across a word boundary (1d). In words like illegal and irregular (1e), the /n/ is not pronounced at all: in Latin, the /n/ became identical to a following /l/ or /r/. These different aspects of the /ɪn-/ alternation illustrate many of the questions and issues that arise in the cross-linguistic description of local assimilation.

(1) An example of local assimilation in English
    a. i[n]ability    i[nh]ospitable   i[ns]olvent
    b. i[mp]ossible   i[mb]alance   i[nt]erminable   i[nd]ecisive   i[ŋk]ongruent
    c. i[nf]requent   or i[ɱf]requent
       i[nv]ariant    or i[ɱv]ariant
    d. u[nb]alanced   or u[mb]alanced
       i[np]assing    or i[mp]assing
    e. i[l]egal    (Latin il-legalis)
       i[r]egular  (Latin ir-regularis)

If two adjacent sounds come to share just one feature, or a subset of their features, the assimilation is termed partial, illustrated by the word impossible, where the consonants in prefix and root share place of articulation, but not nasality or voicing. If two adjacent sounds become identical, as in illegalis, the assimilation is total or complete.

It is also useful to distinguish the direction of assimilation. In a sequence of sounds AB, if A changes to become more like B, the assimilation is termed anticipatory: A anticipates some feature of B. If B changes to become more like A, the assimilation is perseverative: some feature of A continues into B. (Anticipatory assimilation may also be called regressive, since the assimilating feature is moving backwards, and perseverative assimilation may be termed progressive, since the assimilating feature is moving forward.) The assimilations in (1) are all anticipatory.

Local assimilation is the most common type of phonological alternation, and as such has played an important role in phonological theory. Phonological issues that arise with respect to local assimilation include the following:

(2) a. What features assimilate?
    b. What groups of features assimilate together?
    c. How can directional asymmetries be accounted for?
    d. What is the influence of morphological and prosodic context?
    e. What are the roles of production and perception in local assimilation?
    f. How is local assimilation different from co-articulation?
    g. How should local assimilation be formalized?

These questions concerning the nature and representation of local assimilation will be addressed in the remainder of this chapter, which is divided into two parts. §2 provides a cross-linguistic sampling of types of local assimilation, providing the data for more general theoretical discussion that follows in §3. Issues addressed in §3 are directionality and perception (§3.1), production and co-articulation (§3.2), and formalism (§3.3). In addition to these larger questions that focus on the linguistic status of assimilation per se, other more specific issues often come up in the discussion of particular datasets or types of assimilation. Various processes of local assimilation have been important in providing evidence for and against phonological issues such as underspecification, privative vs. binary features, feature geometry, and Lexical Phonology. Such connections will not be treated in depth in this chapter, but will be noted, along with cross-references to other chapters where the issue is addressed more fully.

2 Examples of local assimilation

Local assimilation can affect nearly every phonological feature. In fact, participating in assimilation is considered prime evidence for featural status (McCarthy 1994; Hume and Odden 1996; see also chapter 17: distinctive features). Some of the most common types of local assimilation are exemplified below.

2.1 Voicing and other laryngeal features

When obstruent consonants become adjacent, they often come to agree in voice, and sometimes in other laryngeal features as well. For example, in Russian, a string of obstruents always agrees in voicing with the rightmost obstruent in the sequence (Jakobson 1978; Padgett 2002).

(3) Voicing assimilation in Russian
    a. [ot papɨ]      'from papa'
       [od babuʃki]   'from grandma'
       [od vzbutʃki]  'from a scolding'
    b. [ot fspleska]  'from a splash'
       [ot mamɨ]      'from mama'

A similar alternation is found in Yiddish (Katz 1987, cited in Lombardi 1999), illustrated in (4).

(4) Voicing assimilation in Yiddish
    [vog] 'weight'   but  [vok-ʃoi]    'scale'
    [bak] 'cheek'         [bag-bejn]   'cheekbone'
                          [nud-nik]    'boring person'
                          [mit-niten]  'co-respondent'
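The generalization behind (3) and (4), that every obstruent in a cluster takes on the voicing of the rightmost obstruent, is easy to state procedurally. The following Python sketch is a toy illustration only; the segment inventory and the ASCII stand-ins for the transcriptions are simplifying assumptions, not part of either published analysis:

    # Minimal sketch of anticipatory (right-to-left) voicing agreement,
    # as in Russian (3) and Yiddish (4). Sonorants and vowels are not
    # triggers and break up obstruent spans, as described in the text.
    TO_VOICELESS = {'b': 'p', 'd': 't', 'g': 'k', 'z': 's', 'v': 'f'}
    TO_VOICED = {v: k for k, v in TO_VOICELESS.items()}   # p->b, t->d, ...
    OBSTRUENTS = set(TO_VOICELESS) | set(TO_VOICED)

    def agree_voicing(word):
        """Make every obstruent in a span agree with the rightmost obstruent."""
        out = list(word)
        i = 0
        while i < len(out):
            if out[i] not in OBSTRUENTS:
                i += 1
                continue
            j = i                      # find the span of adjacent obstruents
            while j < len(out) and out[j] in OBSTRUENTS:
                j += 1
            trigger_is_voiced = out[j - 1] in TO_VOICELESS
            for k in range(i, j - 1):  # the rightmost obstruent is the trigger
                table = TO_VOICED if trigger_is_voiced else TO_VOICELESS
                out[k] = table.get(out[k], out[k])
            i = j
        return ''.join(out)

    print(agree_voicing('bakbejn'))   # -> 'bagbejn' (cf. Yiddish [bag-bejn])
    print(agree_voicing('mitniten'))  # -> 'mitniten': [n] is no trigger

The second call illustrates the neutrality of sonorants taken up below: a [t] before [n] finds no obstruent trigger to its right, so nothing changes.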

In both Russian and Yiddish, the assimilation is anticipatory: consonants anticipate the voicing of the rightmost obstruent in the cluster, whether voiced or voiceless. This is the unmarked direction for assimilation (Lombardi 1999). Perseverative voicing assimilation, in which the voicing value is determined by the leftmost consonant, may also be seen, usually in the case of assimilation of a suffix to a stem (Lombardi 1999; Borowsky 2000). Examples from English and Turkish are shown in (5).

(5) a. Voicing assimilation in English (plural and past tense)
       [ro-z]  'rows'   [ro-d]  'rowed'
       [ræg-z] 'rags'   [beg-d] 'begged'
       [rak-s] 'rocks'  [kɪk-t] 'kicked'
    b. Voicing assimilation in Turkish (Lewis 1967)
       [git-tim]       'go-past.1sg'
       [kɯz-dɯm]       'got mad-past.1sg'
       [komʃu-muz-dan] 'neighbor-poss-abl'
       [raf-tan]       'shelf-abl'

Cases of voicing assimilation have been central to the debate over whether [voice] is a privative or binary feature. Cases like that of Yiddish, where either a voiced or voiceless cluster may be formed, and where assimilation is independent of syllable-final devoicing, have been crucial. Is it possible to account for alternations such as [vog] ~ [vokʃoi] without reference to a feature [−voice]? See Cho (1999), Lombardi (1999), and Wetzels and Mascaró (2001) for further discussion.

Another important point to note about voicing assimilation in consonant clusters is that sonorant consonants are often neutral with respect to voicing assimilation (see chapter 8: sonorants). Thus there are sequences like [ot fspleska] in Russian and [mitniten] in Yiddish, where the rightmost consonant in the cluster is a (voiced) sonorant, but the other consonants are voiceless. (Though note that the behavior of Russian /w/, which alternates with [v] and participates only partially in voicing assimilation, has been the subject of much discussion: see Padgett 2002 and references therein.) On the other hand, the laryngeal features of obstruents and sonorants do sometimes interact. Common cases include intervocalic (or intersonorant) voicing (6) and post-nasal voicing (7).

(6) Intersonorant voicing in Korean (Silva 1992)
    [pap]  'rice'  [i-bab-i] 'this rice-nom'
    [kuk]  'soup'  [i-gug-i] 'this soup-nom'
    [tal]  'moon'  [pan-dal] 'half moon'
    [palp] 'walk'  [palb-ɯn] 'that is walking'
    /motun kilim/  → [modun gilim]  'every picture'
    /kulimul pota/ → [kulimul boda] 'to look at a picture'

(7) a. Post-nasal voicing in Yao (Nurse and Phillipson 2003)
       [ku-pélék-a] 'to send'   [kuu-m-bélek-a] 'to send me'
       [ku-túm-á]   'to order'  [kuu-n-dúm-a]   'to order me'
       [ku-kwéél-a] 'to climb'  [kuu-ŋ-gwéel-a] 'to climb on me'
    b. Post-nasal voicing in Puyo Pungo Quechua (Orr 1962; Rice 1993; Pater 1999)
       [sinik-pa] 'porcupine's'  [wasi-ta]  'the house'
       [kam-ba]   'yours'        [wakin-da] 'the others'

It may also be the case that obstruents cause devoicing in sonorants, as in high vowel devoicing in Japanese (8a), in which /i/ and /u/ devoice when surrounded by voiceless consonants, or sonorant devoicing in English (8b), in which /l/ and /r/ devoice when preceded by a voiceless aspirated consonant.

(8) a. High vowel devoicing in Japanese (Tsuchida 1997)
       [kokɯ̥sai] 'international'
       [ki̥tai]   'expectation'
       [aki̥ko]   'woman's name'
       [ɸɯ̥ton]   'bed'
    b. Sonorant devoicing in English
       [pl̥e] 'play'
       [pr̥e] 'pray'
       [tr̥u] 'true'
       [kl̥e] 'clay'
       [kr̥o] 'crow'


The interaction or non-interaction of sonorants and obstruents in voicing assimilation has played an important role in feature theory. On the one hand, the non-participation of sonorants has been cited as evidence that sonorants are underspecified for voice in underlying representation, with later fill-in by rule (Hayes 1984; Kiparsky 1985; Itô and Mester 1986). Alternatively, it has been argued that cases of sonorant/obstruent interactions involve features other than [voice] or [sonorant]. Rice (1993) argues that sonorants are specified with a different feature, [sonorant voice], which may spread to neighboring consonants, accounting for cases of intersonorant or post-nasal voicing. In the case of devoicing, as in (8), the assimilating feature may be aspiration: [spread glottis] rather than [−voice]. For further discussion, see chapter 17: distinctive features, chapter 7: feature specification and underspecification, and chapter 8: sonorants.

Another approach has been to argue that cases of apparent voicing assimilations between vowels and consonants are not featural assimilation at all, but phonetic co-articulation. Jun (1995), for example, argues that Korean intersonorant voicing comes about because the glottal opening gesture for lax voiceless consonants is weak, allowing vocal fold vibration to continue throughout a short closure duration. Browman and Goldstein (1988) point out that the large and late glottal opening gesture for English initial aspirated stops is sufficient to delay voice onset in a following liquid, without further addition of a rule assimilating either [−voice] or [spread glottis]. §3 below returns to the issue of disentangling co-articulation and assimilation.

When languages have multiple laryngeal contrasts, examples of the assimilation of multiple laryngeal features have been identified. Smyth (1920) gives examples of assimilation of both aspiration and voicing in Ancient Greek.

(9) Ancient Greek assimilation of both voicing and aspiration (Smyth 1920)
    a. [grapʰ-o]       'I write'
       [gegrap-tai]    'has been written'
       [grab-den]      'writing/scraping'
    b. [trib-o]        'I rub'
       [tetrip-tai]    'has been rubbed'
       [etripʰ-tʰeːn]  'it was rubbed'

Sanskrit also exhibits assimilation of multiple laryngeal features. The pattern of assimilation of voicing and aspiration in Sanskrit is complex, and its description and analysis have a long history (Whitney 1889; Wackernagel 1896). The examples in (10) represent part of this interaction, and serve to illustrate local assimilation of voicing and aspiration from the coda of the verb root to the onset of the suffix.

(10) Assimilation of voicing and aspiration in Sanskrit (Calabrese and Keyser 2006)
     /bʰaudʰ-ta/ → [buddʰa]  'awake-pst part'
     /rudʰ-ta/   → [ruddʰa]  'obstruct-pst part'
     /saːdʰ-ta/  → [saːddʰa] 'succeed-pst part'

Cases of simultaneous assimilation of more than one feature, such as those in (9) and (10), have been important in providing evidence for hierarchical organization of features. See §3.3 below, and chapter 27: the organization of features.

2.2 Nasality

Assimilation of nasality is very common. Vowels generally become nasalized when adjacent to a nasal consonant, as illustrated in (11). Such nasalization may be anticipatory, as in English (11a), or perseverative, as in Sundanese (11b). Sundanese nasalization can also be iterative and in some cases long-distance, applying across an intervening /h/: see Cohn (1993) and chapter 78: nasal harmony.

(11) a. Anticipatory nasalization in English
        [kʰæt] 'cat'    [kʰæ̃n] 'can'
        [ɹɪb]  'rib'    [ɹɪ̃m]  'rim'
        [θɪk]  'thick'  [θɪ̃ŋ]  'thing'
     b. Perseverative nasalization in Sundanese (Cohn 1993)
        [ŋãtur] 'arrange'  [mãrios] 'examine'
        [ɲĩãr]  'seek'     [mãhãl]  'expensive'

Cohn (1993) argues that, in addition to differing in direction, English and Sundanese represent two distinct types of assimilation: the one categorical and phonological (Sundanese), the other gradient and phonetic (English). §3.2 below returns to this distinction. Assimilation of nasality may also apply between adjacent consonants, as shown in (12).

(12) a. Nasal assimilation from onset to coda in Korean (Kim-Renaud 1991)
        [pap] 'rice'      [pam mekta] 'eat rice'
        [ot]  'clothes'   [on man]    'only clothes'
        [jak] 'medicine'  [jaŋ mekta] 'take medicine'
     b. Nasal assimilation from prefix to root in Twi
        [bá] 'comes'  [m-má] 'does not come'
        [gu] 'pours'  [ŋ-ŋu] 'does not pour'
        cf. [pe] 'likes'  [m-pe] 'does not like'
            [tɔ] 'does'   [n-tɔ] 'does not do'

2.3 Continuant

Stops often become continuants when surrounded by, or in some cases just preceded by, continuants. The change from stop to fricative, termed spirantization, may be considered assimilation of the feature [continuant] (see chapter 28: the representation of fricatives). Examples from Spanish and Italian are shown in (13).

(13) a. Post-continuant spirantization of voiced stops in Spanish
         /la gata/   → [la ɣata]   'the (fem) cat'
         /la data/   → [la ðata]   'the date'
         /la bola/   → [la βola]   'the ball'
         /las gatas/ → [las ɣatas] 'the (fem) cats'
         /las bolas/ → [las βolas] 'the balls'
      b. Intervocalic spirantization of voiceless stops in Florentine Italian (Villafana 2006)
         /la kaza/  → [la xaza]  'the house'
         /la torta/ → [la θorta] 'the cake'
         /la palːa/ → [la ɸalːa] 'the ball'


Similarly, continuants often "harden" to stops or affricates in post-nasal position, an alternation that may be considered assimilation of [−continuant] from the preceding nasal (Padgett 1994).

(14) a. Post-nasal hardening in Setswana (Tlale 2006)
        [supa]  'point at'  [n-tsʰupa] 'point at me'
        [ʃapa]  'hit'       [ɲ-tʃʰapa] 'hit me'
        [xapa]  'capture'   [ŋ-kxʰapa] 'capture me'
        [rut'a] 'teach'     [n-tʰut'a] 'teach me'
     b. Post-nasal hardening in Kikuyu (Armstrong 1967; Clements 1985)
        imperative   1sg imperfect
        [βur-a]      [m-bur-eete]  'lop off'
        [reh-a]      [n-deh-eete]  'pay'
        [ɣor-a]      [ŋ-gor-eete]  'buy'

Spirantization and hardening are not necessarily considered to be cases of assimilation, however, but cases of a separate phonological process of lenition or fortition, in which features other than [continuant] may be involved. Spanish stops may weaken to more open approximant articulations (/la bola/ → [la β̞ola]) and intervocalic /k/ in Florentine often weakens to [h] (/la kaza/ → [la haza]). Conversely, post-nasal fortition in Setswana involves changes in laryngeal features as well as in continuant. See Kirchner (1998), Lavoie (2001), Gurevich (2003), and chapter 66: lenition for numerous further examples and discussion.

2.4 Consonantal place of articulation

2.4.1 Nasal place assimilation

Assimilation of place of articulation is probably the most ubiquitous phonological alternation. Especially common is nasal place assimilation: nasals assimilate in place of articulation to a following consonant. Examples could be found in almost any language. Nasal place assimilation in English and in the African languages Yao, Twi, Setswana, and Kikuyu was seen in examples (1), (7a), (12b), (14a), and (14b) above. Additional examples are shown in (15): Catalan (a), Zoque (b), Malayalam (c), Sri Lankan Creole (d), and Zulu (e). Zulu is included to illustrate the point that in place assimilation to complex segments such as clicks and labial-velars, assimilation to the dorsal place of articulation is most common (Maddieson and Ladefoged 1989; see also chapter 18: the representation of clicks).


(15) a. Nasal place assimilation in Catalan (Mascaró 1976; Kiparsky 1985)
         so[n] amics    'they are friends'
         so[m] pocs     'they are few'
         so[ɱ] feliços  'they are happy'
         so[n̪ d̪]os      'they are two'
         so[n̠] rics     'they are rich'
         so[ɲ ʎ]iures   'they are free'
         so[ŋ] grans    'they are big'
      b. Nasal place assimilation in Zoque (Wonderly 1946, cited in Padgett 1994)
         [pama]  'clothing'  [m-bama]  'my clothing'
         [tatah] 'father'    [n-datah] 'my father'
         [tʃima] 'calabash'  [ɲ-dʒima] 'my calabash'
         [kaju]  'horse'     [ŋ-gaju]  'my horse'
         [gaju]  'rooster'   [ŋ-gaju]  'my rooster'
         cf. nasal deletion preceding fricatives:
         [faha]   'belt'   [faha]   'my belt'
         [ʃapun]  'soap'   [ʃapun]  'my soap'
         [rantʃo] 'ranch'  [rantʃo] 'my ranch'
      c. Nasal place assimilation in Malayalam (Mohanan 1993)
         [awan] 'he'
         [awam-paraɲɲu]  'he said'
         [awan̪-t̪aɖiccu]  'he became fat'
         [awaɲ-caːɖi]    'he jumped'
         [awaŋ-kaɾaɲɲu]  'he cried'
         [kamalam] (proper name)
         [kamalam-paraɲɲu]  'Kamalam said'
         [kamalan̪-t̪aɖiccu] 'Kamalam became fat'
         [kamalaɲ-caːɖi]    'Kamalam jumped'
         [kamalaŋ-kaɾaɲɲu]  'Kamalam cried'
      d. Assimilation of non-coronal nasals in Sri Lankan Portuguese Creole (Hume and Tserdanelis 2002; Hume 2003)
         Nom sg    Gen sg      Dat sg      Verbal noun
         [maːm]    [maːn-su]   [maːm-pə]   [maːŋ-ki]    'hand'
         [miːtiŋ]  [miːtin-su] [mitim-pə]  [miːtiŋ-ki]  'meeting'
         [siːn]    [siːn-su]   [siːn-pə]   [siːn-ki]    'bell'
      e. Assimilation to the dorsal place of clicks in Zulu (Doke 1926, cited in Padgett 2002)
         /iziN-/ (class 10 plural prefix)
         [izim-papʰe]   'feathers'
         [izin-ti]      'sticks'
         [iziŋ-kezɔ]    'spoons'
         [iziŋ-ǀezu]    'slices'
         [iziŋ-ǂuŋǂulu] 'species of bird (pl)'
         [iziŋ-ǁaŋǁa]   'green frogs'

A number of cross-linguistic differences and similarities in nasal place assimilation are worth noting.


As in the Zoque and Zulu examples, it is often the case that a nasal (or nasal-final) affix undergoes obligatory place assimilation in every lexical item in which it occurs. In such cases, it may be impossible to determine empirically the basic or underlying form, and it is often argued that such nasals are unspecified for place (e.g. Kiparsky 1985). A segment specified only as [nasal], but with no underlying place features, may be symbolized /N/. Depending on how a particular alternation is formalized, however, underspecification may or may not be assumed. (See chapter 7: feature specification and underspecification.)

Relevant to the debate on underspecification is the observation that the coronal nasal assimilates more often than either labial or velar nasals. In many languages, such as Catalan and Spanish (Navarro Tomás 1970; Honorof 1999), only the coronal nasal assimilates, although for some languages such as Malayalam, assimilation of non-coronal nasals is also attested. In Sri Lankan Creole, all nasals except [n] assimilate. Asymmetries in place assimilation are discussed further in §3.1 and §3.2 below.

Another point of interest in nasal place assimilation is whether or not nasals assimilate to [+continuant] segments. In Catalan and Sri Lankan, nasals assimilate to both stops and continuants, but in Malayalam and Zoque, nasals assimilate only to stops. In Malayalam, unassimilated nasal–fricative clusters are tolerated, but in Zoque the nasal deletes when a fricative follows. In other languages, other processes may apply to repair disfavored nasal–fricative clusters. In Setswana and Kikuyu ((14) above), fricatives harden to stops or affricates in post-nasal position. In English, as was noted in (1), assimilation of /n/ to /f/ is optional: the nasal–fricative cluster in the word infrequent may be pronounced [nf] in careful speech or [ɱf] in less careful speech, but impolite is invariably [mp]. The propensity for place assimilation and continuant assimilation to occur together leads Padgett (1994), among others, to posit a dependency relation between features for place and the feature [continuant], though this requires a different explanation for cases like Spanish so[ɱ f]elices and English i[ɱf]requent (see the discussion in §3.2 below).

Data on nasal place assimilation, probably more because it is so common than because of any inherent phonological property, has often been invoked in debates on domains of application. Catalan nasal place assimilation played an important role in arguments for cyclic rule application, and the distinction between lexical and post-lexical phonology (Kiparsky 1985). The observation that place assimilation applies to English /ɪn-/ impossible, /kan-/ congruent, and /sɪn-/ sympathy, but not /ʌn-/ unprepared, provided important data for level ordering of affixes in English. The ability of nasal place assimilation to create sounds that are not part of the underlying inventory of the language, such as [ɱ] in English and Catalan, has informed debate on Structure Preservation, and on the lexical/post-lexical distinction (Kiparsky 1985; chapter 94: lexical phonology and the lexical syndrome). Finally, data on nasal place assimilation has also been crucial in the theory of feature geometry (discussed in §3.3 below), and in development of the theory of Articulatory Phonology (discussed in §3.2).
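The underspecification analysis sketched above can be pictured computationally: /N/ carries no place of its own and is realized by reading place off the following consonant. The Python sketch below is a toy illustration under simplified assumptions (the feature tables are invented for the example; the voiced stem-initial stops of the Zoque forms are taken as given in the inputs, since post-nasal voicing is a separate process):

    # Toy illustration of a placeless nasal prefix /N-/ acquiring the place
    # of a following stop (cf. the Zoque and Zulu discussion above).
    PLACE = {'p': 'labial', 'b': 'labial', 't': 'coronal', 'd': 'coronal',
             'k': 'dorsal', 'g': 'dorsal'}
    NASAL_BY_PLACE = {'labial': 'm', 'coronal': 'n', 'dorsal': 'ŋ'}

    def realize_prefix(stem):
        """Realize /N-/ before a stem: place is shared from the stem onset."""
        place = PLACE.get(stem[0])
        if place is None:
            return 'N-' + stem      # no stop follows: place stays unspecified
        return NASAL_BY_PLACE[place] + '-' + stem

    for stem in ['bama', 'datah', 'gaju']:
        print(realize_prefix(stem))   # m-bama, n-datah, ŋ-gaju (cf. (15b))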

2.4.2 Other consonantal place assimilations

Place assimilation most often involves nasals, but other consonants undergo place assimilation as well. In Korean, for example, optional place assimilation applies to certain obstruent clusters, as illustrated in (16): final [t] may assimilate to a following labial or dorsal stop, and [p] to a following dorsal.


(16) Place assimilation in Korean obstruents (Kim-Renaud 1991; Kochetov and Pouplier 2008)
     /patʰ-pota/ → [pat p'oda]  or [pap p'oda]  'rather than the field'
     /patʰ-kwa/  → [pat k'wa]   or [pak k'wa]   'field and'
     /pap-kɯlɯs/ → [pap k'ɯɾɯt] or [pak k'ɯɾɯt] 'rice bowl'
     /tʰop-kʰal/ → [tʰop kʰal]  or [tʰok kʰal]  'handsaw'
     cf. /pap-to/ → [papt'o]  *[patt'o]  'rice also'
         /pak-to/ → [pakt'o]  *[patt'o]  'outside also'
         /kuk-po/ → [kukp'o]  *[kupp'o]  'national treasury'

In other cases, subsidiary place features assimilate between adjacent consonants. Most often the features [anterior] and [distributed] assimilate in sequences of coronal consonants. It was seen above that Catalan nasals (15a) assimilate to a following consonant at all places of articulation. Catalan laterals also assimilate (17), but only to a following coronal. (See chapter 31: lateral consonants for further discussion.)

(17) Assimilation of laterals in Catalan (Mascaró 1976)
     e[l p]a     'the bread'
     e[l̪ d̪]ia    'the day'
     e[l̠ r]ic    'the rich'
     e[ʎ ʒ]ermà  'the brother'

In English, coronal stops and nasals assimilate the [−anterior] feature of a following [ɹ], or the dental articulation of a following dental fricative.

(18) Assimilation of retroflex and dental in English
     train   [ʈɹeɪn]
     drain   [ɖɹeɪn]
     tenth   [tʰen̪θ]
     eighth  [eɪt̪θ]
     width   [wɪd̪θ]

In Sanskrit, Murinbata, and other languages of India and Australia (Steriade 2001), place assimilation among coronal clusters is often perseverative: that is, the onset assimilates to the coda, as shown in (19). This reversal in expected direction is discussed further in §3.1 below.

(19) a. Perseverative retroflex assimilation in Sanskrit (Whitney 1889, cited in Steriade 2001)
        /iʂ-ta/   → [iʂ-ʈa]   'sacrificed'
        /ʂaɳ-nam/ → [ʂaɳ-ɳam] 'of six'
        /giːr-su/ → [giːr-ʂu] 'in songs'
     b. Perseverative retroflex assimilation in Murinbata (Street and Mollinjin 1981, cited in Steriade 2001)
        /pan-ʈal/     → [pan-tal]     'cut it-3sg'
        /ŋudu-ɖeɖ-nu/ → [ŋudu-ɖeɖ-ɳu] 'roll-fut'


As was noted with respect to nasal place assimilation, evidence from consonantal place assimilation in general has been crucial in the development of phonological theory. One point of particular interest is how place assimilation, which may involve a whole set or subset of different features, may be formalized as a unitary process (see §3.3 below). Another point is the problem of directionality (§3.1): why is it that codas usually assimilate to onsets, rather than vice versa? Place assimilation has also played an important role in the debate over assimilation vs. co-articulation, and in the development of the theory of Articulatory Phonology (§3.2).

2.4.3 Vowel and consonant place interactions

While place assimilation usually applies to consonant clusters, vowels and consonants may also assimilate to each other (see chapter 75: consonant–vowel place feature interactions for more discussion). Consonants often assimilate the properties of adjacent vocalic articulations. For example, in the Wakashan language Oowekyala (Howe 2000), velar and uvular consonants contrast in rounding in initial position and following most vowels (20a). Immediately following /u/, however, all velars and uvulars assimilate to the vowel's [round] feature (20b).

(20) Oowekyala (Howe 2000)
     a. Contrastive rounding
        [qʷut'a] 'full'
        [quxa]   'bent'
     b. Rounding assimilation
        [pusq'a-xʔit]       'to become very hungry'
        [ƛ'uʔxʷalasu-xʷʔit] 'to become sick'
        [məja-gila]         'make (draw or carve) a fish'
        [ʔamastu-gʷila]     'make kindling'

In the neighboring language Nuxalk, Howe reports that rounding assimilation is anticipatory rather than perseverative: velars and uvulars become round preceding /u/.

Another assimilation from vowel to consonant is palatalization of a consonant adjacent to front vowels and glides. Palatalization may take the form of an alveolar or dental (or sometimes velar) becoming alveopalatal, or it may take the form of secondary articulation, adding an additional high front tongue position without changing the consonant's primary place of articulation. Languages differ in the input sequences that trigger palatalization, and in the resulting outputs. Three examples are shown in (21); see chapter 71: palatalization for further examples and discussion. In English (21a), alveolars become alveopalatals before /j/. In Japanese (21b), alveolars become alveopalatals before /i/, while velar and labial consonants become secondarily palatalized. In one pattern of palatalization in Polish (21c), labials receive secondary palatalization before [i] and [e], while velars and alveolars change their primary place. (Palatalization in Slavic languages is both common and complex: see chapter 121: slavic palatalization and chapter 122: slavic yers.)


(21) a. English palatalization before /j/
         [d ~ dʒ]  grade   gradual
         [t ~ tʃ]  habit   habitual
         [s ~ ʃ]   press   pressure
         [z ~ ʒ]   use     usual
      b. Palatalization in Japanese (Vance 1987; Chen 1996)
         [kas-anai] 'lend-neg'  [kaʃ-ita]   'lend-past'
         [kat-anai] 'win-neg'   [katʃ-itai] 'win-volitional'
         [wak-anai] 'boil-neg'  [wakʲ-itai] 'boil-volitional'
         [job-anai] 'call-neg'  [jobʲ-itai] 'call-volitional'
      c. Palatalization in Polish (Szpyra 1989; Chen 1996)
         [wupɨ]   'booty'  [wupʲ-itɕ]   'to rob'
         [zabava] 'game'   [zabavʲ-itɕ] 'to entertain'
         [zwoto]  'gold'   [zwotɕ-itɕ]  'to gild'
         [kwas]   'acid'   [kwaɕ-itɕ]   'to make sour'
         [rana]   'wound'  [raɲ-itɕ]    'to wound'
         [sok]    'juice'  [sotʃ-ek]    'juice-dim'
         [mex]    'moss'   [meʃ-ek]     'moss-dim'

Vowels may also assimilate to adjacent consonants. In Russian (22a), palatalization on consonants is contrastive, and the vowels [i] and [ɨ] are in complementary distribution: [i] is found in initial position and following palatalized consonants, [ɨ] follows non-palatalized consonants. (See chapter 121: slavic palatalization for additional discussion.) In the Dravidian language Tulu (22b), the accusative suffix /-ɨ/ becomes round when it follows a labial consonant (or another round vowel).

(22) a. Vowel backing in Russian (Halle 1959; Padgett 2002)
         [ivan]     (proper name)  [k-ɨvanu]    'to Ivan'
         [italʲija] 'Italy'        [v-ɨtalʲiju] 'to Italy'
      b. Rounding assimilation in Tulu (Bright 1957, cited in Kenstowicz 1994)
         [katt-ɨ] 'bond-acc'
         [kapp-u] 'blackness-acc'
         [ucc-u]  'snake-acc'

Another type of assimilation from consonant to vowel is vowel lowering. Vowel lowering after uvular, pharyngeal, and laryngeal consonants (the class of guttural consonants) is found in many Semitic, Caucasian, and North American languages (Herzallah 1990; Bessell 1992, 1998; McCarthy 1994; Rose 1996). In Syrian Arabic (23a), for example, the feminine suffix is realized as [a] after laryngeals, pharyngeals, and uvulars, and as [e] after all other consonants. In Oowekyala (23b), /i/ and /u/ are lowered to [e] and [o].

Local Assimilation (23)

a.

Vowel lowering in Syrian Arabic (Cowell 1964; Rose 1996) [daraÚ–e] ‘step’ [œerk-e] ‘society’ [madras-e] ‘school’ [wa(Úh-a] ‘display’ [mni(p-a] ‘good’ [dagga(ö-a] ‘tanning’

b.

Vowel lowering in Oowekyala /Oiq-ila/ → [dliqøela] /qusa/ → [qosa] /hula/ → [hola] /gwiøila/ → [gwiøela]

13

(Howe 2000) ‘to give a name to someone’ ‘bent, crooked’ ‘heap up’ ‘to bake bread’

Vowel lowering has figured prominently in the debates over what features constitute the class of guttural consonants. In various cases the feature has been argued to be [low], [−high], or [pharyngeal]. See chapter 25: pharyngeals for further examples and discussion.

In general, the question of when and how vowels and consonants interact with each other has been important in the area of feature theory. Consonants are often transparent to long-distance vowel-to-vowel assimilation, yet they also interact with vowels in local assimilations, as has been illustrated. Transparency to vocalic alternations suggests that vowels and consonants have different features, or that consonants bear vocalic features only as secondary articulations: e.g. front vowels and palatalized consonants are [−back], while alveolar and dental consonants are [coronal]. Such an approach accounts for cases in which processes of palatalization and rounding result in secondary articulations. But it fails to account for cases where the vowel causes a change in the consonant's primary place of articulation, or where the consonant causes a change in the backness of a vowel. Such alternations have led to proposals (e.g. Clements 1993; Hume 1994; Clements and Hume 1995) that vowels and consonants share the same features: e.g. alveolar consonants, alveopalatal consonants, and front vowels are all [coronal]. See chapter 19: vowel place, chapter 22: consonantal place of articulation, and chapter 75: consonant–vowel place feature interactions.

2.5 Complete assimilations

Complete assimilation occurs when two adjacent sounds become identical. Complete assimilation is particularly common in clusters involving /r/ and /l/ (see chapter 30: the representation of rhotics and chapter 31: lateral consonants). Complete assimilation of the Latin prefix /ɪn-/ to both /l/ and /r/ as in illegalis and irregularis was seen in (1) above. Similar cases are found in Ponapean and Korean.

(24) a. Assimilation of /n/ to /l/ and /r/ in Ponapean (Rehg and Sohl 1981; Rice 1993)
        /nanras/     → [narras]     'ground level of a feasthouse'
        /nanleŋ/     → [nalleŋ]     'heaven'
        /pahn liŋan/ → [pahl liŋan] 'will be beautiful'
        /pahn roŋ/   → [pahr roŋ]   'will listen'
     b. Assimilation of /n/ to /l/ in Korean (Davis and Shin 1999)
        /non-li/   → [nolli]   'logic'
        /tan-lan/  → [tallan]  'happiness'
        /tsʰʌn-li/ → [tsʰʌlli] 'natural law'

In some cases, a sonorant may assimilate completely to a following obstruent. In Arabic (25a), /l/ assimilates to a following coronal, but not to consonants at other places of articulation. In Havana Spanish (25b), the /l/ assimilates completely to most following consonants. The exception is that if the following consonant is a voiceless stop, the /l/ assimilates in all features except [voice].

(25) a. Assimilation of /l/ in Arabic (Kenstowicz 1994)
         [ʔaʃ-ʃams] 'the sun'
         [ʔad-daar] 'the house'
         [ʔan-nahr] 'the river'
         [ʔaz-zajt] 'the oil'
         cf. [ʔal-qamr]   'the moon'
             [ʔal-kitaab] 'the book'
             [ʔal-faras]  'the mare'
      b. Assimilation of /l/ in Havana Spanish (Harris 1985)
         albañil    a[bb]añil   'mason'
         tal droga  ta[dd]roga  'such a drug'
         pulga      pu[gg]a     'flea'
         tal mata   ta[mm]ata   'such a shrub'
         el fino    e[ff]ino    'the refined one'
         el pobre   e[bp]obre   'the poor man'
         el tres    e[dt]res    'the three'

A similar case of near-complete assimilation occurs in Kannada (26). The final consonant of the morpheme meaning "big" copies all features from the following consonant, except that the resulting cluster must be voiced, regardless of input.

(26) Complete assimilation with voicing in Kannada (Roca and Johnson 1999)
     [tere]    'screen'   [hed-dere]    'big screen'
     [kumbaɭa] 'pumpkin'  [heg-gumbaɭa] 'big pumpkin'
     [dzenu]   'bee'      [hedz-dzenu]  'big bee'
     [mara]    'tree'     [hem-mara]    'big tree'
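Stated procedurally, the Kannada pattern is a total copy plus a voicing override. The Python sketch below is a toy illustration under simplified assumptions (ASCII transcriptions, a crude treatment of the affricate, and an invented shape for the prefix):

    # Sketch of (26): the final consonant of 'big' copies the following
    # consonant wholesale, and the resulting cluster surfaces voiced
    # regardless of the input's voicing. Sonorants pass through unchanged.
    TO_VOICED = {'p': 'b', 't': 'd', 'k': 'g', 'ts': 'dz'}

    def big(noun):
        """Prefix 'big' with complete assimilation and obligatory voicing."""
        onset = 'dz' if noun.startswith('dz') else noun[0]
        voiced = TO_VOICED.get(onset, onset)
        return 'he' + voiced + '-' + voiced + noun[len(onset):]

    for noun in ['tere', 'kumbala', 'dzenu', 'mara']:
        print(big(noun))   # hed-dere, heg-gumbala, hedz-dzenu, hem-mara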

Finally, complete local assimilation of one vowel to another can also be found. Many languages will not tolerate successive non-identical vowels (chapter 61: hiatus resolution). While vowel hiatus is often repaired by deleting one vowel or the other (see Casali 1996, 1997), another strategy is assimilation, as shown in (27).

(27) Vowel assimilation in Yoruba (Welmers 1973)
     [owo]     'money'
     [owe-epo] 'oil money'
     [owa-ade] 'Ade's money'

2.6 Instances where local assimilation doesn't apply

The preceding list of types of local assimilation has been long. Nonetheless, there are situations where local assimilation is not typically found. These include, on the one hand, environments where the trigger and target of assimilation tend not to be immediately string-adjacent, as in tone and vowel harmony. On the other hand, there are features for which languages prefer an alternating pattern, such as CVC within a syllable, or stress–unstress within a foot.

Features such as [round], [back], and [advanced tongue root] often assimilate from vowel to vowel within a word, but such assimilation is usually not local at the level of the segment, since vowels are most often separated by consonants (see chapter 91: vowel harmony: opaque and transparent vowels; chapter 118: turkish vowel harmony; chapter 123: hungarian vowel harmony). Similarly, tone assimilations are quite common, and have played an important role in the development of theories of phonological representation. Tone assimilation, however, is also generally a long-distance phenomenon, applying at least from vowel to vowel across intervening consonants, and often across stretches of multiple syllables (see chapter 114: bantu tone; chapter 45: the representation of tone; chapter 107: chinese tone sandhi). Because this chapter focuses on local processes, vowel harmony and tone assimilation are not treated further here.

It has been argued that the features [consonantal] and [sonorant] do not assimilate (McCarthy 1988). Consonants do not become vowels when adjacent to vowels, and vice versa (but see Kaisse 1992 for a possible counterexample). Although consonant clusters may come to agree in sonorancy as a result of nasal assimilation or complete assimilation, the feature [sonorant] does not assimilate independently. But see Rice (1993) for discussion of a feature [sonorant voice], which is proposed to distinguish sonorants from obstruents, and to be active in cases of sonorant/obstruent voicing interactions. See also chapter 8: sonorants and chapter 13: the stricture features.

Length does not assimilate: if anything, lengthening of one segment will induce shortening of neighboring segments, or vice versa (see chapter 64: compensatory lengthening). Stress does not assimilate. If two stressed syllables become adjacent, languages will often resolve the "clash" by moving or deleting a stress to restore the alternating pattern. (See chapter 41: the representation of word stress.)

3 General phonological issues in local assimilation

§2 provided examples of the most common kinds of local assimilations, and pointed out theoretical issues raised by specific cases, such as privativity of the feature [voice] and the featural description of the class of guttural consonants. §3 now turns to broader questions, which are applicable to many or all kinds of local assimilation. These include directionality and perception (§3.1), the relation between assimilation and co-articulation (§3.2), and the formal treatment of local assimilation (§3.3).

3.1 Directionality and perception

In nearly every case discussed above, there has been a preference in the directionality of assimilation. The following principles can be deduced:

(28) a. Assimilation in consonant clusters tends to be anticipatory: the specification of the rightmost consonant dominates.
     b. Codas assimilate to onsets, rather than vice versa.
     c. Affixes assimilate to stems and roots, rather than vice versa.

Many phonologists have analyzed these asymmetries in structural terms. Itô (1998), for example, proposes that onsets "license" place features, and that in many languages codas can only acquire place features by sharing them with an onset consonant, thus forcing assimilation. Lombardi (1999) proposes a similar argument for laryngeal features. Beckman (1998) extends the positional analysis in proposing a theory of "positional faithfulness": certain structural positions, including onset of a syllable or word, are privileged, and changes to these privileged positions are dispreferred. Stems and roots are privileged over affixes, thus affixes tend to assimilate to stems rather than vice versa. Hyman (2008) proposes a structural account of directional asymmetries in a number of Bantu languages.

Other linguists, however, argue that asymmetries in direction of assimilation can be explained by asymmetries in perceptibility, without reference to structural positions. Steriade (2001), for example, emphasizes that consonantal place of articulation is most clearly cued by the formant transitions and burst noise that occur when a closure is released into a vowel. Steriade argues that codas most often assimilate to onsets because the phonological features of a postvocalic stop are less clearly perceived than the features of a prevocalic stop, and thus a change to the coda consonant is less obvious. In cases where a particular distinction is better cued in coda position, the direction of assimilation is reversed: there is perseverative assimilation of retroflexion in Sanskrit and Murinbata consonant clusters ((19) above), because retroflexion is best cued by formant transitions on the preceding vowel.

Similar arguments from perception can be applied to explain why nasals and coronals so often undergo assimilation. Nasals may be especially prone to assimilate, because nasal resonances interfere with the formant information that conveys place of articulation. Coronals may more frequently assimilate because cues to coronal place of articulation are weakest, and may be overwhelmed by the stronger cues from a following stop at a different place (Kawasaki 1982; Byrd 1992). Cho and McQueen (2008) and Sohn (2008) offer perceptual accounts of Korean place assimilation. See also Paradis and Prunet (1991) and Hume (2003) for arguments respectively for and against general coronal unmarkedness, and chapter 12: coronals.

Integral to the discussion of perception in local assimilation is the role of misperception. A speaker may produce a word or phrase in a way that is faithful to the lexical representation, but if perceptual cues to a particular contrast in a particular position are weak or non-existent, a listener may perceive something different. That is, a speaker may say [np], but the listener may hear [mp]. If the listener assumes [mp] was the intended pronunciation of /np/, the listener may postulate a phonological alternation. For further discussion see Ohala (1981), Hume and Johnson (2001), and chapter 98: speech perception and phonology.

3.2 Assimilation and co-articulation

Processes of local assimilation are "natural," in the sense the word is used in the theory of Natural Phonology (Donegan and Stampe 1979); that is, the phonetic motivation for such processes is clear, and the motivation works in the direction of making speaking easier. While concepts like "ease of articulation" and "articulatory effort" are difficult to quantify (see Lindblom 1983; Kirchner 1998, 2000), local assimilation has an obvious phonetic basis in co-articulation.

The term co-articulation describes the influence segments have on one another simply by being adjacent, apart from any featural change. Because articulators cannot change position instantly, there is necessarily either some anticipatory or perseverative effect, if not both, on neighboring segments, as articulators move from one target to the next. Two examples illustrate the point. If the velum is to be fully open by the time a consonant closure is achieved, then opening must begin during the preceding vowel, resulting in some inevitable nasal resonance during the vocalic portion. If the tongue body is to reach its target vowel position by the time the onset consonant in a CV syllable is released, articulation of vowel and consonant must begin simultaneously. Thus a [k] is made further forward in the mouth when it precedes a front vowel. Some articulatory overlap is inevitable, but degree and direction of co-articulation will differ from language to language.

Given that language-specific patterns of co-articulation must be learned as part of the grammar, some linguists have argued that there is no need to state independent phonological rules of nasalization, rounding, palatalization, or place assimilation. In particular, the theory of Articulatory Phonology (Browman and Goldstein 1992) argues that all productive phonological changes can be accounted for in terms of differences in articulatory organization, particularly gestural overlap and reduction, without invoking any phonological feature change (see chapter 5: the atoms of phonological representation).

Browman and Goldstein (1990), using X-ray microbeam data, show that a coronal closing gesture is still present in English phrases which sound as though a coronal nasal had become labial: for example in the phrase seven plus heard as assimilated se[vmp]lus. They argue that the [n] is not deleted or changed from [coronal] to [labial], but is overlapped by the following [p], according to the general pattern of consonant coordination at word boundaries in English. The [n] and [p] articulated together sound like [m] (see also Byrd 1992). Browman and Goldstein further argue that place assimilation in tenth ((18) above) is also the result of overlap and blending: the tongue tip cannot be both dental and alveolar at the same time, so a compromise blended position is reached. Zsiga (1995) argues for an overlap account of palatalization at word boundaries in English. The phrase this year may sound like thish year, but data from electropalatography shows that the word-final fricative is not identical to an underlying [ʃ]. Rather, it is the acoustic result of an [s] and [j] articulated at the same time, with tongue tip and blade gestures blended together. Some proponents of Articulatory Phonology incorporate gestural dynamics into constraint-based theory (Gafos 2002; Bradley 2007).

It is not clear, however, whether all local assimilations are best described in terms of gestural overlap.
One distinction that is often made is that categorical phonological alternations should be represented as the result of a change in featural specification, while partial and gradient changes are attributed to gestural overlap (see chapter 89: gradience and categoricality in phonological theory). Thus Cohn (1993), for example, identifies two different kinds of nasalization in English and Sundanese. Using nasal and oral airflow data, Cohn demonstrates that nasalization of a vowel in English is partial and gradient, due to co-articulation with the opening velum, and very much dependent on timing and context. In contrast, nasalization in Sundanese is categorical: a nasalized vowel must be specified with its own featural target. In a similar vein, Zsiga (1995) argues that palatalization at word boundaries in English is the gradient result of overlap, while palatalization at morpheme boundaries ((21a) above) is the categorical result of a featural change. Ladd and Scobbie (2003: 16) provide data that vowel assimilation at word boundaries in Sardinian is categorical, and conclude that

    gestural overlap is on the whole not a suitable model of most of the assimilatory external sandhi phenomena in Sardinian, and more generally that accounts of gestural overlap in some cases of English external sandhi cannot be carried over into all aspects of post-lexical phonology.

Other researchers, however, follow Browman (1995) in arguing that apparently categorical deletions and assimilations are just the endpoints of a gradient distribution: deletion being the limiting case of reduction and categorical assimilation the limiting case of overlap. Thus Kochetov and Pouplier (2008), for example, describe the categorical change of /pk/ → [kk] in Korean ((16) above), in which they show the assimilated sequence to be identical to an underlying /kk/ cluster, as full reduction of the lip closing gesture and temporal extension of the velar closing gesture. One crucial question is whether there is a theory of gestural timing and organization that is both powerful enough to account for gradient changes, and constrained enough to account for changes that result in category neutralization (see the discussion in Zsiga 1997; Ladd and Scobbie 2003; Scobbie 2007). Another challenge lies in integrating articulatory and perceptual approaches. Further discussion of co-articulation and gestural overlap can be found in chapter 89: gradience and categoricality in phonological theory.

3.3 Formalizing local assimilation

Local assimilation has played an important role in the development of phonological formalism. McCarthy (1988: 84) states: "The goal of phonology is the construction of a theory in which cross-linguistically common and well-established processes emerge from very simple combinations of the descriptive parameters of the model." He further argues that the ubiquitous presence of assimilation, both local and long-distance, warrants assigning it a "privileged status" in phonological formalism (1988: 86). Despite its clear phonetic bases, the process of assimilation has not necessarily been simple to capture in phonological representation.

In the formal theory of Chomsky and Halle (1968), processes of assimilation were expressed with the use of alpha notation. In this formalism, Greek letters stand for variables over "+" and "−", and every instance of the variable in a rule must be filled in with the same value. Thus, a rule of obstruent voicing agreement, as would be needed for example in Yiddish (4), would be written as in (29).

(29) Obstruents agree in voicing: α-notation
     [−sonorant] → [αvoice] / __ [−sonorant, αvoice]

While the use of a special notation does convey the privileged status of the notion of "agreement," non-occurring rules can also be easily represented, with no increase in formal complexity.

(30) Obstruent voicing must match the value for [±back]
     [−sonorant] → [αvoice] / __ [−sonorant, αback]
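The semantics of the α variable in (29), one value bound once and filled in consistently across the rule, can be made concrete in a few lines of Python. This is a hypothetical rendering for illustration; the dictionary-based feature representation is an assumption, not part of the SPE formalism:

    # Sketch of rule (29): [-son] -> [avoice] / __ [-son, avoice].
    # Alpha is bound to the trigger's value for [voice], and that same
    # value is written onto the target.
    def apply_agree(segments, feature='voice'):
        """Anticipatory agreement: each [-son] segment copies [feature]
        from an immediately following [-son] segment."""
        out = [dict(s) for s in segments]
        # right to left, so agreement propagates through a whole cluster
        for i in range(len(out) - 2, -1, -1):
            target, trigger = out[i], out[i + 1]
            if not target['son'] and not trigger['son']:
                alpha = trigger[feature]   # bind alpha to the trigger's value
                target[feature] = alpha    # the same alpha fills in the target
        return out

    k = {'son': False, 'voice': False}
    b = {'son': False, 'voice': True}
    print(apply_agree([k, b]))   # first segment surfaces [+voice], i.e. [g]

Note that nothing in this mechanism cares which feature is being matched: substituting 'back' for 'voice' on the trigger side yields the unattested rule (30) at no extra cost, which is exactly the overgeneration problem discussed below.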

Thus, as pointed out, for example, by Bach (1968) and Anderson (1985), this rule formalism is too powerful, in that it predicts that rules (29) and (30), being equal in complexity, should be equally likely to occur. On the other hand, the common and straightforward process of nasal place assimilation (§2.4.1 above) is represented via a complicated formula (31): (31)

(31) Catalan nasal assimilation using alpha notation

     [+nasal] → [αcoronal, βanterior, γlabial, δback, εhigh, ζdistributed]
                / __ [αcoronal, βanterior, γlabial, δback, εhigh, ζdistributed]

It was the study of long-distance assimilation – tone and vowel harmony – that led to the introduction of autosegmental phonology (Goldsmith 1976; Clements and Sezer 1982), but this formalism was quickly adopted for local assimilations as well. In autosegmental representation, assimilation is represented by “feature spreading” through the addition of an “association line”: a feature that begins as a property of one segment comes to be associated with more than one, as in the anticipatory voicing assimilation in (32): (32)

(32) Obstruents agree in voicing: autosegmental notation

         [−son]   [−son]
             ⋮       |
             [+voice]

     (The solid line is an existing association; the dotted line is the new association added by spreading [+voice] leftward from the rightmost obstruent.)
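In computational terms, an association line can be modeled as shared reference to a single feature token, so that spreading adds a link rather than copying a value. A minimal Python sketch, with invented class names chosen purely for illustration:

    # Sketch of (32): spreading = adding an association line, so that two
    # root nodes come to share one and the same [voice] autosegment.
    class Autosegment:
        def __init__(self, value):
            self.value = value         # e.g. '+voice'

    class Root:
        def __init__(self, label, voice):
            self.label = label
            self.voice = voice         # association to a [voice] autosegment

    def spread_voice(left, right):
        """Anticipatory spreading: relink the left root to the right root's
        [voice] autosegment (one feature token, two associations)."""
        left.voice = right.voice

    plus_voice = Autosegment('+voice')
    minus_voice = Autosegment('-voice')
    c1 = Root('k', minus_voice)
    c2 = Root('b', plus_voice)
    spread_voice(c1, c2)
    print(c1.voice is c2.voice)        # True: a single shared autosegment
    print(c1.voice.value)              # '+voice'

Sharing one token rather than copying a value is what makes assimilation an elementary operation in this representation, while an arbitrary feature switch would require delinking and inserting new material.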

Feature spreading gives assimilation a privileged status as an elementary operation, while more complicated feature switches have a correspondingly more complicated representation. The addition of class nodes in a more elaborated feature geometry allows for a simple representation of rules that target a group of features. As noted by Clements (1985: 226),

    If we find that certain sets of features consistently behave as a unit with respect to certain types of rules of assimilation or resequencing, we have good reason to suppose that they constitute a unit in phonological representation.

Local place assimilation is the prime example of a set of features that behave as a unit. McCarthy (1988: 86–87) states: “The basic motivation for feature geometry [is] the naturalness of place assimilation.”

Elizabeth C. Zsiga

20

Consensus has not been reached, however, on exactly which geometry is correct. The need is clear for a class node grouping consonantal place features to account for assimilations such as that in Catalan (15a), one grouping laryngeal features to account for assimilation of voicing and aspiration together as in Greek (9) and Sanskrit (10), and a root node grouping all features for complete assimilation as in Ponapean (24a) or Arabic (25a). Less clear is the need for a supralaryngeal node that groups all features except the laryngeal features. Cases like those in Havana Spanish (25b) and Kannada (26), where all features except voice assimilate, would argue for such a node (see Clements 1985); however, McCarthy (1988: 92) counters that spreading of the supralaryngeal node “is known from only one or two examples that are subject to reanalysis.” Other points of contention include where to attach manner features (Padgett 1994), how to represent the class of guttural consonants (McCarthy 1994), and, probably most difficult, how to handle vowel and consonant interactions and lack of interaction. Clements and Hume (1995; see also Clements 1993; Hume 1994) suggest separate Place nodes for C-place and V-place: different patterns of interaction and transparency will depend on which nodes are targeted for assimilation. For extended further discussion, see chapter 13: the stricture features; chapter 14: autosegments; chapter 19: vowel place; chapter 75: consonant–vowel place feature interactions; chapter 22: consonantal place of articulation; chapter 25: pharyngeals; chapter 27: the organization of features. Constraint-based theories (Prince and Smolensky 1993) offer a different way of formalizing assimilations. Although autosegmental representation is generally assumed, the details of feature-geometrical representations become less crucial. One way of representing local assimilation is through the mechanism of Agree constraints: markedness constraints that state that two adjacent segments must agree with respect to the specified feature. These markedness constraints interact with constraints requiring faithfulness to underlying features, with languagespecific rankings producing different patterns of assimilation (see chapter 63: markedness and faithfulness constraints). Thus Lombardi (1999) proposes the constraints in (33) to account for voicing assimilation in Yiddish ((4) above). The positional faithfulness constraint (33c) is needed to account for the fact that the coda assimilates to the onset and not vice versa. (33)

Constraints on obstruent voicing agreement (Lombardi 1999) a. b.

c.

Agree Obstruent clusters should agree in voicing. Ident(Laryngeal) Consonants should be faithful to underlying laryngeal specification. Ident-Onset(Laryngeal) Consonants in [pre-sonorant position] should be faithful to underlying laryngeal specification.

If these are ranked such that the agreement constraint and the positional faithfulness constraint outrank general faithfulness, as in (34), the result is that the coda will assimilate in voicing to the onset.

Local Assimilation (34)

21

Voicing assimilation in Yiddish (Lombardi 1999) /bak bejn/ Agree Ident-Onset(Lar) Ident(Lar) *!

a. bak.bejn ☞ b. bag.bejn

*

c. bak.pejn

*!

*

Steriade (2001) treats place assimilation with parallel formalism, but substitutes positional faithfulness constraints that reference differences in perceptibility rather than syllable structure (see §3.1 above). (35)

Place assimilation with perceptibility constraints (Steriade 2001) /at pa/ Agree Ident(Place)/C_V Ident(Place)/V_C a. atpa

*!

☞ b. appa

*

c. atta

*!

*

Place assimilation is also often handled with reference to positional markedness as well as positional faithfulness (Kager 1999). In this approach, assimilation is not driven by a constraint requiring agreement. Rather, the markedness constraint that forces the alternation is based on Itô’s (1998) insight that codas may not license place features alone. Direct reference to a “coda condition” captures the insight that assimilation to the place of an adjacent onset consonant is just one way to repair the coda violation; epenthesis and deletion, which change the syllable structure rather than featural content, are others. The use of different constraints for place assimilation and voice assimilation captures the generalization that, crosslinguistically, epenthesis and deletion often occur to repair clusters that do not match in place, but they do not occur to repair clusters that do not match in voicing (see Bakovio 2000; Lombardi 2001). The account of nasal place assimilation in Spanish [tampoko] ‘neither’ in (36) and (37) is adapted from Shepherd (2003). (36)

Coda condition A coda cannot license place features.

(37)

Nasal place assimilation in Spanish /taN.po.ko/ CodaCond Ident-Onset(Place) Ident(Place) a. tan.po.ko

*!

☞ b. tam.po.ko c. tan.to.ko

* *!

*

Note that in the tableaux above, there is no specific reference to feature geometry or a Place node. In keeping with a general move away from solutions based in representations and rules, the sets of features targeted for assimilation are defined within the content of the constraints, not in terms of a universal hierarchical structure

22

Elizabeth C. Zsiga

that must be made to work for all cases. Padgett (1995) specifically argues against a Place node in feature geometry, proposing instead that constraints that target defined sets of features better account for partial place assimilations. In conclusion, it may be said that questions of representation encapsulate the debates that continue over the linguistic nature of local assimilation. Phonologists are working toward finding the representation that will capture crucial crosslinguistic generalizations about assimilation in the simplest and most straightforward form, while accounting for the details of individual datasets. Debates continue over defining the features and feature classes that are active in assimilation, and whether the definition of classes should be representational or set-theoretic. It remains a question whether structural or perceptual approaches to directional asymmetries best account for the range of cross-linguistic data. Another important question is whether assimilation is featural at all: should local assimilation be defined in terms of manipulation of phonological features, in terms of articulatory organization, or in some other way? Accounting for both gradience and variability on the one hand and systematic category change on the other continues to be a challenge. Finally, theories of the phonology–morphology interface, the phonetics– phonology interface, and, most generally, theories of the overall structure and architecture of the phonological grammar continue to reference processes of local assimilation. Certainly local assimilation, the most common phonological alternation, will continue to play a central role in phonological theorizing.

ACKNOWLEDGMENT The outline of the argument in this chapter follows that of the briefer treatment of the same subject in Zsiga (2006).

REFERENCES Anderson, Stephen R. 1985. Phonology in the twentieth century: Theories of rules and theories of representations. Chicago: University of Chicago Press. Armstrong, Lilias E. 1967. The phonetic and tonal structure of Kikuyu. London: Dawsons of Pall Mall for the International African Institute. Bach, Emmon. 1968. Two proposals concerning the simplicity metric in phonology. Glossa 2. 128–149. Bakovio, Eric. 2000. Nasal place neutralization in Spanish. In Michelle Minnick Fox, Alexander Williams & Elsi Kaiser (eds.) Proceedings of the 24th Annual Penn Linguistics Colloquium, 1–12. Philadelphia: University of Pennsylvania. Beckman, Jill N. 1998. Positional faithfulness. Ph.D. dissertation, University of Massachusetts, Amherst. Bessell, Nicola J. 1992. Towards a phonetic and phonological typology of post-velar articulations. Ph.D. dissertation, University of British Columbia. Bessell, Nicola J. 1998. Local and non-local consonant–vowel interaction in Interior Salish. Phonology 15. 1– 40. Borowsky, Toni. 2000. Word faithfulness and the direction of assimilations. The Linguistic Review 17. 1–28. Bradley, Travis G. 2007. Morphological derived-environment effect in gestural coordination: A case study of Norwegian clusters. Lingua 117. 950–985.

Local Assimilation

23

Bright, William. 1957. The Karok language. Berkeley & Los Angeles: University of California Press. Browman, Catherine P. 1995. Assimilation as gestural overlap: Comments on Holst and Nolan. In Connell & Arvaniti (1995), 334–342. Browman, Catherine P. & Louis Goldstein. 1988. Some notes on syllable structure in articulatory phonology. Phonetica 45. 140 –155. Browman, Catherine P. & Louis Goldstein. 1990. Tiers in articulatory phonology, with some implications for casual speech. In John Kingston & Mary E. Beckman (eds.) Papers in laboratory phonology I: Between the grammar and physics of speech, 341–376. Cambridge: Cambridge University Press. Browman, Catherine P. & Louis Goldstein. 1992. Articulatory phonology: An overview. Phonetica 49. 155 –180. Byrd, Dani. 1992. Perception of assimilation in consonant clusters: A gestural model. Phonetica 49. 1–24. Calabrese, Andrea & Samuel J. Keyser. 2006. On the peripatetic behavior of aspiration in Sanskrit roots. In Eric Bakovio, Junko Ito & John J. McCarthy (eds.) Wondering at the natural fecundity of things: Essays in honor of Alan Prince. Linguistic Research Center, University of California, Santa Cruz. Available (June 2010) at http://escholarship.org/ uc/item/96k332nm. Casali, Roderic F. 1996. Resolving hiatus. Ph.D. dissertation, University of California, Los Angeles. Casali, Roderic F. 1997. Vowel elision in hiatus contexts: Which vowel goes? Language 73. 493–533. Chen, Su-I. 1996. A theory of palatalization and segment implementation. Ph.D. dissertation, New York State University, Stony Brook. Cho, Taehong & James McQueen. 2008. Not all sounds in assimilation environments are perceived equally: Evidence from Korean. Journal of Phonetics 36. 239 –249. Cho, Young-mee Yu. 1999. Parameters of consonantal assimilation. Munich: Lincom Europa. Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row. Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225–252. Clements, G. N. 1993. Lieu d’articulation des consonnes de des voyelles: Une théorie unifiée. In Bernard Laks & Annie Rialland (eds.) Architecture des répresentations phonologiques, 101–145. Paris: CNRS. Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In John A. Goldsmith (ed.) The handbook of phonological theory, 245 –306. Cambridge, MA & Oxford: Blackwell. Clements, G. N. & Engin Sezer. 1982. Vowel and consonant disharmony in Turkish. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 213–255. Dordrecht: Foris. Cohn, Abigail C. 1993. Nasalisation in English: Phonology or phonetics. Phonology 10. 43–81. Connell, Bruce & Amalia Arvaniti (eds.) 1995. Papers in laboratory phonology IV: Phonology and phonetic evidence. Cambridge: Cambridge University Press. Cowell, Mark W. 1964. A reference grammar of Syrian Arabic (based on the dialect of Damascus). Washington, DC: Georgetown University Press. Davis, Stuart & Seung-Hoon Shin. 1999. The syllable contact constraint in Korean: An optimality-theoretic analysis. Journal of East Asian Linguistics 8. 285–312. Doke, Clement M. 1926. The phonetics of the Zulu language. Johannesburg: University of the Witwatersrand Press. Donegan, Patricia J. & David Stampe. 1979. The study of Natural Phonology. In Daniel A. Dinnsen (ed.) Current approaches to phonological theory, 126–173. Bloomington: Indiana University Press.

24

Elizabeth C. Zsiga

Gafos, Adamantios I. 2002. A grammar of gestural coordination. Natural Language and Linguistic Theory 20. 269 –337. Goldsmith, John A. 1976. Autosegmental phonology. Ph.D. dissertation, MIT. Gurevich, Naomi. 2003. Functional constraints on phonetically-conditioned sound changes. Ph.D. dissertation, University of Illinois at Urbana-Champaign. Halle, Morris. 1959. The sound pattern of Russian: A linguistic and acoustical investigation. The Hague: Mouton. Harris, James W. 1985. Autosegmental phonology and liquid assimilation in Havana Spanish. In Larry D. King & Catherine A. Maley (eds.) Selected papers from the 13th Linguistic Symposium on Romance Languages, 127–148. Amsterdam: John Benjamins. Hayes, Bruce. 1984. The phonetics and phonology of Russian voicing assimilation. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 318–328. Cambridge, MA: MIT Press. Herzallah, Rukayyah. 1990. Aspects of Palestinian Arabic phonology: A non-linear approach. Ph.D. dissertation, Cornell University. Honorof, Douglas. 1999. Articulatory gestures and Spanish nasal assimilation. Ph.D. dissertation, Yale University. Howe, Darin. 2000. Oowekyala segmental phonology. Ph.D. dissertation, University of British Columbia. Hume, Elizabeth. 1994. Front vowels, coronal consonants and their interaction in nonlinear phonology. New York: Garland. Hume, Elizabeth. 2003. Language specific markedness: The case of place of articulation. Studies in Phonetics, Phonology and Morphology 9. 295 –310. Hume, Elizabeth & Keith Johnson (eds.) 2001. The role of speech perception in phonology. San Diego: Academic Press. Hume, Elizabeth & David Odden. 1996. Reconsidering [consonantal]. Phonology 13. 345–376. Hume, Elizabeth & Georgios Tserdanelis. 2002. Labial unmarkedness in Sri Lankan Portuguese Creole. Phonology 19. 441–458. Hyman, Larry M. 2008. Directional asymmetries in the morphology and phonology of words, with special reference to Bantu. Linguistics 46. 309 –350. Itô, Junko. 1998. Syllable theory in prosodic phonology. New York: Garland. Itô, Junko & Armin Mester. 1986. The phonology of voicing in Japanese: Theoretical consequences for morphological accessibility. Linguistic Inquiry 17. 49 –73. Jakobson, Roman. 1978. Mutual assimilation of Russian voiced and voiceless consonants. Studia Linguistica 32. 107–110. Jun, Sun-Ah. 1995. Asymmetrical prosodic effects on the laryngeal gesture in Korean. In Connell & Arvaniti (1995), 235–253. Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press. Kaisse, Ellen M. 1992. Can [consonantal] spread? Language 68. 313–332. Katz, David. 1987. A grammar of the Yiddish language. London: Duckworth. Kawasaki, Haruko. 1982. An acoustical basis for universal constraints on sound sequences. Ph.D. dissertation, University of California, Berkeley. Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell. Kim-Renaud, Young-Key. 1991. Korean consonantal phonology. Seoul: Hanshin. Kiparsky, Paul. 1985. Some consequences of Lexical Phonology. Phonology Yearbook 2. 85–138. Kirchner, Robert. 1998. An effort-based approach to consonant lenition. Ph.D. dissertation, University of California, Los Angeles. Kirchner, Robert. 2000. Geminate inalterability and lenition. Language 76. 509 –545. Kochetov, Alexei & Marianne Pouplier. 2008. Phonetic variability and grammatical knowledge: An articulatory study of Korean place assimilation. Phonology 25. 399 –431.

Local Assimilation

25

Ladd, D. Robert & James M. Scobbie. 2003. External sandhi as gestural overlap? Counter-evidence from Sardinian. In John Local, Richard Ogden & Rosalind Temple (eds.) Papers in laboratory phonology VI, 162–180. Cambridge: Cambridge University Press. Lavoie, Lisa. 2001. Consonant strength: Phonological patterns and phonetic manifestations. Ph.D. dissertation, Cornell University. Lewis, Geoffrey L. 1967. Turkish grammar. Oxford: Oxford University Press. Lindblom, Björn. 1983. Economy of speech gestures. In Peter F. MacNeilage (ed.) The production of speech, 217–245. New York: Springer. Lombardi, Linda. 1999. Positional faithfulness and voicing assimilation in Optimality Theory. Natural Language and Linguistic Theory 17. 267–302. Lombardi, Linda. 2001. Why Place and Voice are different: Constraint-specific alternations in Optimality Theory. In Linda Lombardi (ed.) Segmental phonology in Optimality Theory: Constraints and representations, 13–45. Cambridge: Cambridge University Press. Maddieson, Ian & Peter Ladefoged. 1989. Multiply articulated segments and the feature hierarchy. UCLA Working Papers in Phonetics 72. 116 –138. Mascaró, Joan. 1976. Catalan phonology and the phonological cycle. Ph.D. dissertation, MIT. McCarthy, John J. 1988. Feature geometry and dependency: A review. Phonetica 45. 84–108. McCarthy, John J. 1994. The phonetics and phonology of Semitic pharyngeals. In Patricia Keating (ed.) Phonological structure and phonetic form: Papers in laboratory phonology III, 191–233. Cambridge: Cambridge University Press. Mohanan, K. P. 1993. Fields of attraction in phonology. In John A. Goldsmith (ed.) The last phonological rule: Reflections on constraints and derivations, 61–116. Chicago & London: University of Chicago Press. Navarro Tomás, Tomás. 1970. Manual de pronunciación española. Madrid: Publicaciones de la Revista de Filología Española. Nurse, Derek & Gérard Philippson (eds.) 2003. The Bantu languages. London & New York: Routledge. Ohala, John J. 1981. The listener as a source of sound change. Papers from the Annual Regional Meeting, Chicago Linguistic Society 17(2), 178–203. Orr, Carolyn. 1962. Ecuador Quichua phonology. In Benjamin F. Elson (ed.) Studies in Ecuadorian Indian languages, vol. 1, 60–77. Norman, OK: Summer Institute of Linguistics. Padgett, Jaye. 1994. Stricture and nasal place assimilation. Natural Language and Linguistic Theory 12. 465 –513. Padgett, Jaye. 1995. Partial class behavior and nasal place assimilation. In Keiichiro Suzuki & Dirk Elzinga (eds.) Proceedings of the 1995 Southwestern Workshop on Optimality Theory (SWOT), 145–183. Tucson: Department of Linguistics, University of Arizona. Reprinted 2004 in John J. McCarthy (ed.) Optimality Theory: A reader in phonology, 379–393. Oxford: Blackwell. Padgett, Jaye. 2002. Russian voicing assimilation, final devoicing, and the problem of [v]. Unpublished ms., University of California, Santa Cruz (ROA-528). Paradis, Carole & Jean-François Prunet (eds.) 1991. The special status of coronals: Internal and external evidence. San Diego: Academic Press. Pater, Joe. 1999. Austronesian nasal substitution and other NY effects. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 310–343. Cambridge: Cambridge University Press. Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell. Rehg, Kenneth L. & Damian G. Sohl. 1981. 
Ponapean reference grammar. Honolulu: University of Hawai’i Press. Rice, Keren. 1993. A re-examination of the feature [sonorant]: The status of “sonorant obstruents.” Language 69. 308–344.

26

Elizabeth C. Zsiga

Roca, Iggy & Wyn Johnson. 1999. Workbook in phonology. Malden, MA & Oxford: WileyBlackwell. Rose, Sharon. 1996. Variable laryngeals and vowel lowering. Phonology 13. 73–117. Scobbie, James M. 2007. Interface and overlap in phonetics and phonology. In Gillian Ramchand & Charles Reiss (eds.) The Oxford handbook of linguistic interfaces, 17–52. Oxford: Oxford University Press. Shepherd, Michael. 2003. Constraint interactions in Spanish phonotactics. M.A. thesis, Cal State Northridge (ROA-639). Silva, David J. 1992. The phonetics and phonology of stop lenition in Korean. Ph.D. dissertation, Cornell University. Smyth, Herbert Weir. 1920. Greek grammar. Cambridge, MA: Harvard University Press. Sohn, Hyang-Sook. 2008. Phonological contrast and coda saliency of sonorant assimilation in Korean. Journal of East Asian Linguistics 17. 33–59. Steriade, Donca. 2001. Directional asymmetries in place assimilation: A perceptual account. In Hume & Johnson (2001), 219 –250. Street, Chester S. & Gregory P. Mollinjin. 1981. The phonology of Murinbata. In Bruce Waters (ed.) Australian phonologies: Collected papers, 183–244. Darwin: Summer Institute of Linguistics. Szpyra, Jolanta. 1989. The phonology–morphology interface: Cycles, levels and words. London & New York: Routledge. Tlale, One. 2006. The phonetics and phonology of Sengwato, a dialect of Setswana. Ph.D. dissertation, Georgetown University. Tsuchida, Ayako. 1997. Phonetics and phonology of vowel devoicing. Ph.D. dissertation, Cornell University. Vance, Timothy J. 1987. An introduction to Japanese phonology. Albany: State University of New York Press. Villafana, Christina. 2006. Consonant weakening in Florentine Italian: An acoustic study of gradient and variable sound change. Ph.D. dissertation, Georgetown University. Wackernagel, J. 1896. Altindische Grammatik, vol 1. Göttingen: Vandenhoeck & Ruprecht. Welmers, William E. 1973. African language structures. Berkeley: University of California Press. Wetzels, W. Leo & Joan Mascaró. 2001. The typology of voicing and devoicing. Language 77. 207–244. Whitney, William Dwight. 1889. Sanskrit grammar, including both the classical language, and the older dialects of Veda and Brahmana. 2nd edn, Cambridge, MA: Harvard University Press. Wonderly, William L. 1946. Phonemic acculturation in Zoque. International Journal of American Linguistics 12. 92–95. Zsiga, Elizabeth C. 1995. An acoustic and electropalatographic study of lexical and postlexical palatalization in American English. In Connell & Arvaniti (1995), 282–302. Zsiga, Elizabeth C. 1997. Features, gestures, and Igbo vowels: An approach to the phonology–phonetics interface. Language 73. 227–274. Zsiga, Elizabeth C. 2006. Assimilation. In Keith Brown (ed.) Encyclopedia of language and linguistics, 2nd edn., vol. 1, 553–558. Oxford: Elsevier.

82

Featural Affixes Akinbiyi Akinlabi

1

Characteristics of featural affixes

Featural affixes are phonological features that function as grammatical morphemes. The most commonly found cases are tonal (Akinlabi 1996). An example is the associative marker in Bini (Amayo 1976), exemplified in (1). (The forms before the arrow indicate the isolation forms of the nouns and the forms after the arrow are associative constructions. For clarity, the tones in the examples in (1) are indicated with both tone marks and the letters L, H for Low, High respectively. ↓ indicates a downstepped tone on the following vowel.) (1)

Bini (Amayo 1976)

L L leg

LL chimpanzee

L HLL

‘a chimpanzee’s leg’

L L water

L H pepper

L HL H

‘solution of water and pepper’

L L leg

LL this one

L HL L

‘this one’s leg’

However, several cases of non-tonal features functioning as grammatical morphemes have also been described in the literature. A representative list is given in (2).1 1

See the references cited here for additional examples. Reviewers have pointed out a number of other examples which might have been included here. Two of them are: (a) in Coatzospan, the 2nd person familiar is marked by nasality (Gerfen 1999: 127), and (b) in Shuswap, glottalization is a floating feature (Kuipers 1974; Idsardi 1992). The list in (2) is not intended to be exhaustive. The Blackwell Companion to Phonology. Edited by Marc van Oostendorp, Colin J. Ewen, Elizabeth Hume, and Keren Rice. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd. DOI: 10.1002/9781444335262.wbctp0082

Akinbiyi Akinlabi

2 (2)

Non-tonal examples of featural morphemes a.

In Chaha, the 3rd masculine object is indicated by labialization. (Johnson 1975; McCarthy 1983; Hendricks 1989; Archangeli and Pulleyblank 1994; Rose 1994, 2007) b. Nuer indicates tense/aspect distinctions with the features [continuant] and [voice]. (Crazzolara 1933; Lieber 1987; Frank 1999) c. In Zoque, the 3rd person singular is marked by palatalization. (Wonderly 1951) d. [nasal] is the 1st person possessive marker in Terena. (Bendor-Samuel 1960, 1966) e. The feature of “uncontrolledness” is signaled by palatalization in Japanese. (Hamano 1986; Mester and Itô 1989; Archangeli and Pulleyblank 1994; Alderete and Kochetov 2009) f. Noun class 5 is marked by voicing the first consonant of the root in Aka (Bantu, Zone C). (Kosseke and Sitamon 1993; Roberts 1994) g. Noun class morphemes in Fula include the features [continuant] and [nasal]. (Arnott 1970; Lieber 1984, 1987) h. The Athapaskan D-classifier consists solely of the feature [−continuant]. (Rice 1987) i. In Seereer Siin, an Atlantic (Niger Congo) language, consonant mutation (involving the features [voice] and [continuant]) constitute all or part of the noun class prefix in nouns and dependent adjectives, and number in verbs. (Mc Laughlin 2000, 2005) j. In Mafa, a central Chadic language of Cameroon, imperfectives of verbs ending in a consonant are formed with a palatal featural affix. (Ettlinger 2003, 2004) The features in (2), like segmental morphemes, often refer to specific edges of stems, and thus are featural affixes (e.g. Chaha labialization and palatalization, Aka voicing, Zoque palatalization). While the fact that phonological features may function as grammatical morphemes is uncontroversial, the status of such features as prefixes or suffixes often remained muted in spite of traditional intuition, with some scholars contented with referring to the morphemes simply as “floating autosegments.”2 The reason why the status of featural affixes as prefixes or suffixes is often problematic is that, while segmental affixes may be phonetically realized independently, featural affixes are always phonetically realized as part of some other segment or segments of the stem. The question therefore is why featural affixes get realized as part of the stem. The answer to this is that features have to be “licensed” (i.e. their occurrences have to be sanctioned) in order to get phonetically realized, therefore featural affixes must associate with a licensor in the stem or elsewhere. 2

Most studies on tone are exceptions to this generalization (see Clements and Goldsmith 1984; Pulleyblank 1986; Anderson 1991; van der Hulst and Snider 1993).

Featural Affixes

3

In this chapter I am assuming a feature geometry in which all segments have a root node, which “gathers” the features into one unit (chapter 27: the organization of features). In addition, I assume that vowels (and all syllable peaks, including syllabic nasals) are dominated by a mora (chapter 33: syllableinternal structure). Finally, I assume that class nodes, such as those for place of articulation, are monovalent. However, terminal features, such as aperture features, are bivalent. Since this chapter has a constraint-based, optimalitytheoretic bias, I will not be assuming underspecification here (chapter 7: feature specification and underspecification). Universally, feature licensors can (only) be either a mora or a root node (Itô 1989; Itô and Mester 1993; etc.). Therefore, while edges in tones refer to the initial or final mora, edges in nasal harmony and the like may refer to the first or last root node; i.e. a real morphological edge, since the last licensor also coincides with the last segment of the morpheme (see Archangeli and Pulleyblank 1994).3 But, with featural affixes, an edge does not necessarily mean a morphological edge; an edge is defined for a feature on the basis of a possible licensor in a language. Another characteristic of featural affixes, as distinct from segmental affixes, is their domain. While most segmental affixes occur at the beginning, middle, or end of a base, featural affixes often occur throughout the base, or span it. Features that commonly have this characteristic are the “prosodic” features, in the Firthian sense of the word. As is well known, such features may include pitch, nasality, roundness, palatalization, and the like (see Firth 1948). Since these are the featural spell-out (or content) of the morphological categories in question, they are featural affixes. In their study of alignment in (regular) segmental affixation, McCarthy and Prince (1993b: 103) observe that an alignment constraint, such as one that aligns the left edge of one morpheme with the right edge of another (as in Tagalog -umprefixation) may be violated when dominated by a prosodic constraint, such as one that disallows a coda. This may force a prefix to be realized as an infix. The Tagalog affix -um- “falls as near as possible to the left edge of the stem, so long as it obeys the phonological requirement that its final consonant m not be syllabified as a coda” (McCarthy and Prince 1993b: 79). Therefore, it appears as a prefix before a vowel-initial word: /um + aral/ → [um-aral] ‘teach’, but as an infix when the word is consonant-initial: /um + sulat/ → [s-um-ulat] ‘write’, /um + gradwet/ → [gr-um-adwet] ‘graduate’. A similar characteristic is found in featural affixes. One important distinction from segmental prefixes/suffixes is that featural affixes often behave like “infixes,” because they frequently do not occur at an edge of the stem. A feature may be forced away from an edge when the feature cannot co-occur with another feature(s) of the segment at the edge (see Pulleyblank 1993), leading to 3

It should be noted that the accounts in this chapter allow for affixes which involve more than one autosegmental feature, though we do not discuss such cases here. For example, in Mokulu (Eastern Chadic, Chad Republic) the completive aspect marker consists of the features [voice] and [high] (Jungraithmayr 1990; Roberts 1994). The first consonant of the stem becomes voiced while the first vowel becomes high, even if it was a low vowel in the input. In the approach taken here, both features constitute parts of a featural prefix. However, such features may be realized on the same segment in the stem or on different segments, depending on licensing. In the case in question, licensing forces [voice] and [high] on different segments.

4

Akinbiyi Akinlabi

misalignment. A featural suffix may for example be realized elsewhere in the stem, resulting in featural infixation. However, featural affixes occur as “infixes” more often than segmental affixes. Finally, one characteristic that has recently been observed in featural affixation is one in which a grammatical category is marked by a feature which has both segmental and featural allomorphs, as in Mafa (Ettlinger 2003, 2004). In the following sections I illustrate each of the above characteristics of featural affixes. Each case study discussed below has been selected because it illustrates a particular characteristic or characteristics of featural affixes. In the discussion of Chaha (§2.1), I show that a featural suffix [round] is realized as a featural infix, or even as a featural prefix, when the featural suffix is forced away from the edge. The opposite effect is illustrated with Nuer mutation (§2.3). Tonal data from Etsako, an Edoid language, and nasalization data from Terena show situations in which featural morphemes span the entire base of affixation. In the discussions of Terena nasalization and the Etsako tone, I suggest that these are still cases of prefixation and suffixation respectively, but in conjunction with harmony. Therefore there are no special treatments of featural affixes required. Mc Laughlin (2000, 2005) notes that, taking into consideration featural affixes, a morphological category can be expressed in one of three ways: as a segmental affix, as a featural affix, or as a combination of both segmental and featural affixes (chapter 103: phonological sensitivity to morphological structure). In summary, the primary focus in this chapter will be illustrating the characteristics of featural affixes. To do this, I will provide short descriptions of several of the featural affixes listed in (2). The characteristics include (a) marking morphological categories (like segmental affixes), (b) occurring as part of other segments rather than independently, (c) varying between prefixes and suffixes, (d) occurring elsewhere in the stem (because of feature co-occurrence constraints), (e) spanning the entire base of affixation, and (f) varying occurrence as a feature or a segment in the same language. I will argue that these characteristics of featural affixes do not require any new type of morphology, because the same machinery already developed for segmental affixes can handle them as well. I discuss seven case studies in all, divided into four groups. The first group, Chaha and Zoque, illustrates the most basic characteristics of featural affixes mentioned above, that of directionality. Chaha illustrates suffixation and Zoque shows prefixation. The second group, Nuer and Seereer Siin, combines featural affixes with consonant mutation. Nuer is suffixal, and Seereer Siin is prefixal. The third group, Etsako and Terena, shows featural affixes that span the whole stem domain. They illustrate featural affixation combined with “harmony.” Again, Etsako shows the harmony from the right (suffixal), and Terena shows it from the left (prefixal). The fourth group contains only one language, Mafa. Mafa shows a special case of affixation, in that the segment involved is at the same time a segment and a feature. I refer to this as segmental realization of a featural affix.

2

Directionality

The first case studies illustrate the need to consider featural morphemes as either prefixes or suffixes, a property that is formally accounted for by the directional

Featural Affixes

5

component of alignment. In this light, Chaha illustrates prefixation, and Zoque illustrates suffixation.

2.1

Chaha labialization

In Chaha, a Gurage language of Ethiopia, the 3rd person masculine singular object is indicated by labialization (with the suffix /n/) (Johnson 1975; McCarthy 1983; Hendricks 1989; Archangeli and Pulleyblank 1994; Rose 1994, 2007). Labialization surfaces on the “rightmost labializable consonant” of the stem. Labializable consonants in Chaha include labial and dorsal consonants, but not coronal consonants.4 The data in (3) (from McCarthy 1983: 179) show the surface realization of this morpheme. (3)

without object with 3rd masc sg object Rightmost consonant of the stem is labializable dænæg dænægw ‘hit’ w nædæf nædæf ‘sting’ w nækæb nækæb ‘find’ b. Medial consonant of the stem is labializable, final is not nækæs nækwæs ‘bite’ w kæfæt kæf æt ‘open’ w bækær bæk ær ‘lack’ c. Only the leftmost consonant of the stem is labializable qætær qwætær ‘kill’ mæsær mwæsær ‘seem’ mækjær mwækjær ‘burn’ d. No labializable consonant sædæd sædæd ‘chase’

a.

A number of observations are important here. Labialization must be realized only on the rightmost labializable consonant, and on no other. This is obvious from the third example in (3a), /nækæb/ → /nækæbw/. Both of the last two consonants of the verb root in this example are labializable, but only the rootfinal consonant is labialized. The medial consonant is not labialized, because of this requirement of rightmostness. In the forms in (3b), all of the final consonants of the verb roots are coronal, e.g. /nækæs/, therefore only the root-medial consonants, which are either labial or dorsal, are rightmost; and so only these receive the labialization feature. Note further that the initial consonants in the last two examples, /kæfæt/ and /bækær/, are labializable, but again are not labialized, because of the requirement of rightmostness. In (3c) the only labializable consonants of the verb root are the leftmost consonants, /qætær/ → /qwætær/, and so by rightmostness they receive labialization. Finally, in (3d) none of the consonants is labializable and so the feature is not realized. An explanation of the above facts is as follows. Following earlier analyses we assume that the 3rd person masculine singular object marker in Chaha is 4

This statement is from McCarthy. Rose (2007) states the labialization rule as “labialize the rightmost velar or labial consonant, unless already palatalized.” The key point in both definitions is that labialization targets dorsal and labial consonants.

Akinbiyi Akinlabi

6

the feature [round]. It must be a featural suffix, as indicated by the insistence on rightmostness. The 3rd person masculine singular object [round] aligns with (or coincides with; Zoll 1996) the right edge of the stem. In Chaha, [round] may be licensed by any consonantal root node. The position explicitly treats the morpheme as a suffix, but the segmental content is a feature [round], hence what the constraint aligns is the feature [round]. The right edge of the stem has to coincide with the feature [round], the featural content of the affix. Thus the feature [round] seeks out the rightmost consonantal root node in the verb root for licensing, given the discussion of licensing and edges above. As noted in our description of the facts, coronal consonants cannot receive the labialization feature. This means that the feature [round] cannot be articulated with a coronal consonant in Chaha. We can bar this with a feature co-occurrence constraint, which forbids [round] from linking to a root node associated to [coronal]. To conclude, there are several characteristics of featural affixes, which this affix illustrates. First, it marks a morphological category, the 3rd person masculine singular object. Second, the realization is a feature, the feature [round]. Third, it must be realized as part of another segment, a consonant, because it is not a segment. Fourth, like any affix, it has a position. However, like a featural affix it seeks the rightmost dorsal or labial consonant for licensing. Therefore it is a suffix. Fifth, like segmental affixes, it can be pushed from the suffix position. As it is a featural affix, however, co-occurring with other features is what matters. It cannot co-occur with a coronal consonant; therefore it gets pushed more and more inwards until it finds the right consonant to co-occur with. Sixth, if it does not find the right licensor, it simply does not get realized. This is comparable with the null realization of certain segmental morphemes in language, as for example where a segmental affix is not realized for some phonotactic reason. One example is Dutch, which does not have geminate consonants. Here the 3rd person singular ending [-t] is not realized on verbs which end in a coronal plosive.5 (4)

Dutch 3rd person suffix [-t] absent after verb-final [t] a. b. c.

2.2

ik lees hij leest ik zie hij ziet ik eet hij eet

[>k les] [hei lest] [>k zi] [hei zit] [>k et] [hei et]

*[et(]

‘I read’ ‘he reads’ ‘I see’ ‘he sees’ ‘I eat’ ‘he eats’

Zoque palatalization

In this section, I consider the process of morphological palatalization in Zoque (Zoque-Mixe of southern Mexico). Zoque palatalization contrasts with Chaha labialization (§2.1) in some crucial senses. First, while Chaha labialization illustrates a case of long-distance realization of an affix, Zoque palatalization illustrates local realization; i.e. the affix must be realized at the edge, and nowhere else (Akinlabi 1996). Second, Zoque differs from Chaha in the sense that the featural affix is a prefix as opposed to a suffix. 5

I am grateful to Marc van Oostendorp for this example from Dutch.

Featural Affixes

7

Wonderly (1951: 117–118) describes a process of palatalization (chapter 71: palatalization) in Zoque, which marks the 3rd person singular. He represents this morpheme as a prefix [j],6 and treats this process of palatalization as “metathesis” of [j] and the following consonant. A rule-based treatment assuming metathesis is proposed in Dell (1980). The relevant examples are listed in (5), with the morpheme transcribed as [j], following Wonderly.7 My interpretation here is that Wonderly’s [j] is a palatal feature, which I will assume is [−back]. (5)

Zoque 3rd person singular a.

b.

c.

With labial consonants j - pata p j ata j - p j esa p j esa j - buro b j uro j - faha f j aha j - mula m j ula j - wakas w j akas With alveolar consonants j - tatah t j atah j - tih nZ t j ihu j - duratsZhk nZ d j uratsZhku j - tsZhk ts j ahku j - sZk s jZk j - swerte œwerte j - nanah n j anah

[catah] [nZ cihu] [nZ –uratsZhku] [Œahku] [œZk] [œwerte] [Janah]

With palatal consonants (no change) j - Œo?ngoja Œo?ngoja j - œapun œapun

d. With velar consonants j - kama k j ama j - gaju g j aju e.

‘his ‘his ‘his ‘his ‘his ‘his

With laryngeal j - ?atsi j - hajah j - huj

consonants ? j atsi h j ajah h j uju

mat’ room’ burro’ belt’ mule’ cow’

‘his father’ ‘he is arriving’ ‘it is lasting’ ‘he did it’ ‘his beans’ ‘his fortune’ ‘his mother’ ‘his rabbit’ ‘his soap’ ‘his cornfield’ ‘his rooster’ ‘his older brother’ ‘her husband’ ‘he bought it’

All words in Zoque are consonant-initial. The data in (5) show that the 3rd person singular morpheme produces secondary palatalization of the first consonant of the stem if it is labial (5a), velar (5d), or laryngeal (5e); it turns alveolars into 6

Wonderly used the symbol [y]. I have re-transcribed Wonderly’s examples to be as close as possible to the IPA. 7 The transcription here (from Wonderly 1951) is somewhat misleading, because one can be led to believe that the morpheme here is indeed /j-/, and not a feature. However, if this were a full segment as opposed to a feature, it would be completely unnecessary for the segment to seek licensing from another segment. It would also be completely accidental that metathesis is limited to glide–consonant sequences in this language. Note that this cannot be blamed on the sonority rise in an onset (i.e. [jC] → [Cj]), because the so-called metathesis also occurs in a sequence of two glides (which in many accounts are equal in sonority); /j - wakas/ → /wjakas/ ‘his cow’.

8

Akinbiyi Akinlabi

alveopalatals in (5b), and has no phonetic effect on underlying palatals (5c). As Wonderly (1951: 118) puts it, “when y [i.e. /j/] precedes an alveopalatal consonant L, :, the y is lost.” In this analysis we assume that the morpheme is not “lost,” but that it has no phonetic effect if the initial consonant of the stem is palatal. I assume that the 3rd person singular in the above data is the feature [−back] (see Sagey 1986). [−back] is licensed by any root node in Zoque. It is apparently a featural prefix, given its restriction to the first (or leftmost) consonant. The palatalization case in Zoque is completely straightforward. All consonants participate in the palatalization, regardless of place of articulation. For example, labials are not barred from being palatalized, as coronals are barred from being labialized in Chaha. The only set of consonants that require additional comment is the set of palatal consonants, as seen in (5c) (/[−back]-œapun/ → [œapun] ‘his soap’). There are two approaches to this set of consonants. One is to assume that the [−back] 3rd singular morpheme is unparsed when the first segment is underlyingly palatal. The second approach is to assume that [−back] links vacuously to a palatal segment. I adopt the second position here, since linking [−back] to a palatal consonant will not change the consonant’s realization. If palatal consonants are assumed to have underlying tokens of [−back], then linking the morpheme in this case simply implies that the [−back] specification in the surface representation corresponds to two tokens of the same feature in the input. Phonetically, it will be impossible to distinguish one or two tokens of the same feature. In conclusion, Zoque provides evidence for featural affixes which must be realized, and which must be realized at an edge and nowhere else. In Chaha, a co-occurrence constraint forces a featural affix away from the edge. In Zoque, such co-occurrence constraints (which must be universal) have no surface effect. In Chaha, a featural affix may not be realized if none of the segments can license it. In Zoque, the affix can be licensed by all consonants, and so it is always realized.8 In the two case studies of featural affixes discussed above, one is a suffix (Chaha [round] or Labial), and the other is a prefix (Zoque [−back] or Coronal). Both of these involve only features. I now turn to cases in which the affix has both segmental and featural content.

3

Features plus segments: Segment mutations

Systematic alternation in homorganic segment classes that reflect morphological distinction is often called mutation (chapter 65: consonant mutation; chapter 117: celtic mutations). The second group of case studies consists of languages which combine featural affixes with consonant mutation. 8

There are two important issues here. First, there is a technical complication for feature geometry. If [−back] is dependent on some supralaryngeal node, and if laryngeal consonants have no supralaryngeal specification, then what does [−back] dock on? A possible explanation is that the addition of [−back] automatically generates a place node. The second issue is whether palatalized sounds occur outside of the contexts described here. If they do, it will confirm that these are not clusters, but single segments. Wonderly is silent on this question.

Featural Affixes

9

What is interesting about these cases is that languages with consonant mutations often combine both featural and segmental affixation. That is, the featural affix may occur by itself or with additional segments. In this section, we examine two cases, Nuer and Seereer Siin. Nuer is suffixal and Seerer Siin is prefixal.

3.1

Nuer mutation

The consonant mutation process of Nuer, a Nilo-Saharan language of Sudan, presents an interesting contrast to Chaha, in that the featural suffix must be realized at the very right edge of the verb stem rather than anywhere else in the stem. If the featural suffix cannot be realized on the last consonant of the stem due to a co-occurrence constraint, it is simply not realized at all (see Chaha palatal prosody). But our interest in Nuer mutation is that the suffixes do not just consist of features, but of segments and features. In the Nuer verb roots, final consonant mutation is associated with various tenses and aspects in the verbal paradigms, as the following examples illustrate. The alternation is only productive in verbs and not in nouns. (All data presented here are from Crazzolara 1933: 156–160; see Frank 1999 for more details on Nuer morphology.) Rule (6) summarizes the observed consonant alternations and (7) provides examples. In the following examples each place of articulation is represented by two verb paradigms. I have converted Crazzolara’s representations to IPA, following his descriptions. (6)

Nuer final consonant alternation (Crazzolara 1933; Lieber 1987)9 labial voiced b voiceless continuant f voiceless stop p

(7)

velar : h k

Verbal paradigms10 a.

Labial final verbs 3rd sg indic pres act 1st pl indic pres act pres pple neg past pple

9

interdental alveolar palatal Ï d ù h ã ç } t c

‘to overtake a person’ cóbé jè còDfkG je còp cof

‘to scoop (food) hastily’ kébé jè kèafkG je kep kèf

Following Crazzolara’s descriptions, his transcriptions have been modified as follows. [dh] and [th] (interdental) are transcribed here as [Ï] and [h] respectively. [Í] (a trilled alveolar continuant) is [ã]. Finally, [y] (palatal fricative) is retranscribed as [ ù]. Cazzolara suggests that what he writes as [b] is actually the continuant [ß] in final position (Crazzolara 1933: 6). One can imagine that the same is true for what he writes as [d], since he notes that Nuer stems can have up to three forms, one ending with a voiceless stop, one with a voiceless continuant, and the third with a voiced sound which in most cases is a continuant. 10 I will not discuss the vocalic changes, since they are largely unpredictable from Crazzolara’s transcriptions.

Akinbiyi Akinlabi

10 b.

Interdental final verbs 3rd sg indic pres act 1st pl indic pres act pres pple neg past pple

‘to suck’ lóÏé jè loGhkG je lo} loh

‘to wade’ jÒÏé jè jÚhkG je jæ} jæh

c.

Alveolar final verbs 3rd sg indic pres act 1st pl indic pres act pres pple neg past pple

‘to sharpen’ paádè jé páaãkF jè paat pàaã

‘to cut a point’ wRdé jè wHãkG je wqt wqã

‘to hit’ jáaùè je jáaçkF jè jaac jaaç

‘to dismiss a person’ jùéeùè je jùáaçkF jè j ùèec j ùeeç

‘to throw away’ ùÒ:é jè ùÚkG je ùæk ùæh

‘to find’ jp:é jè jHkG je jHk jHh

d. Palatal final verbs 3rd sg indic pres act 1st pl indic pres act pres pple neg past pple e.

Velar final verbs 3rd sg indic pres act 1st pl indic pres act pres pple neg past pple

First, Crazzolara (1933: 102) notes that the verb root is monosyllabic in Nuer. Second, all verbs begin and end in consonants. I assume, following Lieber (1987), that the features implicated here are [continuant] and [voice]. I will also assume that the morphemes involved in the mutation consist of the following inputs.11 (8)

The Nuer suffixes indic pres act 3rd sg 1st pl pres pple neg past pple

= = = = =

[je] [cont] [e] [cont] [kD] Ø [cont]

The most important illustration of the theme of this section is the past participle morpheme, which under any analysis must include the feature [continuant], and the 1st plural morpheme, which, in addition to the feature [continuant], also includes the segment sequence [kD]. A comparison of all the past participle forms with the 1st plural indicative present active forms shows that the latter always include the additional [kD]. What is interesting is that the suffix [kD] also triggers spirantization of the preceding stop.12 Therefore we must assume that this suffix has a preceding floating [continuant]. Finally, we must assume that Nuer also has intervocalic voicing, as seen in all the 3rd singular forms. 11

But see Lieber (1987) for a different assumption on input. In the case of the forms ‘throw away’ and ‘find’, there is no spirantization. I assume that this is because the final consonant of the verb and the 1st plural suffix [kD] are identical. Crazzolara apparently transcribes the unspirantized sequence [hk] as a single stop [k]. 12

Featural Affixes

11

It is clear from the mutation cases in Nuer that the features involved are suffixes, since in two cases the free feature [continuant] is paired with traditional segmental suffixes. In the case of the past participle morpheme, the entire content of the morpheme is the free feature [continuant]. [continuant] is licensed by a root node in Nuer. This feature links to the rightmost consonant of the verb. This association formally defines the past participle morpheme in Nuer as a suffix. This morpheme happens to have just a single featural content [continuant]. The derivation of all the forms with the past participle suffix [continuant] is the same as that of the 1st plural forms, except for the additional segments [kD] in the suffix. This example actually shows that if we call the segments [kD] a suffix, we must treat the preceding [continuant] the same way, since they, together, mark the same morpheme. And if this feature [continuant] of the 1st plural is a suffix, so is the feature [continuant] that marks the past participle alone. Crazzolara notes that a number of segments do not undergo this mutation process in Nuer. These segments are the nasals /m n | J I/, the liquids /l r/, and glide /w/. I will split these segments into two groups, the nasals on the one hand and the liquids and glide on the other. I suggest that the nasals do not undergo mutation because of a co-occurrence constraint forbidding the association of [continuant] to a consonant specified for [nasal]. The examples in (9) illustrate this. (9)

Non-alternating final consonant 3rd sg indic pres act 1st pl indic pres act pres pple neg past pple

‘to see’13 néenè je néeankF jè nèen nèen

‘to hear’ lRqIé jè lqeIkG je lqI lîI

Since morphemes with final nasals never alternate, and since [continuant] does not show up anywhere else, we must assume that in these cases [continuant] must remain unrealized (i.e. unparsed). This is parallel to the case of the nonrealization of [round] in Chaha. I assume that the remaining sonorants, liquids, and glide undergo the process, though the surface forms appear invariant; i.e. [continuant] links vacuously to stems whose final consonants belong to this class, but without any apparent surface effect, since they are already continuants.14 In conclusion, [continuant] in Nuer provides a significant contrast to labialization in Chaha and palatalization in Zoque. In both Chaha and Nuer, the featural affix is a suffix, given the insistence on linkage to the final consonant. In both languages, the featural content of the affix cannot co-occur with a class of segments. 13

Crazzolara (1933: 124) points out that there is a separate negative particle /cq/, which occurs before the subject clitic. Forms with nasals are the only complete paradigms that Crazzolara gives, and in these cases he provides no forms in which the first consonant is an oral stop and the second is a nasal. In all the other forms where the stem consonant does not alternate he provides the 3rd singular indicative present active and the 1st plural indicative present active for the rest of the cases. 14 This implies that a single [continuant] specification on the final consonant on the surface corresponds to two in the input. See also the discussions of Zoque palatalization (§2.2) and Edoid tone (§4.1) for similar characteristics.

Akinbiyi Akinlabi

12

This results in the non-realization of the featural suffix on the final segment. This fact is captured by the co-occurrence constraints between the feature content of the affix and the feature content of the class of segments. Thus it is co-occurrence constraints that force featural affixes from edges. The substantive difference between the two languages is seen in Chaha’s insistence on realizing the featural suffix on other segments even if it cannot be realized on the edgemost segment, while Nuer will not realize the featural suffix at all. It is important to note that other languages with consonant mutation have been identified in the literature, e.g. Fula (Arnott 1970) and North Atlantic languages (Mc Laughlin 2000, 2005), which confirm the above analysis of Nuer mutation. These languages also differ significantly from Nuer. I will briefly discuss the case of one, Seereer Siin (Mc Laughlin 2000).

3.2

Seereer Siin consonant mutation

In her work on several Northern Atlantic languages of Niger Congo (Pulaar, Seereer Siin, Wolof), Mc Laughlin (2000, 2005) argues that consonant mutation can be viewed and accounted for as the prefixation of a floating feature to the root node of the stem-initial consonant.15 She proposes a constraint-based account to locate the feature on the left edge of a word. Seereer Siin consonant mutation is morphologically conditioned by noun class in nouns and dependent adjectives, and by number in verbs. There are two patterns of consonant mutation in Seereer: (a) voicing mutation, and (b) continuancy mutation. In each, there is a three-way homorganic range of alternations, called grades (Arnott 1970). I will only discuss the voicing mutation, and I will discuss only the fully mutating forms. The reader is referred to Mc Laughlin’s work for the partially mutating forms, and the continuancy mutation. In the voicing mutation, the three grades are “voiced stop,” “voiceless stop,” and “prenasalized voiced stop.” Grade-a refers to the voiced set, grade-b to the voiceless set, and grade-c to the prenasalized set. Seereer Siin has sixteen noun classes. Of the sixteen, classes 2, 3a, 5, 7, 8, and 10 condition the a-grade mutation, while classes 3b, 6, 12, 13, and 14 condition the c-grade mutation. The remaining classes (1, 4, 9, 11, and 15) condition the b-grade mutation. I will now illustrate the above statements with the examples in (10), from Mc Laughlin (2000: 339–340). The numbers in parenthesis beside the forms indicate the noun classes of the forms. (10)

Voicing mutation (fully mutating) voiced a-grade ogac (10) Áir (5) o+aj (10)

15 16

voiceless b-grade akac (4) acir (4) xaÛaj (11)

nasal16 c-grade foIgac (13) aJÁir (3b) foÛaj (13)

‘stone’ ‘illness’ ‘hand, arm’

For reasons of space, only a brief summary of the facts of Seereer Siin is given here. Voiceless implosives cannot be prenasalized in Seereer Siin.

Featural Affixes

13

I follow Mc Laughlin in assuming that the b-grade forms constitute the “underlying” forms in the stems with voicing mutation. The stem patterns show that the features involved in the class prefixation are [+voice] and [+nasal]. [+voice] drives the voicing of underlying voiceless-initial stems, which are fully mutating. In addition, one must conclude that the class 10 prefix has both segmental and featural contents: /o [+voiced]/, as Mc Laughlin does. Finally, class 13 also has both segmental and featural contents: /fo [+nasal]/. “There is a [+voice] floating feature that drives the a-grade mutations . . . and there is a [+nasal] floating feature that drives the c-grade mutations” (Mc Laughlin 2000: 340). Comparing the Seereer Siin forms with those from Nuer, the mutating consonants in Seereer Siin are the stem-initial consonants. The mutating features [+voiced] and [+nasal] are prefixes. They must link to the stem-initial consonant and no other. In Nuer on the other hand, the mutating feature is a suffix. The system in Seereer Siin sometimes includes featural affixes alone, and sometimes featural affixes as well as segmental affixes. As seen above, the class 10 prefix includes both segmental and featural content: /o [+voiced]/, and the class 13 prefix also has both segmental and featural content: /fo [+nasal]/. These combinations are in fact more apparent than the Nuer combinations. The segmental features causing the mutation either get associated or not, and are never pushed inwards in the stem. They only occur at the edges.

4 Harmony: Featural affixes with stem domains

The third set of case studies consists of languages that combine featural affixes with featural harmony. By “harmony,” I mean featural propagation that is domain-based. The domain of a featural affix is often the entire stem. By definition, we must take these features to be affixes, since they are the featural spell-out of some morphological category. Since the domain of the featural affix is the entire stem, I take the phenomenon to be the combination of a featural prefix or suffix, plus harmony involving the feature in question. I will illustrate with two languages. I will discuss one case involving a featural suffix (Edoid tone), and one involving a featural prefix (Terena nasalization).

4.1 Edoid associative construction

Tonal data from Edoid languages (Niger Congo, Nigeria) provide the first example of featural suffixation plus harmony. Suffixation is detectable from the fact that priority is given to right alignment, and harmony is seen in the transmission of the feature throughout the entire domain. In several Edoid languages the "associative morpheme" is a free (floating) High tone. The list includes Etsako (Elimelech 1976), Yekhee (Elugbe 1989), Bini (Amayo 1976), Isoko (Donwa 1982), and Emai (Egbokhare 1990). In this section I will only examine Etsako (Ekpheli dialect). Several other Edoid languages have similar tonal systems to that of Etsako. Etsako is a two-tone language, with High and Low tones (Elimelech 1976: 41). (Recall that full specification is assumed in this chapter.) In this language, the associative High tone links to the head noun, replacing all Low tones in a right-to-left manner, until it reaches a segmental High tone. The examples below consist of disyllabic nouns, but they are representative of what happens in longer forms. The forms cited here (from Elimelech 1976: 55) exhaust all possible tonal combinations of disyllabic nouns. For clarity, the heading of each of (11)-(14) indicates, with the tone letters H and L (in addition to the tone marks), the underlying tone pattern of the head noun in isolation and its tone pattern in the associative construction. The crucial tones to focus on are those of the first noun, since the tones of the second noun remain constant.

(11) L head noun: L → H
     a. àmè 'water' + èhà 'father' → ámé èhà [ámêhà] 'father's water'
     b. àmè 'water' + òké 'ram' → áméòké [ámôkê]17 'a ram's water'
     c. àmè 'water' + FmG 'child' → áméFmG [ámFmG] 'a child's water'
     d. àmè 'water' + ódzí 'crab' → áméódzí [ámó–î] 'a crab's water'

(12) HL head noun: HL → HH
     a. únò 'mouth' + èhà 'father' → únóèhà [únêhà] 'father's mouth'
     b. únò 'mouth' + òké 'ram' → únóòké [únôkê] 'a ram's mouth'
     c. únò 'mouth' + FmG 'child' → únóFmG [únFmG] 'a child's mouth'
     d. únò 'mouth' + ódzí 'crab' → únóódzí [únó–î] 'a crab's mouth'

(13) H head noun: H → H
     a. ódzí 'crab' + èhà 'father' → ódzíèhà [ó–êhà] 'father's crab'
     b. ódzí 'crab' + òké 'ram' → ódzíòké [ó–ôkê] 'a ram's crab'
     c. ódzí 'crab' + FmG 'child' → ódzíFmG [ó–FmG] 'a child's crab'
     d. ódzí 'crab' + ódzí 'crab' → ódzíódzí [ó–ó–î] 'a crab's crab'

(14) LH head noun: LH → LH
     a. Gté 'cricket' + èhà 'father' → Gtéèhà [Gtêhà] 'father's cricket'
     b. Gté 'cricket' + òké 'ram' → Gtéòké [Gtôkê] 'a ram's cricket'
     c. Gté 'cricket' + FmG 'child' → GtéFmG [GtFmG] 'a child's cricket'
     d. Gté 'cricket' + ódzí 'crab' → Gtéódzí [Gtó–î] 'a crab's cricket'

17 At the phrasal level, a phrase-final High tone is realized as a fall, hence the final falling tones in forms with underlying final Highs such as (11b), (11d), etc.

The tone changes on the head noun in associative constructions may be summarized descriptively as follows:

(15) a. L → H (11)
     b. HL → HH (12)
     c. H → H (13)
     d. LH → LH (14)

In (11) we assume there is a single Low tone associated with both syllables (moras) of the noun, following the Obligatory Contour Principle (Leben 1973; McCarthy 1986). The associative High tone replaces this underlying Low tone, and this Low tone itself is not realized on the surface. That the assumption made here with disyllabic forms is true of longer forms is confirmed by the trisyllabic examples in (16), where the three syllables of the head noun are now realized on a High tone in the associative constructions. Therefore all adjacent Low tone syllables become High regardless of the number of syllables.

(16) a. à:ò:ò 'skull' + òké 'ram' → á:ó:óòké [á:ó:ôkê] 'a ram's skull'
     b. àjèjè 'butterfly' + èhà 'father' → ájéjéèhà [ájéjêhà] 'father's butterfly'

In (12) (with the HL pattern), the final Low tone of the head noun becomes High. Given the forms in (16), we assume that any number of adjacent syllables with Low tones will become High. Therefore we predict that HLL head nouns will be realized as HHH. This prediction cannot be confirmed, because our sources do not have any examples with such patterns. The forms in (13) are unremarkable, since the head noun is underlyingly High-toned. Finally, in (14), underlying LH remains the same. Our assumption here is that the associative High tone links vacuously to the final syllable of the head noun, just as [−back] links to palatal consonants in Zoque. The above facts can be analyzed as follows. Following Elimelech, I assume that "the associative marker (AM) . . . is underlyingly a High floating tone" (Elimelech 1976: 42). Tone is licensed by any mora in Etsako. Only vowels and syllabic nasals can be moraic in this language. Based on the facts in (11)-(15) above (especially (14)), as well as on facts presented in the Edoid studies cited at the beginning of this section, I suggest that the associative High tone is a featural suffix. It is suffixed to the head noun. However, a (separate) process of tonal harmony transmits the associative High tone throughout the entire head noun. Therefore the domain of the associative High tone is the entire head noun, a prosodic word (Nespor and Vogel 1986; Selkirk 1986; McCarthy and Prince 1990). This type of phenomenon must be handled with two constraints. One is a morphological alignment constraint, of the type we have seen so far. This alignment places the featural affix at a particular edge of the stem, characterizing it as a prefix or as a suffix (see Kirchner 1993; Pulleyblank 1993, 1996; Akinlabi 1994, 1997; Cole and Kisseberth 1994). The second is phonological feature spread: harmony. This handles feature propagation by establishing the fact that the domain of the feature is a phonological category, such as the prosodic word. It is crucial to note that the associative High tone is different from an underlyingly linked segmental High tone of a head noun (the segmental High tone). First, while the associative High tone is a morpheme, the segmental High tone is not. And second, the segmental High tone is underlyingly linked, while the associative High tone is underlyingly free, i.e. it belongs to a morpheme with no other content. Any analysis of Etsako must recognize these differences.

4.1.1 H-tone opacity

In Etsako, the segmental High tone is "opaque": it blocks the propagation of the suffixal High tone. That is, the suffixal H tone cannot spread through the lexical H tone. The examples in (14) demonstrate this fact. In the LH head nouns, the output associative construction begins as LH, which does not become HH, as one would expect if the suffixal H tone were to spread through the segmental H tone. This indicates two things. First, only the suffixal H tone spreads, while the segmental H tone does not; otherwise we would once again have HH on the head noun in the output. Second, the segmental H tone is opaque to the spread of the suffixal H tone. We must assume that the constraint responsible for the association of a segmental H supersedes the one for tone spreading.
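The combined effect of right-edge suffixation, leftward spreading, and H-tone opacity can be stated procedurally. The following is a minimal sketch, assuming a flat string of tone letters (one per syllable of the head noun) and a hypothetical associative helper; it reproduces the descriptive summary in (15) rather than implementing the constraint interaction itself.

    # A minimal sketch of the Etsako associative High tone, assuming the
    # head noun's tones are a flat string (one letter per syllable). The
    # floating H is suffixed at the right edge and spreads right-to-left;
    # a lexical H is opaque and halts the spread.

    def associative(tones):
        """Map the head noun's tone string to its associative-construction
        tone string: replace Low tones with H from the right edge inward,
        stopping at the first lexical High tone."""
        out = list(tones)
        for i in range(len(out) - 1, -1, -1):
            if out[i] == "H":      # lexical H: the suffixal H links vacuously
                break
            out[i] = "H"           # spread the suffixal H onto a Low syllable
        return "".join(out)

    assert associative("LL") == "HH"    # (15a): L  -> H
    assert associative("HL") == "HH"    # (15b): HL -> HH
    assert associative("HH") == "HH"    # (15c): H  -> H
    assert associative("LH") == "LH"    # (15d): LH -> LH (vacuous linking)
    assert associative("LLL") == "HHH"  # trisyllabic all-Low nouns, cf. (16)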

4.1.2 German Sign Language

Pfau (2000) has found a parallel of this type of affix in an unexpected place: the negative morpheme in German Sign Language (DGS).18 Pfau proposes an analysis of the negative headshake of DGS as an autosegment, in other words as a featural affix [headshake], which is associated with a manual form. The negative headshake, he argues, behaves in a way similar to tonal prosodies in tone languages. He proposes that this feature represents the negative morpheme, in the same way as tone functions as a grammatical morpheme associated with an entire base.

The main goal of §4.1 has been to show, first, that the domain of a featural affix may be the whole lexical category, but that it can still be identified as a prefix or suffix, and second, that featural affixation, unlike segmental affixation, may combine with harmony involving the feature itself. In §4.2, we show that non-tonal featural affixes behave the same way, using Terena nasalization as illustration.

18 I am deeply indebted to a reviewer for helping to make sense of this section.

4.2 Terena nasalization

The second example of a system that combines featural affixation with harmony is Terena. In this section I argue that the feature [nasal] in Terena is a featural prefix, given the insistence on association to the initial consonant of the stem (in direct contrast to the Edoid associative High tone), and that the featural prefixation is accompanied by harmony. Terena also confirms the accounts already given in the preceding sections about both featural alignment and misalignment. In contrast to the Edoid associative marker, the lexical feature [nasal] is transparent to the propagation of the featural affix [nasal] (chapter 78: nasal harmony). In Terena, an Arawakan language of Brazil (Bendor-Samuel 1960, 1966), the category of the 1st person is marked through a process of progressive nasalization. Thus the difference between the Terena examples in the left and right columns of (17) is that the latter are marked for the 1st person.

(17) 1st person in Terena
     a. ajo 'his brother'        ã1õ 'my brother'
        arine 'sickness'         ãXhng 'my sickness'
        unae 'boss'              jnãg 'my boss'
        emo?u 'his word'         gmõ?j 'my word'
     b. owoku 'his house'        õSõIgu 'my house'
        iwu?iœo 'he rides'       hSj?hnÚo 'I ride'
        ituke '(poss pron)'      hnduke '(1pers poss pron)'
        nokone 'need'            nõIgone 'I need'
     c. taki 'arm'               ndaki 'my arm'
        tuti 'head'              nduti 'my head'
        paho 'mouth'             mbaho 'my mouth'
        piho 'he went'           mbiho 'I went'
     d. ahja?aœo 'he desires'    ãnÚa?aœo 'I desire'
        ha?a 'father'            nza?a 'my father'
        hjiœoe 'dress'           nÚiœoe 'my dress'

The descriptive generalizations from the above data are as follows. The 1st person pronoun is expressed by nasalizing the noun or verb. Nasalization affects vowels, liquids, glides, and underlying nasal consonants. Therefore, nasalization spreads through underlying nasal consonants. Laryngeal stops, but not laryngeal fricatives, are affected by nasalization. That is, nasalization may spread through a laryngeal stop, but not through a laryngeal fricative. The examples in (17b) show that nasalization proceeds in an apparent left-to-right fashion until it reaches an obstruent. The interesting thing here is that the obstruent becomes prenasalized (and voiced), as in the first example in (17b), but nothing after it is nasalized (unless of course it is an underlying nasal consonant, as in the last example in (17b)). Therefore obstruents block [nasal] spreading, but not before they become prenasalized. If a form begins with an obstruent, the effect of the 1st person progressive nasalization is to turn that obstruent into a prenasalized consonant, as in (17c), and there is no nasalization of subsequent segments. I shall not be concerned with further changes in obstruents, other than prenasalization. For example, I shall not discuss the fact that laryngeal continuants change to coronals when nasalized in (17d). Continuing the discussion in the preceding sections, an analysis of the above Terena facts may be presented as follows. The 1st person marker is a free feature [nasal]. [nasal] can be associated with any root node in Terena, consonant or vowel. Given the insistence on associating to the first segment of the noun or verb regardless of the nature of the segment, it is a featural prefix. However, a process of harmony transmits nasality from the prefix through the stem; and thus the apparent domain of the [nasal] morpheme is the entire stem, which is a prosodic word. The surface realization of this morpheme may be accounted for the same way as tone in Etsako. An alignment constraint places [nasal] as a prefix, while a feature spread constraint accounts for spreading to the end of the word. Just like the High tone in Etsako, [nasal] is both the featural content of a morpheme and a lexically contrastive feature in Terena. These two functions must be recognized by any analysis.
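These generalizations can be restated as a procedure. The following is a minimal sketch, assuming ASCII stand-ins (a following "~" marks a nasalized segment), a crude segment classification, and a hypothetical first_person helper; it restates the generalizations above and sets aside the further changes to laryngeals in (17d).

    # A minimal sketch of Terena 1st person nasalization, assuming ASCII
    # stand-ins. The featural prefix [nasal] docks on the initial segment
    # and harmony carries it rightward: vowels and other sonorants are
    # nasalized (marked here with "~"), underlying nasal stops are
    # transparent, and the first obstruent is prenasalized (closure phase
    # only) and halts the spread. Laryngeals, as in (17d), are set aside.

    NASALS = set("mn")
    OBSTRUENTS = set("ptkbdgszf")  # stops/fricatives that block spreading
    PRENASAL = {"p": "mb", "t": "nd", "k": "ng",
                "b": "mb", "d": "nd", "g": "ng"}

    def first_person(stem):
        out = []
        for seg in stem:
            if seg in OBSTRUENTS:
                # [nasal] links only to the closure phase: prenasalization,
                # and nothing after the obstruent is nasalized
                return "".join(out) + PRENASAL.get(seg, seg) + stem[len(out) + 1:]
            if seg in NASALS:
                out.append(seg)        # transparent: spread passes through
            else:
                out.append(seg + "~")  # nasalized vowel or sonorant
        return "".join(out)

    print(first_person("ajo"))    # a~j~o~      cf. 'my brother'
    print(first_person("owoku"))  # o~w~o~ngu   cf. 'my house'
    print(first_person("taki"))   # ndaki       cf. 'my arm'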

4.2.1 Nasal transparency

Forms like /arine/ → [ãXhng] in (17a) reveal that nasal stops do not block the propagation of the [nasal] morpheme in Terena, i.e. underlying nasal stops are transparent to the morphemic [nasal] spread. Our account of this transparency is that (the constraint responsible for) the domain of association of the [nasal] morpheme takes precedence over the segmentally specified [nasal], and could therefore pass "over" the segmentally specified [nasal]. This constitutes an important difference between the underlying segmental High tone in Edoid (as exemplified by Etsako) and the segmentally specified [nasal] in Terena. While the segmental High tone in Edoid blocks the propagation of the morphemic High tone, the propagation of the [nasal] morpheme in Terena is not blocked by the segmentally specified [nasal].

4.2.2 Obstruents and co-occurrence

We now turn to the behavior of obstruents in Terena. As noted above, obstruents block the rightward propagation of the [nasal] morpheme, while becoming prenasalized: /owoku/ → [õSõIgu] 'my house'. To account for this, we assume a co-occurrence constraint forbidding the co-occurrence of [−sonorant] and [+nasal] in Terena (see Pulleyblank 1989: 109). Note, however, that while in some languages nasality is barred from obstruents entirely (as in Orejon; Pulleyblank 1989), Terena obstruents are partly nasalized. We can account for this by assuming that nasality is barred from the release phase of obstruents in Terena, but not from the closure phase (Steriade 1993). Prenasalization in Terena can thus be seen as the association of the [nasal] morpheme to the closure phase of the obstruent stops, and not to the release phase. Finally, though the domain of the [nasal] morpheme is the entire stem (a prosodic word), like the High tone in Etsako, it is formally a featural prefix, in contrast to Etsako, where H is a featural suffix.

Gerfen (1999: 127-131) describes an interestingly similar case in Coatzospan Mixtec. In this language, the 2nd person familiar is marked by a [nasal] feature. As in Terena, the entire base is nasalized. However, unlike in Terena, the free feature [nasal] is a suffix, because the spreading is from right to left. Furthermore, if spreading is blocked, only the final vowel of the base is nasalized, indicating that the feature [nasal] links to the final vowel. Spreading is blocked when the final syllable has a voiceless obstruent onset. Finally, as in Terena, lexical nasal consonants are transparent to nasal spread.

5 Segmental realization: Mafa imperfective

In our fourth case study, the featural affix is at the same time a "feature" and a "segment." I refer to this as segmental realization of a featural affix. The case is exemplified by palatalization in Mafa. This language is interesting because of its unique morphological properties: the affix expressing the imperfective in Mafa can be characterized both as a segmental affix and as a featural affix.19 This allomorphy gives languages like this a special place in the study of featural affixes. Ettlinger (2003, 2004) describes the morphosyntactic process of imperfective aspect formation in Mafa, a Central Chadic language of Cameroon, as follows. The imperfective is formed in one of two ways, depending on whether the final segment of the root is a vowel ([a]) or a consonant. In the case of verbs ending in [a], /j/ is suffixed to the base, as seen in (18). (All vowel-final verb stems end in /a/, and all other suffixes are positioned after the imperfective suffix.)

19 Another language with similar properties is Yokuts (Archangeli 1984, 1991; Archangeli and Pulleyblank 1994). In Yokuts, the glottal feature can surface as a segment or as part of another segment (or not surface at all).

(18) Palatalization of /a/-final verbs
     gudza 'tremble'      gudzaj 'is trembling'
     bHra 'insult'        bHraj 'is insulting'
     nda 'cut a hole'     ndaj 'is cutting a hole'
     keÏa 'divide'        keÏaj 'is dividing'

The imperfective of verbs ending in a consonant, however, is formed with a palatal featural suffix. Apparently, the palatal prosody targets either vowels or coronal stridents, and no more. There is one complication, regarding the vowel [u]: [u] is not palatalized (to [y]) in two contexts, (a) when it occurs after a dorsal, and (b) after a coronal strident in a disyllabic root. I will not discuss this complication here; readers are referred to Ettlinger (2003, 2004) for an explanation. The vowel inventory of Mafa is given in (19).

(19) Mafa vowel inventory
     i  y  H  u
     e  œ  a  o

The surface realizations of vowels under palatalization are as follows:

(20) /H/ → /i/
     /u/ → /y/
     /o/ → /œ/
     /a/ → /e/

The forms in (21a) represent monosyllabic verb roots, and those in (21b) represent disyllabic forms. The last two forms in (21a) show that both vowels and coronal stridents can be palatalized, if both are present in the verb root. In some of the forms in (21), palatalization appears to skip some segments, while other segments are palatalized (Ettlinger 2004). This is not skipping: the skipped segments are not licensors (Akinlabi 1996) of the palatal prosody in Mafa, hence the apparent skipping.

(21) a. Palatalization of monosyllabic consonant-final verbs
        pan- 'wash'                  pen- 'is washing'
        tHv- 'light (vb)'            tiv- 'is lighting'
        dad- 'add water to'          ded- 'is adding water to'
        guts- 'squirt'               gutœ- 'is squirting'
        tsap- 'speckle'              Œep- 'is speckling with clay'
        sur- 'sleep with a woman'    œyr- 'is sleeping with a woman'

     b. Palatalization of disyllabic consonant-final verbs
        sHban- 'work'     œiben- 'is working'
        lubat 'twist'     lybet 'is twisting'
        suwdHk 'miss'     œuwdik 'is missing'

     c. No palatalization
        gum- 'carve wood'              gum- 'is carving wood'
        gud- 'search with anxiety'     gud- 'is searching with anxiety'
        kurkw- 'carve everywhere'      kurkw- 'is searching everywhere'


Given the way featural affixes work, there is no doubt that the imperfective is a featural suffix in Mafa (Akinlabi 1996), as the vowel-final verbs show. It scans the verb root in a right-to-left manner. If the last segment of the verb root is a vowel, then the imperfective is realized as a full segment, i.e. as the suffix /j/. If the palatal prosody instead finds a consonant as the final segment, then it seeks out a licensor, preferably a vowel. I assume that the palatalization of coronal stridents is just a default, because these are the only consonants that can be changed without completely altering the primary place of articulation. Finally, I suggest that the vowel [u] is blocked from change after a dorsal consonant because it shares the dorsal specification with the preceding dorsal consonant.
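The scanning logic just described can be sketched procedurally. The following minimal sketch assumes ASCII stand-ins ("@" for the high central vowel, "y" and "oe" for the front rounded vowels, "sh" for the palatalized strident) and a hypothetical imperfective helper; it realizes the prosody on every licensor rather than modeling the right-to-left scan step by step, and it sets aside the [u]-after-dorsal restriction just mentioned.

    # A minimal sketch of the Mafa imperfective, assuming ASCII stand-ins.
    # Vowel-final roots take the segmental suffix /j/; consonant-final
    # roots take the featural palatal suffix, realized on every licensor
    # (vowels, per (20), and coronal stridents). Non-licensors are skipped.

    PALATAL_V = {"@": "i", "u": "y", "o": "oe", "a": "e"}  # vowel shifts, per (20)
    PALATAL_C = {"s": "sh"}                                # coronal strident

    def imperfective(root):
        if root.endswith("a"):
            return root + "j"   # segmental realization: suffix /j/
        # featural realization: front every licensor in the root
        return "".join(PALATAL_V.get(seg, PALATAL_C.get(seg, seg))
                       for seg in root)

    print(imperfective("gudza"))   # gudzaj  'is trembling'
    print(imperfective("pan"))     # pen     'is washing'
    print(imperfective("sur"))     # shyr    cf. oeyr- 'is sleeping with a woman'
    print(imperfective("s@ban"))   # shiben  'is working'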

6 Formal insights into featural affixation

In general, there has not been much disagreement about whether features can be affixes or not. What has varied is the formal approach to featural affixes. Much of the formal work on featural affixes has been carried out within autosegmental phonology, which allows for autonomous representation of features (chapter 14: autosegments). The featural affix is commonly represented as a floating feature, and linked to a segment by some rule. Work done on featural affixes within this approach includes McCarthy (1983), Lieber (1984), and others. Feature geometry (Clements 1985; Sagey 1986; Clements and Hume 1995; and others; see also chapter 27: the organization of features) has also provided significant insights. For example, feature geometry provides significant insight into the grouping of features, and into why some features co-occur and others do not. In addition, certain nodes can serve as anchors for some featural affixes while others cannot. Work like Archangeli and Pulleyblank (1994) is situated within this approach. The formal approach to featural affixation adopted in this chapter is the constraint-based Optimality Theory (Prince and Smolensky 1993). Within this theory, grammars are composed of hierarchies of ranked and violable universal markedness and faithfulness constraints. Faithfulness constraints monitor input and output to ensure that they are the same, and markedness constraints ensure that output structures are unmarked to the highest degree possible, depending on the conflict between all markedness and faithfulness constraints. However, there are various approaches to featural affixes within Optimality Theory itself. Variations include Zoll's (1998) subsegmental approach, which proposes that the input and output correspondence of "subsegments," including "floating features" and latent segments (undominated F-elements), is monitored by Max(subseg) (see Lombardi 1998 for a similar Max(F)), stated as in (22).

(22) Max(subseg)
     Every subsegment in the input has a correspondent in the output.

As Zoll (1998: 44) notes, featural affixes are realized as part of other segments; therefore the correspondence relation returns the output segment that hosts the feature, not the feature itself. If that is the case, Mc Laughlin (2000) argues that, since subsegments do not occur as output forms, there is no evidence for positing a Dep constraint of the sort Dep(subseg). She proposes that we employ Ident-IO(F) to monitor subsegments in general. This may be stated as in (23):

(23) Ident-IO(F)
     Correspondent IO segments have identical values for the feature F.

Kirchner (1993), Akinlabi (1996), and Zoll (1996) suggest that features are subject to the same kind of alignment (or coincide) constraints as segments. Akinlabi (1996) suggests specifically that featural affixes are subject to the same kind of alignment constraints as non-featural morphemes, and that alignment constraints account for the determination of featural affixes as prefixes or suffixes. All featural affixes, he proposes, are subject to the featural alignment in (24) (see McCarthy and Prince 1993a, 1993b). The specific morphological alignment constraint in (25) accounts for Chaha labialization (Akinlabi 1996: 246).

(24) Featural alignment
     Align(PFeat, GCat)
     A prosodic feature is aligned with some grammatical category.

(25) Align-3masc-sg
     Align(3masc sg, R; Stem, R)
     The right edge of 3masc sg must be aligned with the right edge of the stem, i.e. 3masc sg is a suffix in the stem.

A constraint like (25) does not say whether 3masc sg is a segment or a feature; it simply refers to the morphological category. Therefore it should not matter whether 3masc sg is a feature or a segment. As Akinlabi (1996: 243) points out, PFeat (in (24)) is simply the featural spell-out of the morphological category in question. Misalignment of featural affixes is controlled by feature co-occurrence constraints (Archangeli and Pulleyblank 1994). An example of this is *NasCont (Akinlabi 1996: 254), which forbids nasal consonants from being continuants.

(26) *NasCont
     If [nasal] then not [continuant].

The above represents the core of the grammar of featural affixes. The variations are derived from ranking the constraints. This analysis also represents the point of departure for some scholars.20

20 Piggott (2000) argues against the idea that features can align to word edges, like segments. He sees featural alignment as proposed by Akinlabi (1996) as an overly powerful mechanism. He proposes instead that morphological alignment be supplemented by a provision for prosodic licensing, so that, for example, features may be incorporated into a prosodic category such as a foot or a prosodic word. See Mc Laughlin (2000: 344-345) and Horwood (2004) for answers to Piggott's objections. Another notable counterposition is that of Kurisu (2001), who proposes a "relational morphology theory" instead of "featural alignment." I will not discuss this here, since it is an entirely different theory.
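To see how reranking violable constraints derives such variation, consider the following minimal sketch of candidate evaluation. The constraint definitions are simplified stand-ins (CoOccur for a feature co-occurrence ban on docking at a hostile final segment, Align-R for gradient right-edge alignment, Max for Max(subseg)), and the helpers are hypothetical; reranking Align-R and Max reproduces the Chaha/Nuer contrast discussed in §3, where Chaha realizes the featural suffix one segment in while Nuer leaves it unrealized.

    # A minimal OT-style sketch with simplified constraints. Candidates
    # dock a floating feature F on some segment (index "dock") or leave
    # it unrealized (dock is None). Ranked evaluation is lexicographic
    # comparison of the candidates' violation vectors.

    def violations(cand, hostile_final):
        dock = cand["dock"]
        co_occur = 1 if (dock == cand["len"] - 1 and hostile_final) else 0
        align_r = 0 if dock is None else cand["len"] - 1 - dock
        max_sub = 1 if dock is None else 0
        return {"CoOccur": co_occur, "Align-R": align_r, "Max": max_sub}

    def evaluate(ranking, candidates, hostile_final):
        """Return the candidate whose violation profile is best under the
        ranking (leftmost constraint is ranked highest)."""
        key = lambda c: tuple(violations(c, hostile_final)[con] for con in ranking)
        return min(candidates, key=key)

    # A three-segment stem whose final segment cannot host F:
    cands = [{"dock": 2, "len": 3},     # dock on the final segment
             {"dock": 1, "len": 3},     # dock one segment in
             {"dock": None, "len": 3}]  # leave F unrealized

    chaha = ("CoOccur", "Max", "Align-R")  # realize F inward
    nuer = ("CoOccur", "Align-R", "Max")   # drop F rather than misalign it
    print(evaluate(chaha, cands, hostile_final=True))  # {'dock': 1, ...}
    print(evaluate(nuer, cands, hostile_final=True))   # {'dock': None, ...}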

7 Are featural affixes really featural?

I will end this discussion by examining whether "featural affixes" are really "featural." We can examine this issue from the theoretical and the empirical points of view. The traditional view of an affix is that of a "whole segment" (or segments) which marks a morphological category. The affix is dependent on, or attached to, some host, a base. The category represented could be inflectional or derivational. By segment is traditionally meant a unity of several articulatory gestures that are produced simultaneously and that paradigmatically contrast with one another. By this definition [t] and [s] are segments in tip and sip. [t] and [s] are also segmental affixes (suffixes), representing the English past tense and 3rd person singular verbal agreement in [sækt] sacked and [sæks] sacks, respectively. On this definition of an affix, it represents one or more timing slots in the string.

Featural affixes, on the other hand, from the cases that we have been discussing, do not always occupy a timing slot. Rather, they share the same timing slot with one or more of the segments in the base. For example, in Zoque palatalization (Wonderly 1951) (§2.2), palatalization simply changes an alveolar consonant to a palatal ([s] → [œ], in [sZk] 'beans', [œZk] 'his beans'), yet that difference signifies the distinction between 'beans' and 'his beans'. In some cases, it in fact makes no sense to talk about timing slots in the string. Such is the case in Terena nasalization (Bendor-Samuel 1960, 1966) and in Mafa palatalization (Ettlinger 2003, 2004), where the featural affix attaches to more than one segment of the base. In Mafa [lubat] 'twist', [lybet] 'is twisting', the palatal feature is attached to both vowels in the base.

But even with these facts there are problems about what a featural affix really is. The problem is posed by those features that can be realized both as full segments and as features. These include palatalization, labialization, nasalization, and glottalization. Note that, in Mafa and languages like it, the palatal feature can be realized as a full segment [j] when the verb is vowel-final. Does this, then, mean that this is both a segmental affix and a featural affix? Or is it a featural affix that is sometimes realized as a full segment? Mafa is intriguing because, on any account, it would satisfy the definition of a segmental affix as well as that of a featural affix. The same applies to nasalization in Seereer Siin (Mc Laughlin 2000). It is easy to assume that the nasal feature in all these cases is a full segment. However, certain features are never realized as full segments. These include voicing and continuancy. There is no other way that I know of than to analyze the feature [continuant] as a featural affix marking the past participle in Nuer (Crazzolara 1933) (§3.1).

From the theoretical point of view, this question relates to the way a "segment" is defined. If segments (or feature bundles) are the contrastive elements in a language, such that the meaning contrast between [tip] and [dip] is seen as represented by the first consonants [t] and [d] in these words, rather than by the fact that [t] and [d] differ only in that [d] is voiced and [t] is not, then there are featural affixes, because the elements that represent featural affixes are "less than" segments, as the empirical facts above reveal. On the other hand, the current assumption is that the contrastive elements in language are "features," and not "feature bundles." This distinction is captured by feature theory (chapter 17: distinctive features) and the internal organization of segments (e.g. Clements 1985). On this viewpoint, the meaning contrast between [tip] and [dip] is seen as represented by voicing. If we equate minimal units with contrastive units, then there are no featural affixes. The only distinction is between affixes that are feature bundles and affixes that are single features. Empirical data from the Mafa imperfective aspect (Ettlinger 2003, 2004) suggest that the distinction between a segmental affix and a featural affix may not be real: in this case the same feature, "palatality," sometimes behaves as a feature bundle, and sometimes as a single feature. The importance of the Mafa data is that even the distinction between "single feature" and "feature bundle" may not be real.

8 Conclusions

In summary, in this chapter I have illustrated the characteristics of featural affixes. These include (a) marking morphological categories (like segmental affixes), (b) occurring as part of other segments rather than independently, (c) varying between prefixes and suffixes, (d) occurring inside the stem (because of feature co-occurrence constraints at edges), (e) spanning the entire base of affixation, and (f) varying occurrence as a feature or a segment in the same language. I have illustrated these with facts from Dutch, Chaha, Zoque, Nuer, Seereer Siin, Etsako, Terena, Mafa, Coatzospan Mixtec, and German Sign Language. Comparing featural affixes with traditional regular affixes, featural affixes share four characteristics with the traditional affixes: (a) marking morphological categories, (b) varying between prefixes and suffixes, (c) (sometimes) occurring as independent segments, and (d) occurring inside the stem (because of feature co-occurrence constraints at edges). Other characteristics are unique to featural affixes: (a) occurring as part of other segments, (b) spanning the entire base of affixation, and (c) varying occurrence as a feature or a segment in the same language.

There are a number of important lessons that the unique characteristics of "featural affixes" teach us. First, the so-called "normal affixes" always contain a timing unit, while "featural affixes" normally do not. Second, they raise the question of whether segments or features are the basic elements that sound systems manipulate. Finally, they reveal that not all features are the same. Some features can be morphemic but can never be realized independently of some other segment ([continuant], [voice]), while other features that are morphemic may dock on some sound in the stem but may also become segments in their own right ([glottal], [nasal], [palatal], [labial]).

ACKNOWLEDGMENTS

I am grateful to two anonymous reviewers for insightful comments, and to the editors of the Companion for all their help with this chapter.


REFERENCES

Akinlabi, Akinbiyi. 1994. Alignment constraints in ATR harmony. Studies in the Linguistic Sciences 24. 1-18.
Akinlabi, Akinbiyi. 1996. Featural affixation. Journal of Linguistics 32. 239-289.
Akinlabi, Akinbiyi. 1997. Kalabari vowel harmony. The Linguistic Review 14. 97-138.
Alderete, John & Alexei Kochetov. 2009. Japanese mimetic palatalization revisited: Implications for conflicting directionality. Phonology 26. 369-388.
Amayo, Moses A. 1976. A generative phonology of Edo (Bini). Ph.D. dissertation, University of Ibadan.
Anderson, Stephen C. (ed.) 1991. Tone in five languages of Cameroon. Dallas: Summer Institute of Linguistics & University of Texas, Arlington.
Archangeli, Diana. 1984. Underspecification in Yawelmani phonology and morphology. Ph.D. dissertation, MIT.
Archangeli, Diana. 1991. Syllabification and prosodic templates in Yawelmani. Natural Language and Linguistic Theory 9. 231-283.
Archangeli, Diana & Douglas Pulleyblank. 1994. Grounded phonology. Cambridge, MA: MIT Press.
Arnott, David W. 1970. The nominal and verbal systems of Fula. Oxford: Oxford University Press.
Bendor-Samuel, John T. 1960. Some problems of segmentation in the phonological analysis of Terena. Word 16. 348-355.
Bendor-Samuel, John T. 1966. Some prosodic features of Terena. In C. E. Bazell, J. C. Catford, M. A. K. Halliday & R. H. Robins (eds.) In memory of J. R. Firth, 30-39. London: Longman.
Clements, G. N. 1985. The geometry of phonological features. Phonology Yearbook 2. 225-252.
Clements, G. N. & John A. Goldsmith (eds.) 1984. Autosegmental studies in Bantu tone. Dordrecht: Foris.
Clements, G. N. & Elizabeth Hume. 1995. The internal organization of speech sounds. In John A. Goldsmith (ed.) The handbook of phonological theory, 245-306. Cambridge, MA & Oxford: Blackwell.
Cole, Jennifer & Charles W. Kisseberth. 1994. An optimal domains theory of vowel harmony. Studies in the Linguistic Sciences 24. 101-114.
Crazzolara, J. P. 1933. Outlines of Nuer grammar. Vienna: Verlag der Internationalen Zeitschrift "Anthropos."
Dell, François. 1980. Generative phonology. Cambridge: Cambridge University Press.
Donwa, Shirley O. 1982. The sound system of Isoko. Ph.D. dissertation, University of Ibadan.
Egbokhare, Francis. 1990. A phonology of Emai. Ph.D. dissertation, University of Ibadan.
Elimelech, Baruch. 1976. A tonal grammar of Etsako. UCLA Working Papers in Phonetics 35.
Elugbe, Benjamin O. 1989. Comparative Edoid: Phonology and lexicon. Port Harcourt: University of Port Harcourt Press.
Ettlinger, Marc. 2003. Aspect in Mafa: A case of featural affixation. Unpublished ms., University of California, Berkeley.
Ettlinger, Marc. 2004. Aspect in Mafa: An intriguing case of featural affixation. Papers from the Annual Regional Meeting, Chicago Linguistic Society 40. 73-86.
Firth, J. R. 1948. Sounds and prosodies. Transactions of the Philological Society. 127-152.
Frank, Wright J. 1999. Nuer morphology. M.A. thesis, State University of New York, Buffalo.
Gerfen, Chip. 1999. Phonology and phonetics in Coatzospan Mixtec. Dordrecht: Kluwer.
Hamano, Shoko. 1986. The sound-symbolic system of Japanese. Ph.D. dissertation, University of Florida, Gainesville.
Hendricks, Sean. 1989. Palatalization and labialization as morphemes in Chaha. Unpublished ms., Yale University.
Horwood, Graham. 2004. Relational faithfulness and position of exponence in Optimality Theory. Ph.D. dissertation, Rutgers University.
Hulst, Harry van der & Keith L. Snider (eds.) 1993. The phonology of tone. Berlin & New York: Mouton de Gruyter.
Idsardi, William J. 1992. The computation of prosody. Ph.D. dissertation, MIT.
Itô, Junko. 1989. A prosodic theory of epenthesis. Natural Language and Linguistic Theory 7. 217-259.
Itô, Junko & Armin Mester. 1993. Licensed segments and safe paths. Canadian Journal of Linguistics 38. 197-213.
Johnson, C. Douglas. 1975. Phonological channels in Chaha. Afroasiatic Linguistics 2. 1-13.
Jungraithmayr, Hermann. 1990. Lexique mokilko. Berlin: Dietrich Reimer.
Kirchner, Robert. 1993. Turkish vowel disharmony in Optimality Theory. Paper presented at the Rutgers Optimality Workshop 1, Rutgers University.
Kosseke, Dominique & Jérôme Sitamon. 1993. Aka field notes. Unpublished ms., Summer Institute of Linguistics.
Kuipers, Aert. 1974. The Shuswap language: Grammar, texts, dictionary. The Hague & Paris: Mouton.
Kurisu, Kazutaka. 2001. The phonology of morpheme realization. Ph.D. dissertation, University of California, Santa Cruz.
Leben, William R. 1973. Suprasegmental phonology. Ph.D. dissertation, MIT.
Lieber, Rochelle. 1984. Consonant gradation in Fula: An autosegmental approach. In Mark Aronoff & Richard T. Oehrle (eds.) Language sound structure, 329-346. Cambridge, MA: MIT Press.
Lieber, Rochelle. 1987. An integrated theory of autosegmental processes. Albany: SUNY Press.
Lombardi, Linda. 1998. Evidence for MaxFeature constraints from Japanese. University of Maryland Working Papers in Linguistics 7 (ROA-247).
McCarthy, John J. 1983. Consonantal morphology in the Chaha verb. Proceedings of the West Coast Conference on Formal Linguistics 2. 176-188.
McCarthy, John J. 1986. OCP effects: Gemination and antigemination. Linguistic Inquiry 17. 207-263.
McCarthy, John J. & Alan Prince. 1990. Foot and word in prosodic morphology: The Arabic broken plural. Natural Language and Linguistic Theory 8. 209-283.
McCarthy, John J. & Alan Prince. 1993a. Prosodic morphology I: Constraint interaction and satisfaction. Unpublished ms., University of Massachusetts, Amherst & Rutgers University.
McCarthy, John J. & Alan Prince. 1993b. Generalized alignment. Yearbook of Morphology 1993. 79-153.
Mc Laughlin, Fiona. 2000. Consonant mutation and reduplication in Seereer Siin. Phonology 17. 333-363.
Mc Laughlin, Fiona. 2005. Reduplication and consonant mutation in the Northern Atlantic languages. In Bernhard Hurch (ed.) Studies on reduplication, 111-133. Berlin & New York: Mouton de Gruyter.
Mester, Armin & Junko Itô. 1989. Feature predictability and underspecification: Palatal prosody in Japanese mimetics. Language 65. 258-293.
Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Pfau, Ronald. 2000. The grammar of headshake: Sentential negation in German Sign Language. Unpublished ms., University of Amsterdam.
Piggott, Glyne L. 2000. Against featural alignment. Journal of Linguistics 36. 85-129.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Pulleyblank, Douglas. 1986. Tone in Lexical Phonology. Dordrecht: Reidel.
Pulleyblank, Douglas. 1989. Patterns of feature co-occurrence: The case of nasality. In S. Lee Fulmer, Masahide Ishihara & Wendy Wiswall (eds.) Proceedings of the Arizona Phonology Conference 2, 98-115. Tucson: University of Arizona.
Pulleyblank, Douglas. 1993. Vowel harmony and Optimality Theory. Actas do Workshop Sobre Fonologia, University of Coimbra. 1-18.
Pulleyblank, Douglas. 1996. Neutral vowels in Optimality Theory: A comparison of Yoruba and Wolof. Canadian Journal of Linguistics 41. 295-347.
Rice, Keren. 1987. The function of structure preservation: Derived environments. Papers from the Annual Meeting of the North East Linguistic Society 17. 501-519.
Roberts, James S. 1994. Nontonal features as grammatical morphemes. Work Papers of the Summer Institute of Linguistics, University of North Dakota 38. 87-99.
Rose, Sharon. 1994. Palatalization, underspecification, and plane conflation in Chaha. Proceedings of the West Coast Conference on Formal Linguistics 12. 101-116.
Rose, Sharon. 2007. Chaha (Gurage) morphology. In A. S. Kaye (ed.) Phonologies of Asia and Africa (including the Caucasus), 399-424. Winona Lake, IN: Eisenbrauns.
Sagey, Elizabeth. 1986. The representation of features and relations in nonlinear phonology. Ph.D. dissertation, MIT.
Selkirk, Elisabeth. 1986. On derived domains in sentence phonology. Phonology Yearbook 3. 371-405.
Steriade, Donca. 1993. Closure, release, and nasal contours. In Marie K. Huffman & Rena A. Krakow (eds.) Nasals, nasalization, and the velum, 401-470. Orlando: Academic Press.
Wonderly, William L. 1951. Zoque II: Phonemes and morphophonemes. International Journal of American Linguistics 17. 105-123.
Zoll, Cheryl. 1996. Parsing below the segment in a constraint-based framework. Ph.D. dissertation, University of California, Berkeley.
Zoll, Cheryl. 1998. Positional asymmetries and licensing. Unpublished ms., MIT (ROA-282).

83 Paradigms

Adam Albright

1 Introduction

Morphological paradigms are a mainstay of traditional descriptions of inflectional systems and of diachronic change. Only in recent years, however, have paradigms played a formal role in the grammatical analysis of phonological systems, in the form of correspondence constraints and contrast constraints on paradigmatically related forms. In this chapter I review some of the evidence that has been taken to indicate that paradigm structure plays an active role in synchronic phonology, and discuss some of the grammatical mechanisms that have been proposed to capture such effects, focusing especially on work within Optimality Theory (OT: Prince and Smolensky 2004).

It must be acknowledged at the outset that one cannot meaningfully discuss phonological paradigm effects without a precise definition of "paradigm." I begin by adopting a very general and widely assumed definition: a paradigm is the exhaustive set of inflected forms that share a single root or stem – e.g. inflected case and number forms of a noun, or person, number or tense/aspect/mood forms of a verb. In some cases, phonology treats all inflected forms of a root alike, and this broad definition suffices. In many cases, however, it is necessary to restrict the domain of discussion to a specific subset of inflected forms that constitute a local subparadigm – e.g. the set of verb forms that share present tense subjunctive inflection (the "present subjunctive paradigm"), or the set of noun forms that share dual inflection (the "dual paradigm"). For the purpose of illustrating paradigm effects and their analysis, I will simply stipulate the scope of the effect as necessary; in §4, I return to the issue of the formal definition of paradigms.

An important related issue is whether phonological paradigm effects are conditioned by the same type of paradigm structure that is posited by paradigm-based theories of morphology, such as Word-and-Paradigm Morphology (Hockett 1954; Matthews 1965; Zwicky 1985; Anderson 1986) or Paradigm Function Morphology (Stump 2001). On the face of it, paradigm-based theories of morphology seem especially well suited for capturing phonological paradigm effects, since they provide a representational unit (the "paradigm") that can be used to condition morphological and phonological distributions. The use of phonological paradigm constraints does not presuppose the existence of paradigms as morphological representations, however. In fact, the phonological constraints described in §3 make very limited use of morphological structure: if two forms share a root and all derivational affixes, then they are treated by the phonology as members of the same inflectional paradigm. Although arguments against morphological and phonological uses of paradigms are often presented side by side (e.g. Bobaljik 2008), it appears that the question of whether morphology is paradigm-based may be orthogonal to the question of whether phonology imposes constraints on relations between inflectionally related forms. The current chapter focuses on the role that paradigmatic relations play in conditioning phonological distributions, and leaves aside arguments for and against paradigms as morphological representations. A long-term goal, beyond the scope of this chapter, would be to establish whether morphological and phonological distributions rely on a common set of representational units, or whether phonological constraints make only indirect reference to morphological representations.

In this chapter, I consider a variety of effects that demonstrate a role for paradigms in phonological grammar. Before turning to the analysis of synchronic paradigm effects, however, it is useful to review briefly the type of data that have traditionally been taken as evidence that phonological processes are sensitive to paradigm structure, drawn from the domain of language change.

1.1 Paradigms as a factor in conditioning diachronic change

It is often convenient to present complex inflectional systems in tabular form, with each cell in the paradigm representing a particular combination of morphosyntactic features. In some theories of morphology, paradigms are taken to be not merely a matter of descriptive convenience, but also a grammatically relevant representation of the distribution of inflectional markers (Hockett 1954; Matthews 1965; Anderson 1986; Wurzel 1989; Stump 2001; Blevins 2003; Ackerman et al. 2009). According to proponents of paradigm-based models, representations in terms of paradigms permit concise or insightful statements about morphological distribution that would be difficult to capture if each morpheme were represented individually. For instance, patterns of syncretism and alternation are often shared across multiple inflection classes, suggesting that a paradigmatic template is in force (Williams 1994; Baerman et al. 2005; Maiden 2005; though see Bobaljik 2002 and Harley 2008 for alternative approaches). An example is seen in Modern German, which has stem vowel alternations in the 2nd and 3rd singular present tense forms of many verbs. These alternations can be traced back to historically unrelated raising processes, but the conditioning context (originally, high vowels in the suffix) is no longer present in the modern language, nor can the various changes found in the 2nd and 3rd singular be straightforwardly unified as the same featural change. Thus it appears that the broadest generalization one can make is that the 2nd and 3rd singular differ from the rest of the paradigm in having a raised and/or fronted vowel. This relation cannot be captured by a single (morpho)phonological rule, but can be characterized by a template in which the 2nd and 3rd singular forms differ from the remaining forms.

(1) Vowel alternations in German present tense indicative verb forms

          'travel'    'run'     'give'    'see'
   inf    fa(úHn      laÁfHn    ge(bHn    ze(Hn
   1sg    fa(ú(H)     laÁf(H)   ge(b(H)   ze((H)
   2sg    fe(úst      lD>fst    g>+st     zi(st
   3sg    fe(út       lD>ft     g>+t      zi(t
   1pl    fa(úHn      laÁfHn    ge(bHn    ze(Hn
   2pl    fa(út       laÁft     ge(+t     ze(t
   3pl    fa(úHn      laÁfHn    ge(bHn    ze(Hn

A compelling source of evidence for a paradigmatic effect comes from language change: in fact, verbs like [ge(bHn] ‘give’ and [ze(Hn] ‘see’ originally showed a different distribution, in which the 1st singular had the same vowel as the 2nd and 3rd singular: [gibH], [gi+st], [gi+t]. The change of 1st singular [gibH]→ [ge(bH] to match the vowel of the plural and infinitive appears to have been motivated by the influence of verbs like [fa(úHn] ‘travel’, in which raising has always been limited to the 2nd and 3rd singular (Paul et al. 1989: 243).1 Put differently, a prevalent pattern of alternation within the paradigm (2nd and 3rd singular vs. others) was generalized to verbs with a similar but less robustly attested pattern (singular vs. plural). Such changes are often taken as evidence that speakers evaluate the relations between forms within paradigms, and that language change may enforce or regularize such relations.

1.2 Paradigm uniformity in language change

One common way in which paradigmatic relations are strengthened is by loss (or "leveling") of alternations among inflectionally related forms. A widely discussed example comes from immediately pre-classical Latin, in which [s] ~ [r] alternations created by rhotacism of intervocalic stridents were leveled to invariant [r] (Hock 1991: 179-190; Barr 1994; Kenstowicz 1996; Hale et al. 1997; Kiparsky 1998; Baldi 1999: 323; Albright 2005).

(2) Leveling of rhotacism alternations in Latin: honor

             Stage 1     Stage 2
   nom sg    hono(s      honor
   gen sg    hono(ris    hono(ris
   dat sg    hono(ri(    hono(ri(
   acc sg    hono(rem    hono(rem
   abl sg    hono(re     hono(re

1 An alternative view, put forward by Joesten (1931) and discussed by Dammers et al. (1988: 449) and by Hartweg and Wegera (1989: 129), holds that the distribution in (1) is etymologically expected for all of these verbs, including geben, and that the use of 1st singular gibe in literary Middle High German represents a partial leveling of the vowel from the 2nd and 3rd singular to the 1st singular in certain dialects. Either way, the difference between dialects with 1st singular gibe and those with gebe requires some form of analogical change within the paradigm to match a pattern found in other verbs.


Crucially, the change of [s] to [r] in [hono(s] → [honor] took place only within the inflectional paradigm, while derivationally related forms such as [hones-tus] 'honorable', [hones-te(] 'honorably', and [hones-ta(s] 'honorableness' remained unchanged. Furthermore, [s]-final nouns that had no inflected forms with [r], such as the indeclinable noun [nefa(s] 'sacrilege' (related to the derived adjective [nefa(r-ius] 'wicked'), also remained unchanged, as did [s]-final words in other parts of speech, such as the adverb [nimis] 'excessively'. These facts show that the change of [s] to [r] was not part of a broader sound change extending rhotacism to the word-final position. Furthermore, if we assume that forms like [hones-tus] remained synchronically linked to the related noun [honor], the change cannot be viewed as a re-analysis to /honor/ by learners missing the presence of [s] ~ [r] alternations (Vincent 1974). Such examples are often taken as evidence that the domain of analogical leveling is the inflectional paradigm.

Another common feature of paradigm leveling is that it may affect only a subset of the inflectionally related forms. For example, several classes of Middle High German (MHG) verbs showed vowel alternations between the singular and plural, with the infinitive always matching the vowel of the plural, and the past participle frequently showing yet a different vowel. In Yiddish, these alternations have been eliminated completely in some verbs (3a), while they have been leveled only among present tense forms for others (3b). We may plausibly suppose that the infinitive and finite present tense forms of /visHn/ 'know' remain synchronically related, but the identity of the 1st/3rd plural and infinitive has been abandoned in favor of an invariant verb stem within the subparadigm of present tense forms (Albright 2010).

(3) Full and partial leveling in Yiddish

   a. 'need'
                  MHG        Yiddish
      inf         dürf-en    darf-è
      1sg         darf       darf
      2sg         darf-t     darf-st
      3sg         darf       darf-(t)
      1pl         dürf-en    darf-è
      2pl         dürf-t     darf-t
      3pl         dürf-en    darf-è
      past prt    ge-dorft   ge-darft

   b. 'know'
                  MHG        Yiddish
      inf         wiÚÚ-en    vis-è
      1sg         weiÚ       ve>s
      2sg         weis-t     ve>s-t
      3sg         weiÚ       ve>s
      1pl         wiÚÚ-en    ve>s-è
      2pl         wis-t      ve>s-t
      3pl         wiÚÚ-en    ve>s-è
      past prt    ge-wus-t   ge-vus-t

Language change provides numerous examples of paradigmatic conditioning. However, we frequently cannot be certain whether the changes reflect a synchronic preference for non-alternating paradigms, or whether they represent lexical re-analyses on the part of learners. Such changes are difficult to interpret, because we cannot be sure that Latin speakers synchronically derived forms like [hono(s] and [hones-tus] (without rhotacism) from a single stem /hono(s/. If [hono(s] and [honestus] were derived from different stems, we could interpret the change as a re-analysis of the nominal stem from /hono(s/ to /honor/ (perhaps motivated by the preponderance of rhotacized [hono(r-] forms), while the adjectival stem /hones/ remained intact. Similarly, although there is no reason to think that MHG speakers treated the infinitive and plural stems wiÚÚ- and wiÚÚ- as distinct, if for some reason they did encode them as separate stems, then we could simply say that Yiddish lost the plural stem while retaining the infinitive stem. In order to show that a phonological process misapplies within inflectional paradigms, we must demonstrate that the forms in question are synchronically derived from the same stem, and that the process in question continues to apply except when trumped by identity among paradigmatically related forms. In the following section, we review several cases with exactly this flavor: a phonological process continues to apply straightforwardly and productively in derived forms, but is overridden just in case greater identity between inflected forms would result.

2 Synchronic paradigm effects

The literature on synchronic paradigm effects has identified two opposing ways in which phonological relations among inflected words may be regulated: on the one hand, there is a tendency to demand identity among inflected forms, so that elements of shared meaning (the stem, shared inflectional markers) have a consistent phonological form throughout the paradigm (uniformity). At the same time, there is a tendency to avoid total identity between forms that are morphologically distinct (anti-homophony) (see also chapter 103: phonological sensitivity to morphological structure). In both cases, it is claimed that the requirement of identity or distinctness may be enforced within the domain of the paradigm, while identity between derivationally related forms, or anti-homophony among unrelated forms, is not enforced. I consider these two tendencies in turn.

Paradigmatic conditions in phonology have been explored extensively in the past decade, and it would not be possible in a chapter of this length to do justice to the range and intricacy of empirical cases that have been brought to bear on the issue. For additional examples, the reader is referred to the collections of papers in Hermans and van Oostendorp (1999), Downing et al. (2005b), and Bachrach and Nevins (2008).

Frequently, affixation creates or destroys the context for a phonological process to apply, leading to the possibility of alternations. This section discusses cases in which regularly expected alternations are avoided, and paradigmatic identity is seen instead. As is standard in the literature on identity effects in other domains, such as reduplication (McCarthy and Prince 1995), we may distinguish between cases where identity is achieved by applying a process outside its regular context (overapplication), by failing to apply a process within its regular context (underapplication), or by applying a process differently from how it would normally apply to a given phonological string (misapplication). It is also important to bear in mind that these phenomena are not substantively different from those that arise in cases of cyclicity in derived forms; therefore, much of the discussion of parallel facts in chapter 85: cyclicity is relevant here as well.

2.1 Overapplication

Frequently, phonological processes apply outside their regular context if doing so can achieve greater identity between related forms. For example, Hayes (2000) discusses a process in American English (and other varieties) in which coda /l/ becomes dark and creates a diphthongized allophone of the preceding vowel (chapter 31: lateral consonants): /fi(l/ → [fiHû] feel, /fa>l/ → [fa>Hû] file, /bD>l/ → [bD>Hû] boil. This process does not normally occur intervocalically: [si(l>I]/*[siHû>I] ceiling, [pa>lHt]/*[pa>HûHt] pilot, [tD>lHt]/*[tD>HûHt] toilet. However, at morpheme boundaries, coda velarization may apply. Hayes presents survey data documenting a variable or gradient pattern in which /l/ may be realized as [û] before morpheme boundaries: mai[û]er, (touchy-)fee[û]y. The pattern that Hayes describes for his own speech involves overapplication of diphthongization and coda velarization before both inflectional and derivational affixes: [hiHû>I] healing, [meHûÌ] mailer. By contrast, in at least the present author's idiolect, coda velarization always overapplies, but diphthongization overapplies (optionally) only in inflected forms. The difference is most clearly heard (and intuited) after underlying diphthongs, where "diphthongization" yields so-called sesquisyllabic outcomes such as [bD>Hû] boil. Crucially, the overapplication of diphthongization seems to occur only in inflected forms: boiling [bD>Hû>I] patiently vs. boiler [bD>ûÌ], *[bD>HûÌ].

Overapplication of pre-lateral diphthongization in one idiolect of American English base [bD>Hû]

boil

[spD>Hû]

spoil

[D>Hû]

oil

[sma>Hû]

smile

[va>Hû]

vile

[ma>Hû]

mile

[na>Hû]

Nile

[kHmpa>Hû] compile [sta>Hû]

style

inflected [bD>û>I], [bD>Hû>I] [spD>û>I], [spD>Hû>I]

[D>û>I], [D>Hû>I] [sma>û>I], [sma>Hû>I] [va>ûHst], [va>HûHst]

derived [bD>ûÌ], *[bD>HûÌ] [spD>ûÌ], *[spD>HûÌ] [spD>ûH–], *[spD>HûH–] [D>ûÌ], *[D>HûÌ]

[ma>ûH–], *[ma>HûH–] [na>ûa7>k], *[na>Hûa7>k] [kHmpa>û>I], [kHmpa>ûÌ], [kHmpa>Hû>I] *[kHmpa>HûÌ] [sta>û>I], [sta>û>st], [sta>Hû>I] *[sta>Hû>st]

boiler (‘water-heater’) spoiler (of plot) spoilage (Edmonton) Oiler(s)

mileage Nilotic compiler stylist 2

A similar example comes from Yiddish, which generally avoids [rn] and [rm] codas by schwa epenthesis (Albright 2010): [alarəm] 'alarm', [ʃturəm] 'storm', [ʃirəm] 'umbrella', [turəm] 'tower' (chapter 26: schwa; chapter 67: vowel epenthesis). When an /rm/ cluster is intervocalic, epenthesis does not normally occur: [alarm-ir-n] 'to alarm', [ʃturm-ɪʃ] 'stormy', [ʃirm-ə] 'screen', [turm-ə] 'prison'. This pattern is disrupted in verbal paradigms, however: (5a) shows that, if epenthesis applies somewhere within the paradigm, it systematically overapplies in the entire paradigm. This can be compared with (5b), which shows that, if epenthesis is not conditioned anywhere within the paradigm, /rm/ surfaces uniformly faithfully. Overapplication in the verb [ʃturəm-ən] cannot be attributed to the influence of the related noun [ʃturəm], since the verb [alarm-ir-ən] does not show overapplication on the basis of the related noun [alarəm]. In addition, as the examples above show, epenthesis does not overapply in derived forms; additional examples include [varm-əs] 'warm food' and [refɔrm-ir-n] 'to reform'.

(5) Overapplication of epenthesis in Yiddish

     a. 'storm'     Expected    ≠    Actual
        inf         ʃturm-ən         ʃturəm-ən
        1sg         ʃturəm           ʃturəm
        2sg         ʃturəm-st        ʃturəm-st
        3sg         ʃturəm-t         ʃturəm-t
        1pl         ʃturm-ən         ʃturəm-ən
        2pl         ʃturəm-t         ʃturəm-t
        3pl         ʃturm-ən         ʃturəm-ən

     b. 'alarm'     Expected    =    Actual
        inf         alarmir-ən       alarmir-ən
        1sg         alarmir          alarmir
        2sg         alarmir-st       alarmir-st
        3sg         alarmir-t        alarmir-t
        1pl         alarmir-ən       alarmir-ən
        2pl         alarmir-t        alarmir-t
        3pl         alarmir-ən       alarmir-ən
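To make the two columns of (5a) concrete, here is a minimal sketch in Python (my own illustration, not part of Albright's analysis), using the ASCII stand-ins "S" for [ʃ] and "@" for schwa. `normal_paradigm` applies epenthesis cell by cell, yielding the alternating "Expected" column; `uniform_paradigm` lets epenthesis triggered in any cell carry over to the whole paradigm, yielding the invariant "Actual" column:

```python
# Sketch of Yiddish schwa epenthesis in verb paradigms, cf. (5).
# "S" stands in for IPA esh, "@" for schwa; suffixes are simplified.

VOWELS = set("aeiou@")
SUFFIXES = {"inf": "@n", "1sg": "", "2sg": "st", "3sg": "t",
            "1pl": "@n", "2pl": "t", "3pl": "@n"}

def needs_epenthesis(stem, suffix):
    """/rm/ is a coda cluster unless the suffix supplies a following vowel."""
    vowel_follows = bool(suffix) and suffix[0] in VOWELS
    return stem.endswith("rm") and not vowel_follows

def normal_paradigm(stem):
    """Cell-by-cell application: the 'Expected' column of (5a)."""
    return {cell: (stem[:-2] + "r@m" if needs_epenthesis(stem, suf) else stem) + suf
            for cell, suf in SUFFIXES.items()}

def uniform_paradigm(stem):
    """Overapplication: if any cell triggers epenthesis, the epenthesized
    stem is used throughout -- the 'Actual' column of (5a)."""
    if any(needs_epenthesis(stem, suf) for suf in SUFFIXES.values()):
        stem = stem[:-2] + "r@m"
    return {cell: stem + suf for cell, suf in SUFFIXES.items()}

print(normal_paradigm("Sturm"))     # alternating: Stur@m (1sg) but Sturm@n (1pl)
print(uniform_paradigm("Sturm"))    # invariant: Stur@m, Stur@m@n, ...
print(uniform_paradigm("alarmir"))  # no trigger anywhere: alarmir, alarmir@n, ...
```

Since /alarmir/ never ends in /rm/, the two functions return the same paradigm for it, matching the faithful pattern in (5b).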

In both the English pre-lateral diphthongization and Yiddish epenthesis cases, phonology is blind to inflectional affixes – that is, a coda process applies as if the stem-final consonant is a coda, even if the inflectional affix should bleed epenthesis. It is not always the case that overapplication extends a process from an inner constituent to an outer constituent in this way, however. The Latin [honor] analogy discussed in §1 involved the overapplication of rhotacism, which should normally have occurred only before vowel-initial suffixes (e.g. /honoːs-is/ → [honoːris]), but came to apply before nominative singular /s/ (or /Ø/) as well: /honoːs-s/ → [honoːr]. Another example of application of a process triggered by an inflectional affix is found in certain dialects of Korean (Han 2002; Kang 2003). In Korean, coronal obstruents regularly palatalize (chapter 71: palatalization) before a suffix beginning with high front [i]: /os-i/ → [oʃi] 'clothing-nom', /patʰ-i/ → [patʃʰi] 'field-nom'.³ Palatalization does not normally occur before mid front vowels ([patʰ-e] 'field-loc') or high central vowels ([patʰ-ɨl] 'field-acc'). Han (2002), citing data from Choi (1998), observes that in certain dialects of North Gyeongsang Korean (namely, in Sangju, Geumneung, Cheongdo, and Mungyeong), palatalization overapplies:

(6) Overapplication of palatalization in North Gyeongsang Korean

     /patʰ/ 'field'     Conservative    North Gyeongsang
     unmarked           pat             pat
     nom                patʃʰ-i         patʃʰ-i
     acc                patʰ-ɨl         patʃʰ-ɨl
     dat/loc            patʰ-e          patʃʰ-e

³ There is some variability regarding the place of articulation of affricates, ranging from alveolar to post-alveolar or perhaps even palatal; see Cho (1967); Kim-Renaud (1974); Ahn (1998); Kim (1999); Sohn (1999).

In North Gyeongsang Korean, as in other dialects, the unmarked form obeys coda restrictions that neutralize continuancy and laryngeal contrasts (chapter 69: final devoicing and final laryngeal neutralization). The change in North Gyeongsang consists of extending palatalization within noun paradigms, so that it overapplies before mid and central vowels. Unlike Latin or Yiddish, this case involves the overapplication of a process that is triggered by an inflectional affix. As with Latin, we must ask whether the change could be interpreted instead as a change in the context of palatalization, or as a lexical re-analysis (chapter 1: underlying representations). It is easy to demonstrate that the change is not a general expansion of the context for palatalization to include mid and central vowels. Han (2002) and Kang (2005) point out that, within verbal and adjectival paradigms, palatalization never affects /tʰ-ɨ/ sequences: /katʰ-ɨn/ → [katʰɨn], *[katʃʰɨn] 'same-modifier'. Thus, there is no evidence that palatalization in the accusative or locative reflects a broader change in the palatalization process itself.

It is also important to ask whether the change is simply a re-analysis of the lexical entry to /patʃʰ/ – indeed, this is precisely what Kim (2005) claims. There are several reasons to think that a purely lexical account is not sufficient, however. First, assuming that the leveling in (6) has affected all /tʰ/-final nouns in the relevant dialects and that speakers would reject new words with [tʃʰ] ~ [tʰ] alternations, we need to account not only for the change to the specific lexical items in question, but also for the knowledge that there could be no lexical items of this type – i.e. a morpheme structure condition. McCarthy (1998) proposes to analyze such knowledge as a paradigm effect: specifically, speakers know that palatalization must apply before -i, and that it must overapply in the remainder of the paradigm by virtue of output–output faithfulness; therefore, no morpheme with /tʰ/ could ever surface as such within its inflectional paradigm.

Even more telling, it appears that overapplication of palatalization may be observed even in cases of partial leveling, where speakers continue to treat the noun as /tʰ/-final in other contexts. Han (2002) and Kang (2005) discuss a related pattern found in the dialects of Gyeonggi, parts of Chungcheong, and North Jeolla, in which palatalization overapplies only before the accusative marker /-ɨl/, while the expected [tʰ] surfaces faithfully before locative /-e/ and directive /-ɨlo/: unmarked [pat̚], nominative [patʃʰ-i], accusative [patʃʰ-ɨl], directive [patʰ-ɨɾo], locative [patʰ-e]. For these dialects, the use of [tʃʰ] in the accusative cannot be straightforwardly attributed to a re-analysis of the final consonant of the noun stem, since it surfaces as [tʰ] elsewhere. Instead, Han (2002) claims that we are observing a more limited identity effect, in which the accusative comes to match the nominative, while the locative and directive remain identical to each other and distinct from the other case forms. Kang (2005) attributes the identity of the locative and directive to the fact that they share locational meanings.

The Korean example shows that overapplication may extend processes that apply in inflected forms (i.e. overapplication is not "blind" to inflectional material). It also illustrates some of the difficulties in establishing a synchronic paradigm effect as opposed to a diachronic re-analysis. Many cases of overapplication that have been documented in the literature are potentially reinterpretable as lexical re-analyses or broadening of phonological processes.
In order to demonstrate that a given case truly involves synchronic paradigmatically motivated overapplication, we must be able to show that the process continues to apply as expected elsewhere in the language, that the relevant lexical items have not been re-analyzed, and that the process overapplies just in those cases where paradigmatic identity would result. Similar considerations hold for identity through underapplication and misapplication, to which we now turn.

2.2 Underapplication

Just as in cases of underapplication of phonological processes to maintain reduplicative identity (McCarthy and Prince 1995; chapter 100: reduplication; chapter 119: reduplication in sanskrit), it appears that phonological processes may be suppressed within inflectional paradigms in order to achieve identity. An example can be seen in the behavior of Yiddish [rm] clusters. As shown in (5) above, schwa epenthesis overapplies in Yiddish verb paradigms, just in case epenthesis is expected in a suffixed form such as the 1st or 3rd singular. Interestingly, the converse pattern holds in noun paradigms. As shown in (7), noun plurals are sometimes formed by adding a consonant (/-s/), and sometimes by adding a vowel-initial suffix (usually /-ən/). In contrast to verbal paradigms, schwa epenthesis optionally underapplies in noun paradigms if it is not licensed in the plural.

(7) Underapplication of epenthesis in Yiddish nouns

     a.  singular                   plural
         ʃturəm ~ *ʃturm            ʃturəms       'storm'
         ʃirəm ~ *ʃirm              ʃirəms        'umbrella'
         ʃvɔrəm ~ *ʃvɔrm            ʃvɔrəms       'swarm'
         vɔrəm ~ *vɔrm              vɔrəms        'worm'
         ɔrəm ~ *ɔrm                ɔrəms         'arm'
         turəm ~ *turm              turəms        'tower'
         alarəm ~ *alarm            alarəms       'alarm'

     b.  fɔrəm ~ fɔrm ?             fɔrmən        'form'
         unifɔrəm ~ unifɔrm         unifɔrmən     'uniform'
         *farəm ~ farm              farmən        'farm'

A similar example of parallel over- and underapplication comes from Polish diminutives, as discussed by Kraska-Szlenk (1995: 108–114) and Kenstowicz (1996). Many Polish nouns show an alternation between [u] and [ɔ] before word-final voiced non-nasal consonants, with [u] occurring in closed syllables and [ɔ] occurring in open syllables. The morphological context for raising differs depending on the gender of the noun, since different affixes create closed syllables in masculine vs. feminine nouns:

(8) Regular application of o-raising in Polish nouns (Kraska-Szlenk 1995: 108)

     a. /dɔw/ 'ditch (masc)'
            singular    plural
     nom    duw         dɔwɨ
     gen    dɔwu        dɔwuf
     dat    dɔwovi      dɔwom
     acc    duw         dɔwɨ
     instr  dɔwem       dɔwami
     loc    dɔle        dɔwax

     b. /krɔv/ 'cow (fem)'
            singular    plural
     nom    krɔva       krɔvɨ
     gen    krɔvɨ       kruf
     dat    krɔvje      krɔvom
     acc    krɔvɛ̃       krɔvɨ
     instr  krɔvɔ̃       krɔvami
     loc    krɔvje      krɔvax

In diminutive paradigms, raising alternations are suspended. Instead, the vowel that is expected in the nominative singular occurs everywhere. The diminutive suffix is (on the surface) [-ek] word-finally and [-k-] before a vowel, due to the presence of a fleeting yer vowel, indicated in (9) as /E/ (chapter 122: slavic yers). As a result, diminutives of masculine nouns end in [-ek-Ø] in the nominative singular. Since the suffix begins with a vowel, raising does not apply in the nominative singular, and it correspondingly underapplies in the remainder of the paradigm (9a). Diminutives of feminine nouns, on the other hand, end in [-k-a]. This suffix does condition raising, and raising overapplies in the remainder of the paradigm (9b).

(9) Under- and overapplication of raising in Polish diminutives

     a. /dɔw-Ek/ 'ditch (masc)'
            singular    plural
     nom    dɔwek       dɔwki
     gen    dɔwka       dɔwkuf
     dat    dɔwkovi     dɔwkom
     acc    dɔwek       dɔwki
     instr  dɔwkjem     dɔwkami
     loc    dɔwku       dɔwkax

     b. /krɔv-Ek-a/ 'cow (fem)'
            singular    plural
     nom    krufka      krufki
     gen    krufki      kruvek
     dat    kruftse     krufkom
     acc    krufkɛ̃      krufki
     instr  krufkɔ̃      krufkami
     loc    kruftse     krufkax

As McCarthy (2005) points out (following Buckley 2001; Sanders 2003), it is not surprising that the distribution of o-raising has been disrupted in modern Polish: raising interacts opaquely with final devoicing, it is conditioned by an unusual set of following segments (non-nasal voiced consonants), and it is not generally applied to loanwords or nonce words. Nonetheless, the fact that raising applies normally in the non-diminutive forms of /dɔw/ and /krɔv/ (8) shows that the process is learned in some (perhaps lexically or morphologically restricted) form, and that it continues to apply in the expected contexts. Furthermore, speakers have clearly learned that these particular stems undergo raising, even in the nominative singular of the diminutive. Kraska-Szlenk (1995) and Kenstowicz (1996) argue that the lack of raising alternations in the paradigms in (9) is best attributed to a paradigm identity constraint, which holds specifically within diminutive paradigms.

2.3 Misapplication

In some cases, a regular phonological process applies in an unexpected fashion (misapplication). Harris (1973) argues that certain exceptions to regular stress placement in Spanish can be explained as identity effects, and that recognizing paradigmatic identity as a grammatical force could avoid the need to complicate the statement of stress rules. An example comes from the imperfect indicative forms of Spanish (discussed also by Burzio 2005: 65 and Oltra-Massuet and Arregi 2005):

(10) Avoidance of stress alternations in the Spanish imperfect: [terminar] 'finish'

                        expected          actual
     1sg imp indic      termiˈnaba        termiˈnaba
     2sg imp indic      termiˈnabas       termiˈnabas
     3sg imp indic      termiˈnaba        termiˈnaba
     1pl imp indic      terminaˈbamos     termiˈnabamos
     2pl imp indic      terminaˈbais      termiˈnabais
     3pl imp indic      termiˈnaban       termiˈnaban

The stress pattern as it would have been inherited from Latin is shown in the left column of (10); this expected pattern also reflects a general pattern of penultimate stress that still holds in modern Spanish. The actual stress pattern in [termiˈnabamos] is unusual in Spanish, since words with final closed syllables rarely have antepenultimate stress, the sole exceptions being learned words such as Sócrates and Júpiter, which are felt by speakers to be unusual (Hochberg 1988). This unexpected pattern is immediately explained if we look at the stress pattern of the other cells of the imperfect, where stress is penultimate according to the regular principles of Spanish stress assignment.

Harris (1987) provides another example from certain dialects of Spanish, including Chicano Spanish. As (11a) shows, stress in verbs regularly falls on the penult, both in conservative varieties and in Chicano Spanish. Within the present subjunctive paradigm, however, stress alternations have been eliminated, with stress falling on the antepenult in the 1st plural (11b) (see also Reyes 1974). Once again, a possible explanation for this subversion of the regular stress pattern is that it maintains identity among inflected forms. Note that this effect holds locally within the present subjunctive, but does not affect the present indicative, nor does it affect non-verbal forms such as [ˈtermino] 'end'.

(11) Avoidance of stress alternations in Chicano Spanish subjunctive: [terminar] 'finish'

                        Conservative      Chicano
     a. 1sg indic       terˈmino          terˈmino
        2sg indic       terˈminas         terˈminas
        3sg indic       terˈmina          terˈmina
        1pl indic       termiˈnamos       termiˈnamos
        2pl indic       termiˈnais        termiˈnais
        3pl indic       terˈminan         terˈminan
     b. 1sg subj        terˈmine          terˈmine
        2sg subj        terˈmines         terˈmines
        3sg subj        terˈmine          terˈmine
        1pl subj        termiˈnemos       terˈminemos
        2pl subj        termiˈneis        —
        3pl subj        terˈminen         terˈminen

Kenstowicz (1996) observes a similar effect for certain Russian nouns, in which stress unexpectedly avoids falling on a fleeting yer vowel in the genitive plural (“double retraction”). Like the Spanish cases, this irregular placement of stress serves to avoid stress alternations within the plural paradigm, and may likewise be attributed to a constraint demanding uniform stress.

2.4 Anti-homophony within paradigms

In addition to serving as the domain for identity effects, inflectional paradigms also appear to be the locus of anti-homophony effects, in which phonology applies unexpectedly in order to avoid surface identity of morphologically distinct forms. The scope and treatment of anti-homophony effects are controversial, but we mention here two examples.

The first example comes from Kenstowicz (2005), who discusses the distribution of schwa in medial open syllables in Damascus Arabic (chapter 124: word stress in arabic). Normally in this dialect, unstressed schwas in open syllables are deleted: /səməʕ-ət/ → [ˈsəmʕ-ət] 'heard-3sg fem subj'. This deletion also occurs in forms that have been cliticized with object markers: both /dˤarb-ət-o/ → [ˈdˤarbto] 'hit-3sg fem subj-3sg masc obj' and /ʃaːf-ət-o/ → [ˈʃaːfto] 'saw-3sg fem subj-3sg masc obj' show deletion of the underlying /ə/ in the 3rd singular feminine subject marker. However, this deletion is systematically blocked in cases where the form in question would become identical with the corresponding masculine form. For example, /ʕallam-t-o/ 'taught-3sg masc subj-3sg masc obj' and /ʕallam-ət-o/ 'taught-3sg fem subj-3sg masc obj' are both expected to yield surface [ʕalˈlamto], with syncope of the stressless schwa. Instead, syncope is blocked in the feminine form 'she taught him', and the schwa exceptionally remains and attracts stress: [ʕallaˈməto]. Kenstowicz (2005) attributes this unusual stress pattern, which violates the normal principles of stress placement in Damascus Arabic, to an anti-homophony condition that holds between the 3rd singular masculine and feminine forms. Importantly, this condition does not penalize deletion in cases such as [ˈdˤarbto] 'she hit him' or [ˈʃaːfto] 'she saw him', because for these verbs, the 3rd singular masculine forms happen to follow different vocalic templates: [dˤaˈrab-t-o] 'he hit him', [ˈʃuf-t-o] 'he saw him'.

Hall and Scott (2007) discuss another example, involving underapplication in Swabian German. In this dialect, /s/ becomes [ʃ] before a coronal: /pɔst/ → [pɔʃt] 'mail'. This process underapplies in inflectional paradigms, however: /griəs-t/ → [griəst], *[griəʃt] 'greet-3sg'. Hall and Scott attribute this underapplication to the influence of inflectionally related forms with [s], such as 1st singular [griəs]. In the 2nd singular, however, the inflectional affix is /-ʃ/. Here, underapplication of /s/ → [ʃ] would yield the illegal form [griəsʃ], which would be expected to assimilate and degeminate. The candidate [griəs], which resolves the /sʃ/ cluster in favor of [s], would create paradigmatic identity with 1st singular [griəs], 3rd singular [griəst], etc., but it would be homophonous with the 1st singular. Instead, the actual outcome is [griəʃ], in violation of paradigm uniformity. Hall and Scott argue that this is due to an anti-homophony condition, in which the 2nd singular is required to be distinct from other forms.

In both the Damascus Arabic and the Swabian German examples, the paradigm is the locus of anti-homophony effects, in the sense that homophony between forms that are not paradigmatically related (as in accidentally homophonous verbs) does not trigger unexpected phonology. If this restriction turns out to hold in a broader range of cases, it could serve as an additional source of evidence that paradigms may serve as the domain of grammatical effects.
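The blocking logic in the Damascus Arabic case can be sketched as a simple check. The fragment below is my own toy rendition of Kenstowicz's analysis, not his formalism: "@" stands for schwa, "3" for the pharyngeal (as in Arabizi), and the stem allomorphs are hard-coded rather than derived from templates:

```python
import re

def syncopate(form):
    """Delete an unstressed @ in an open syllable (simplified: an @ flanked
    by single consonants, with a vowel after the following consonant)."""
    return re.sub(r"(?<=[^@aeiou])@(?=[^@aeiou][aeiou@])", "", form, count=1)

def fem_object_form(fem_stem, masc_stem):
    """3sg fem subj + 3sg masc obj: syncope applies unless the result
    would be homophonous with the corresponding masculine form."""
    masc = masc_stem + "to"       # e.g. 3allam-t-o 'he taught him'
    fem = fem_stem + "@to"        # e.g. 3allam-@t-o 'she taught him'
    out = syncopate(fem)
    return fem if out == masc else out

print(fem_object_form("darb", "darab"))     # darbto: distinct from darabto
print(fem_object_form("Sa:f", "Suf"))       # Sa:fto: distinct from Sufto
print(fem_object_form("3allam", "3allam"))  # 3allam@to: syncope blocked
```

The check compares only paradigmatically related cells (feminine vs. masculine of the same verb), mirroring the observation that accidental homophony with unrelated words does not block the process.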

3 Grammatical mechanisms for deriving paradigm effects

The fact that the regular application of phonological processes may be disrupted in order to achieve paradigmatic regularity was often commented on in the earlier literature on generative phonology (see, e.g. Kiparsky 1972; Harris 1973; Kenstowicz and Kisseberth 1977: 69–74). However, no formal mechanism was provided for accomplishing this effect within the rule-based approach of Chomsky and Halle (1968; henceforth SPE). Indeed, it is hard to see what a mechanism that explicitly enforces paradigmatic identity would look like, since the decision about whether to apply a rule or not (or to reverse the order of two rules) can be made only by "looking ahead" and seeing whether the results would create alternations within the paradigm. Instead, inflectional identity has often been analyzed as a by-product of how morphological and phonological operations are ordered, under the hypothesis that inflectional affixes are added after many phonological processes have already applied. (See Downing et al. 2005a for a review of alternative rule-based mechanisms, and §3.3 below for discussion of cyclic approaches employing phase-based spell-out; see also chapter 85: cyclicity and chapter 74: rule ordering.) For example, the American English pre-lateral diphthongization data in (4) ([spɔɪəɫ] spoil, [spɔɪəɫɪŋ] spoiling vs. [spɔɪɫədʒ] spoilage) can be seen as the result of diphthongization applying after derivational affixes like -age have been added, but before inflectional affixes have been added.

A challenge for such approaches is to establish an internally consistent ordering of morphological and phonological operations. This is not always trivial; for example, in the author's idiolect, the fact that /l/ is at least optionally dark in words like [spɔɪɫədʒ] spoilage and [maɪɫədʒ] mileage suggests that velarization is ordered before affixation of -age, yet there is no option for dark [ɫ] in words like [saɪlədʒ], *[saɪɫədʒ] silage, which arguably also contains an /l/ before the -age suffix. Accounts based on cyclic ordering have no direct way of referring to the fact that for spoilage and mileage, related forms such as [spɔɪəɫ] spoil and [maɪəɫ] mile contain dark [ɫ], while for silage, the related form [saɪloʊ] silo has light [l].

Paradigm effects of this sort are readily accommodated within Optimality Theory (OT), since evaluation in this framework is carried out over surface forms, which is precisely where identity and contrast must be enforced. In this section, we review the main approaches to enforcing identity among paradigms of inflected forms in OT. Most approaches to paradigm effects in OT employ correspondence constraints (McCarthy and Prince 1995), which place morphologically related forms into correspondence with one another, so that identity can be evaluated with the standard machinery of faithfulness constraints (Ident, Max, Dep; see also chapter 63: markedness and faithfulness constraints). Under this approach, paradigmatic identity effects are closely related to cyclicity effects, in which derived forms show unexpected similarity to their bases of derivation.

Correspondence relations are intrinsically pairwise, so paradigm uniformity is typically enforced by requiring that each individual pair of forms be identical. Thus, in a sense, paradigm constraints are not truly "paradigmatic," since they evaluate sets of pairs rather than the entire distribution at once. A cost of this approach, however, is that the number of pairwise relations that must be considered grows quadratically with the size of the paradigm – e.g. 6 × 5 = 30 pairwise relations for a six-member paradigm. Since their introduction, correspondence constraints have been used to analyze the relation between derived forms and their bases in producing cyclicity effects (Burzio 1994; Kenstowicz 1996; Benua 1997; Kager 2000; Steriade 2000).
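The difference in the number of correspondence relations is easy to make explicit. The following small sketch (an illustration of the counting only, not of any published formalism) contrasts the all-pairs structure of symmetric approaches with the base-oriented structure discussed in §3.2 below:

```python
from itertools import permutations

def op_pairs(cells):
    """Symmetric (Optimal Paradigms) structure: every cell corresponds
    to every other cell, i.e. all ordered pairs."""
    return list(permutations(cells, 2))

def base_pairs(cells, base):
    """Base-prioritizing structure: each non-base cell corresponds
    only to the designated base."""
    return [(base, cell) for cell in cells if cell != base]

cells = ["1sg", "2sg", "3sg", "1pl", "2pl", "3pl"]
print(len(op_pairs(cells)))           # 6 * 5 = 30 pairwise relations
print(len(base_pairs(cells, "3sg")))  # 5 relations
```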
Like input–output faithfulness conditions, base–derivative correspondence is usually assumed to be intrinsically asymmetrical: derived forms must be faithful to their bases, but bases are not constrained to resemble their derivatives (see also chapter 85: cyclicity). There are two main issues that must be resolved in applying faithfulness conditions to paradigm regularity. First, we must determine whether correspondence relations between inflectionally related forms are intrinsically symmetrical, as is generally assumed for base–reduplicant faithfulness (McCarthy and Prince 1995), or asymmetrical, as is usually assumed for other cases of base–derivative faithfulness (Benua 1997). Second, if inflected forms are faithful to a base form, we must determine which form acts as the base in an inflectional paradigm. In this section, the predictions of a symmetrical approach to paradigm uniformity (McCarthy's 2005 Optimal Paradigms approach) are compared to those of an asymmetrical/base-prioritizing approach (Albright 2002; Kenstowicz 1996; Benua 1997), as well as to a stratal approach in which outputs are evaluated cyclically with interleaved levels of affixation and phonological evaluation (Kiparsky 1982, 2000).

3.1 Symmetrical output–output faithfulness: Optimal paradigms

One approach to enforcing paradigmatic regularity is through symmetrical correspondence relations, which simply demand that all inflected forms have identical forms of the root. This symmetrical identity requirement has gone under various names in the literature, including consistency (Burzio 1994, 2005; see also chapter 88: derived environment effects), uniform exponence (Kenstowicz 1996), paradigm uniformity (Steriade 2000), and, most recently, optimal paradigms (McCarthy 2005). McCarthy (2005: 172) succinctly sums up the motivation for adopting a symmetrical approach to identity in paradigms: "Inflectional paradigms have no base . . . : Latin amat 'he loves' is not derived from amō 'I love' or vice versa; rather, both are derived from the lexeme /am-/." The claim that inflected forms are based on a common lexeme but not on each other rests on a morphological notion of base of affixation, in which affixation realizes (or marks) morphological features, and affixed forms contain a superset of the morphological information that their inner constituents contain. Under this definition, the base must necessarily contain a subset of the morphological features of derived forms. Depending on the morphological features that one assumes, it is implausible to suppose that the 1st singular "contains" the 3rd singular (though for feature systems that make use of underspecification, see Harley and Ritter 2002; McGinnis 2005). A similar point is made by Kager (1999: 282), who refrains from positing a relation between the 1st singular and the 2nd plural, since neither one appears to compositionally contain the other.

Because no individual form has priority in a symmetrical approach, paradigms must be evaluated as a set, in case a high-ranking constraint demands a modification in one form that would then overapply in the rest of the paradigm. In McCarthy's (2005) Optimal Paradigms (OP) formulation, candidates consist of entire paradigms. The stem in each cell in the paradigm stands in correspondence to the stem in every other cell, and output–output faithfulness constraints (OP-Ident, OP-Max, OP-Dep) evaluate every pair of surface forms in the paradigm. Paradigm uniformity effects arise when OP-faithfulness outranks the relevant IO-faithfulness constraint. This is illustrated in (12) for the overapplication of schwa epenthesis in Yiddish, discussed in (5) above.

(12) Optimal Paradigms analysis of overapplication of schwa epenthesis in Yiddish

     /ʃturm, ʃturm-t, ʃturm-ən/              OP-Dep   *rm]   IO-Dep
     a.   ʃturm, ʃturmt, ʃturmən                      *!*
     b.   ʃturəm, ʃturəmt, ʃturmən           *!*             **
     ☞ c. ʃturəm, ʃturəmt, ʃturəmən                          ***
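The strict-domination logic of (12) can be simulated directly. The sketch below is my own reconstruction for illustration, not McCarthy's formalism; "S" stands in for ʃ, "@" for schwa, and the violation definitions are simplified assumptions. It encodes the three candidate paradigms and selects the winner by comparing violation profiles lexicographically under the ranking OP-Dep >> *rm] >> IO-Dep:

```python
from itertools import combinations

VOWELS = set("aeiou@")
SUFFIXES = ["", "t", "@n"]               # 1sg, 3sg, infinitive (simplified)
INPUTS = ["Sturm", "Sturmt", "Sturm@n"]  # /Sturm/, /Sturm-t/, /Sturm-@n/

def star_rm(paradigm):
    """*rm]: one mark per /rm/ cluster not followed by a vowel."""
    marks = 0
    for form in paradigm:
        i = form.find("rm")
        if i >= 0 and (i + 2 == len(form) or form[i + 2] not in VOWELS):
            marks += 1
    return marks

def op_dep(paradigm):
    """OP-Dep: one mark per pair of cells whose stems disagree in epenthesis."""
    stems = [f[:len(f) - len(s)] if s else f for f, s in zip(paradigm, SUFFIXES)]
    return sum(1 for s1, s2 in combinations(stems, 2) if s1 != s2)

def io_dep(paradigm):
    """IO-Dep: one mark per epenthetic segment (surface minus input length)."""
    return sum(len(out) - len(inp) for out, inp in zip(paradigm, INPUTS))

RANKING = [op_dep, star_rm, io_dep]      # OP-Dep >> *rm] >> IO-Dep

def evaluate(candidates):
    """Strict domination: lexicographic comparison of violation profiles."""
    return min(candidates, key=lambda c: [con(c) for con in RANKING])

cands = [
    ["Sturm",  "Sturmt",  "Sturm@n"],    # (12a) underapplication
    ["Stur@m", "Stur@mt", "Sturm@n"],    # (12b) normal application
    ["Stur@m", "Stur@mt", "Stur@m@n"],   # (12c) overapplication
]
print(evaluate(cands))                   # -> (12c), as in the tableau
```

Demoting op_dep below star_rm selects the normal-application paradigm (12b) instead; the only rerankings that select (12a) are those in which io_dep dominates star_rm, i.e. grammars with no epenthesis at all, which is exactly the overapplication-only prediction discussed below.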

The tableau in (12) illustrates several important features of the OP approach. The ranking of *rm] over IO-Dep reflects the fact that there is a general process of epenthesis to repair rm] codas. If this ranking did not hold, there would be no possibility of alternations. With this ranking in place, it is possible to rank OP-Dep high, in order to favor an invariant paradigm with overapplication (candidate (12c)), or low, in order to favor normal application (candidate (12b)). Crucially, the only rankings that favor candidate (12a) (underapplication) are those in which IO-Dep outranks *rm] – but these are incompatible with the fact that the language generally has a process of epenthesis. Thus, there is no ranking in which epenthesis is found in general, but underapplies in order to maintain paradigm uniformity. McCarthy dubs this prediction overapplication-only. Furthermore, the relative number of intervocalic vs. coda /rm/ sequences in the paradigm cannot affect the outcome, since the logic of strict domination dictates that repairing even one instance of coda /rm/ justifies any number of IO-Dep violations. Thus even if we had considered a paradigm with only a single input like /ʃturm/ and many inputs like /ʃturm-ən/, overapplication of epenthesis would still be the optimal way to satisfy OP-Dep.

On the face of it, the overapplication-only prediction of OP is too strong, since, as we saw in §2.2, paradigm identity may also be achieved through underapplication – in fact, the very same process of epenthesis underapplies in Yiddish nouns. The only mechanism available to handle such cases in the OP framework is to find some higher-ranked constraint that would be violated by overapplication. As a hypothetical example of what such a constraint might be, suppose Yiddish were to borrow the English word platform, retaining initial stress and creating an associated plural [ˈplatfɔrmən]. In this case, overapplication of schwa epenthesis would yield a plural form with pre-antepenultimate stress and a long stress lapse: [ˈplatfɔrəm] ~ [ˈplatfɔrəmən]. As it turns out, Yiddish stress, though somewhat unpredictable, must fall within the last three syllables of the word (see Jacobs 2005: 135–140), so the output [ˈplatfɔrəmən] would be illegal. In this hypothetical example, we could analyze underapplication of epenthesis ([ˈplatfɔrm] ~ [ˈplatfɔrmən]) as overapplication of lapse avoidance ([ˈplatfɔrəm] ~ *[ˈplatfɔrəmən]), consistent with the overapplication-only prediction.

(13) Underapplication as overapplication of an orthogonal process (hypothetical)

     /platfɔrm, platfɔrm-ən/          OP-Dep  *rm]  IO-Dep  OP-Ident(stress)  Stress  IO-Ident(stress)
     ☞ a. ˈplatfɔrm, ˈplatfɔrmən              *
     b.   ˈplatfɔrəm, ˈplatfɔrmən     *!            *
     c.   ˈplatfɔrəm, ˈplatfɔrəmən                  **                        *!
     d.   ˈplatfɔrəm, platˈfɔrəmən                  **      *!                        *
     e.   platˈfɔrəm, platˈfɔrəmən                  **                                *!*

Unfortunately, in the actual cases of underapplication in Yiddish nouns in (7b), it is not clear what such a constraint would be. In particular, the contrasting pair [vɔrəm] 'worm' vs. [fɔrm] 'form' do not appear to differ in any relevant way that could be capitalized on in order to block epenthesis in the latter case by means of a higher-ranked markedness constraint. A similar point can be made about the Polish diminutive example in (9). Here, the cases in which raising underapplies in the masculine find exact counterparts in which raising applies normally in the feminine: e.g. accusative plural [dɔwki] 'little ditches' vs. [krufki] 'little cows'. There is no form in the paradigm of [dɔwek] in which raising would not violate a constraint that is also violated in the paradigm of [krufka]. In particular, hypothetical overapplication of raising in the nominative and accusative singular (*[duwek]) would violate the same constraints that are violated by the attested overapplication of raising in the feminine genitive plural ([kruvek]). As Kenstowicz (1996) points out, symmetric faithfulness constraints are equally well satisfied by over- and underapplication, and it is not obvious what additional constraint would break the tie. This problem leads McCarthy (2005) to suggest that raising alternations must be lexicalized in this case, reflecting the more generally unproductive nature of o-raising in Polish. As noted in §2, it is often difficult to determine whether an attested case of underapplication represents a synchronic paradigm effect, or whether it is the outcome of a diachronic re-analysis in which the relevant alternation simply no longer applies, even when paradigmatic identity is not at stake. Although it appears that at least some cases of underapplication demand a synchronic analysis, more careful investigation is needed to determine whether they are more appropriately interpreted as diachronic effects rather than as synchronic exceptions to the overapplication-only prediction.

Misapplication can also pose a challenge to the symmetrical faithfulness model. The stress patterns of the Spanish imperfect (10) and (dialectal) present subjunctive (11b) show the otherwise practically non-occurring pattern of antepenultimate stress with a closed final syllable ([termiˈnabamos], [terˈminemos]). The candidate in which antepenultimate stress is avoided by placing stress too far to the right in other forms (hypothetical [terminaˈba], [terminaˈbas], [terminaˈba], [terminaˈbamos], [terminaˈbais], [terminaˈban]) would fare much better by the constraint that favors penultimate stress with final heavy syllables. This candidate involves a different dispreferred pattern – namely, forms with stress on final open syllables ([terminaˈba]). This stress pattern is somewhat rare, but it occurs in native monomorphemic words ([aˈki] 'here', [soˈfa] 'sofa', [xoˈse] 'José', etc.) to a far greater extent than antepenultimate stress in words like Sócrates. Moreover, invariant suffixal stress is found in other tenses, including the future ([terminaˈre], [terminaˈras], [terminaˈra], [terminaˈremos], [terminaˈreis], [terminaˈran]). Thus it appears that the OP approach would favor a paradigm that avoids the stress violation in (actual) [termiˈnabamos] by placing stress one syllable to the right (*[terminaˈba], *[terminaˈbamos]).

3.2 Base-prioritizing output–output faithfulness

An alternative possibility is that faithfulness among inflected forms is exactly parallel to the asymmetrical structure of faithfulness among derived forms. Asymmetrical, base-prioritizing faithfulness has been given various names in the literature, including base identity (Kenstowicz 1996) and transderivational correspondence (Benua 1997). In an asymmetric approach, dependent forms are constrained to be faithful to a designated base form, but the base form is not constrained to match the rest of the paradigm. Thus, we expect the base form to exhibit normal application of regular phonological processes, while the remaining forms may show over- or underapplication in order to resemble the base. Since surface properties of the base form must be known ahead of time in order to evaluate base-faithfulness violations, base-prioritizing faithfulness requires a form of cyclic, or recursive, evaluation. In the discussion that follows, I follow Benua (1997) in assuming that evaluation proceeds in two steps: first the base is evaluated, and then its dependent forms are evaluated. Although recursive evaluation involves more steps than a single, parallel evaluation, the evaluation of output–output faithfulness requires only a fraction of the comparisons that are involved in the full pairwise evaluation of the symmetric approach.

An asymmetric approach provides a natural solution to many of the problems pointed out in the preceding section. In the case of stress in the Spanish imperfect and (dialectal) present subjunctive, the unexpected pattern in [termiˈnabamos] 'we ended' can be attributed to faithfulness to other forms in the paradigm, in which stress obeys the usual pattern of penultimate stress. Deferring for a moment the question of what form should be designated as the base in general, let us assume for now that the base of Spanish verb paradigms (or, at least, of the imperfect and subjunctive) is the 3rd singular: [termiˈnaba]. In the base-prioritizing approach, properties of the base are determined "first" according to the general rankings of markedness and IO-faithfulness, and this pattern is then transferred to related forms via base faithfulness, as shown in (14). We assume that Spanish has constraints against final stressed vowels (*ˈV]) and against antepenultimate stress with a closed final syllable (*ˈσσσC]). We also assume that, although antepenultimate stress is sometimes found on vowel-final words, the default is penultimate stress (Aske 1990; Eddington 2000; Oltra-Massuet and Arregi 2005), and that this is favored by a constraint against final lapse (*ˈσσσ], not shown). These are overruled, however, by a constraint demanding faithfulness to the stress pattern of the 3rd singular base. (Following Benua 1997, the tableau for the non-basic 1st plural form is indented to show that the evaluation of Base-Ident requires reference to the output of the 3rd singular evaluation.)

(14) Misapplication in the Spanish imperfect

     /termina-ba-Ø/ 'end-3sg'             Base-Ident(stress)  *ˈσσσC]  *ˈV]
     ☞ a. termiˈnaba
     b.   terminaˈba                                                   *!

          /termina-ba-mos/ 'end-1pl'      Base-Ident(stress)  *ˈσσσC]  *ˈV]
          ☞ a. termiˈnabamos                                  *
          b.   terminaˈbamos              *!
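The two-step evaluation in (14) can be sketched as follows. This is an illustrative toy implementation, not Benua's or Albright's: candidates are (syllable list, index of stressed syllable) pairs, and the constraint definitions are simplified assumptions:

```python
def final_stress(syls, stress):
    """*'V]: stress on a final open syllable."""
    return int(stress == len(syls) - 1 and syls[-1][-1] in "aeiou")

def antepenult_closed(syls, stress):
    """*'sssC]: antepenultimate stress with a closed final syllable."""
    return int(stress == len(syls) - 3 and syls[-1][-1] not in "aeiou")

def base_ident(stress, base_stress):
    """Base-Ident(stress): stress must fall on the same stem syllable
    (same index from the left) as in the base's output."""
    return int(base_stress is not None and stress != base_stress)

def evaluate(candidates, base_stress=None):
    """Base-Ident(stress) >> *'sssC] >> *'V], by strict domination."""
    def profile(cand):
        syls, s = cand
        return [base_ident(s, base_stress),
                antepenult_closed(syls, s),
                final_stress(syls, s)]
    return min(candidates, key=profile)

# Step 1: the base /termina-ba/ '3sg' is evaluated on its own
base_cands = [(["ter", "mi", "na", "ba"], 2),        # termi'naba
              (["ter", "mi", "na", "ba"], 3)]        # termina'ba
base = evaluate(base_cands)                          # -> stress on "na"

# Step 2: the dependent /termina-ba-mos/ '1pl' refers to the base's output
pl_cands = [(["ter", "mi", "na", "ba", "mos"], 2),   # termi'nabamos
            (["ter", "mi", "na", "ba", "mos"], 3)]   # termina'bamos
print(evaluate(pl_cands, base_stress=base[1]))       # -> termi'nabamos
```

The 1st plural winner violates the markedness constraint against antepenultimate stress with a closed final syllable, but the dominant base-faithfulness constraint decides, reproducing the misapplication pattern of (10).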

Base identity also addresses the problem of o-raising in Polish diminutives. Again deferring the discussion of what form serves as the base in general, let us assume that the base of Polish noun paradigms is the nominative singular. The evaluations of the masculine and feminine nominative diminutives are shown in parallel for /dɔw-(E)k/ and /krɔv-(E)k-a/ in (15). Since these forms are basic, they vacuously satisfy Base-Ident. The constraint Raise is shorthand for whatever constraint motivates [u] in closed syllables; recall that the paradigms of simple (non-diminutive) [duw] 'ditch' and [krɔva] 'cow' in (8) show that both of these stems participate in raising. For brevity, the constraints that are responsible for the vocalism of the diminutive suffix are not shown.

(15) Normal application of Polish raising in nominative singular base forms

     /dɔw-Ek-Ø/ 'ditch-dim-nom sg'     Base-Ident[high]  Raise  IO-Ident[high]
     ☞ a. dɔwek
     b.   duwek                                                 *!

     /krɔv-Ek-a/ 'cow-dim-nom sg'      Base-Ident[high]  Raise  IO-Ident[high]
     a.   krɔfka                                         *!
     ☞ b. krufka                                                *

The remaining inflected forms are then constrained by Base-Ident to preserve the vowel height of the nominative singular, as in (16).

(16) Over- and underapplication of Polish raising in non-basic forms

     /dɔw-Ek-a/ 'ditch-dim-gen sg'     Base-Ident[high]  Raise  IO-Ident[high]
     ☞ a. dɔwka                                          *
     b.   duwka                        *!                       *

     /krɔv-Ek-Ø/ 'cow-dim-gen pl'      Base-Ident[high]  Raise  IO-Ident[high]
     a.   krɔvek                       *!
     ☞ b. kruvek                                                *

By hypothesis, Base-Ident for inflected forms is a distinct constraint from whatever base–derivative faithfulness constraints may hold of derived forms. This predicts the possibility of different degrees of faithfulness for inflected vs. derived forms. As we saw above for Latin, inflected forms may indeed exhibit greater faithfulness than derived forms. A similar point can be made for Yiddish, in which derivationally related forms show schwa alternations: cf. [alarəm] 'alarm' ~ [alarm-ir-n] 'to alarm', [ʃirəm] 'shade' ~ [ʃirm-ə] 'screen', [varəm] 'warm' ~ [varm-əs] 'warm food'. What is not explained by this approach, however, is the cross-linguistic tendency for inflected forms to show greater uniformity than derived forms. If inflectional Base-Ident and base–derivative faithfulness are separate and independently re-rankable constraints, we predict that some languages may show greater uniformity in derived forms than in inflected forms.

The analysis in (15) and (16) requires that the nominative singular be designated as the base, even though other inflected forms are not necessarily built compositionally from it. For the masculine diminutive [dɔwek], this coincides with the fact that the nominative singular is a substring of the remaining forms. In particular, although the nominative singular differs from the remaining inflected forms in having an [e] in the diminutive suffix, its case suffix is -Ø. As a result, the underlying form /dɔw-Ek-Ø/ is a phonological substring of the underlying form of inflected forms like /dɔw-Ek-a/. In many cases, it is sufficient to posit that null-affixed forms (sometimes termed isolation forms) serve as the base for overtly affixed inflected forms; see, e.g. Kuryłowicz (1947); Kenstowicz (1996); Hayes (1999); Hall and Scott (2007). The comparison with the Polish feminine diminutive form [krufka] shows that base forms need not always be isolation forms, however. In this noun class, there is an overt nominative singular affix /-a/ that is not contained in other case/number forms. A common intuition is that even if the feminine nominative singular is phonologically marked (in the sense of having an overt affix), it is nonetheless a plausible base form because it represents a morphologically "unmarked" category (in the sense of serving as a default or unmarked member of an opposition). Analyses that make use of this more general notion of morphosyntactic markedness are often somewhat vague as to what the criteria are for identifying the morphologically least marked member of a paradigm, but the general consensus appears to be that it is the nominative singular in a Latin- or Polish-like nominal paradigm (Kenstowicz 1996; Sturgeon 2003), and the 3rd singular in a verbal paradigm (Kuryłowicz 1947; Mańczak 1958).

The idea that the base of a paradigm is a morphologically unmarked form (null affix, or unmarked feature values) is attractive from the point of view of grounding the structure of paradigmatic correspondence in a universal, morphosyntactically motivated representation. There are cases that are difficult to reconcile with this hypothesis, however. The Latin [honor] example discussed in §1.2 is one potential counterexample, since it shows overapplication in the nominative singular on the basis of morphosyntactically more marked case forms (oblique and plural forms). The underapplication of schwa epenthesis in Yiddish noun paradigms is also a counterexample. Recall from (7b) that epenthesis may fail to apply in final rm] clusters just in case the plural has a vowel-initial suffix: [vɔrəm] ~ [vɔrəm-s] 'worm-sg/pl' (with epenthesis) vs. [fɔrm] ~ [fɔrmən] 'form-sg/pl' (epenthesis optionally underapplies). This case is readily accounted for if we posit faithfulness to the plural form. This is illustrated in (17) for the variant [fɔrm] 'form-sg'.

(17) Underapplication of schwa epenthesis in Yiddish nouns

     /fɔrm-ən/ 'form-pl'                      Base-Dep  *rm]  IO-Dep
     ☞ a. fɔrmən
     b.   fɔrəmən                                             *!

          /fɔrm/ 'form-sg'  Base: [fɔrmən]    Base-Dep  *rm]  IO-Dep
          ☞ a. fɔrm                                     *
          b.   fɔrəm                          *!              *

Under this analysis, the fact that epenthesis underapplies in noun paradigms is attributed to faithfulness to the plural, while the fact that epenthesis overapplies in verb paradigms must be attributed to faithfulness to a form in which normal application would favor epenthesis (e.g. the 1st singular: Albright 2010). If this is correct, it suggests that the choice of privileged base form may differ by language and by part of speech (see also chapter 102: category-specific effects). This conclusion naturally raises a number of questions: how do learners identify bases? Is it possible to predict which form will act as the base in a given part of speech in a given language, or must it be inferred post hoc from unexpected application of regular phonological processes? Are there limits to which form acts as the base? And if the choice of base can vary freely from language to language, why do certain forms such as the "unmarked" nominative singular and 3rd singular so often act as bases?

Albright (2002) proposes that the choice of base in a given language is not arbitrary, but follows from the distribution of contrasts within paradigms (chapter 2: contrast). Specifically, it is hypothesized that learners identify the form that offers the most phonological and morphological information about lexical items. For example, in Yiddish verbs, some inflectional affixes trigger neutralizing phonological processes such as regressive voicing assimilation (/red-st/ → [retst] 'speak-2sg'), degemination (/red-t/ → [ret] 'speak-3sg'; /heɪs-st/ → [heɪst] 'call-2sg') or coalescence of schwas (/blɔnkə-ən/ → [blɔnkən] 'meander-1pl') (chapter 80: mergers and neutralization). As it turns out, since the 1st singular suffix is null and Yiddish has relatively few processes that affect final codas, the 1st singular frequently preserves phonological contrasts that are neutralized elsewhere. One process that does occur word-finally (including in the 1st singular) is epenthesis in [rm] clusters, however – and as we have seen, this process overapplies in verb paradigms. For nouns, on the other hand, pluralization involves a high degree of morphological unpredictability, with several competing plural markers and irregular vowel changes to many stems. Thus, the plural often contains morphological information that is neutralized in the singular (Albright 2008b). This correlates with the fact that schwa epenthesis optionally underapplies in the singular if it does not apply in the plural.

Informativeness effects that run counter to markedness can be observed in a number of languages. In Latin, the nominative singular underwent many phonological and morphological neutralizations, including cluster simplification and morphological syncretism, which were not found in oblique forms.⁴ Albright (2005) provides a quantitative comparison of neutralizations affecting Latin noun forms. This analysis reveals that the nominative singular was the least informative form, which correlates with the fact that nominative singular forms were rebuilt in Latin. An even more striking example comes from Korean verbal inflection. In a survey of dialects and acquisition studies, Kang (2006) shows that a large number of phonologically independent alternations have been eliminated across different dialects and varieties of Korean. However, all of these changes have in common the property that they extend the stem form found before a certain suffix, the informal form /-ə ~ a/. Albright and Kang (2009) show that this form is also the most informative, revealing stem-final consonant and vowel contrasts more clearly than other suffixed forms.

The typological predictions of an information-based approach are less certain. One factor that appears to encourage base status is token frequency (Kuryłowicz 1947; Mańczak 1958). Plausibly, cells in the paradigm with high token frequency (chapter 90: frequency effects) provide more information to learners, since (on average) more lexical items have been encountered in these forms. Albright (2008a) argues that attestation is by itself also an important type of information, which may bias learners to choose more frequent paradigm members as bases. This, in turn, may explain the fact that morphologically unmarked forms tend to be bases, since they often have the highest token frequency (Bybee 1985: ch. 3).

An important prediction of the base-prioritizing faithfulness account is that the choice between over- and underapplication is a straightforward consequence of whether the process applies in the base form or not. The comparison of masculine and feminine diminutives in Polish shows that even within the same language, this may have different consequences for different words. Another prediction is that the same member of the paradigm should act as basic for multiple dimensions of faithfulness. Albright (2008b) shows that for Yiddish nouns, this is true: the plural acts as a privileged base in conditioning underapplication of /rm/-epenthesis, underapplication of final devoicing, and overapplication of open syllable lengthening. Similarly, Latin showed leveling of not only rhotacism alternations, but also vowel length and nominative singular marking (-s > -is: Kiparsky 1998; Albright 2005). Finally, the changes to Korean verb paradigms discussed by Kang (2006) show parallel changes for a large number of logically independent alternations. If this prediction proves true in general, it could provide strong support for an asymmetric approach in which paradigm uniformity extends an independently designated base form.

⁴ The importance of oblique forms in identifying phonological and inflectional properties of Latin nouns can be seen from the fact that dictionaries typically list an oblique form (the genitive singular) alongside the nominative.
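The informativeness comparison can be illustrated with a toy computation (my own sketch in the spirit of Albright 2002, not his model; the stem /ret/ is a hypothetical lexeme invented to expose a neutralization, and transcriptions are simplified ASCII). For each cell, we count how many lexemes remain distinguishable from that cell's surface form alone:

```python
# Toy informativeness computation over Yiddish-like verb paradigms.
# Surface forms reflect the neutralizations described in the text:
# voicing assimilation and degemination in the 2sg/3sg.
paradigms = {
    "red":  {"1sg": "red",  "2sg": "retst", "3sg": "ret"},
    "ret":  {"1sg": "ret",  "2sg": "retst", "3sg": "ret"},   # hypothetical stem
    "hejs": {"1sg": "hejs", "2sg": "hejst", "3sg": "hejst"},
}

def distinctions_preserved(cell):
    """Number of distinct surface forms in this cell: a proxy for how
    many lexical contrasts the cell preserves."""
    return len({forms[cell] for forms in paradigms.values()})

for cell in ("1sg", "2sg", "3sg"):
    print(cell, distinctions_preserved(cell))
# 1sg 3; 2sg 2; 3sg 2 -- the suffixless 1sg keeps all three stems apart,
# making it the most informative candidate base for verbs.
```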

3.3 Stratal and cyclic approaches

The approaches discussed above rely on correspondence relations between related surface forms to enforce identity. An alternative approach, which denies the existence of paradigms altogether, attributes the phonological similarity of related forms to the fact that they share morphological and syntactic structure, and are thus (by hypothesis) identical at a certain stage in their derivation. For example, the framework of Lexical Phonology (Kiparsky 1982) and its successor, Stratal OT (Kiparsky 2000), assumes that phonological evaluation and affixation are interleaved, so that phonological processes may apply to inner constituents of the word prior to the addition of later affixes. Work in Lexical Phonology and Morphology (LPM) has explored the implications of this approach for cyclicity effects in derivational morphology; see especially Giegerich (1999) and chapter 85: cyclicity for overview and discussion (also chapter 94: lexical phonology and the lexical syndrome). Importantly, it is assumed that inflectional affixes are added in the last level of affixation, and are therefore ignored by all earlier cycles of phonological evaluation. This suggests an analysis of paradigm uniformity effects that is parallel to the stratal analysis of cyclicity effects in derivational morphology: processes apply incorrectly when phonology evaluates a morphological subconstituent that does not yet contain the relevant inflectional affixes. This is sketched in (18) for the overapplication of epenthesis in Yiddish verbs.

(18) LPM approach to overapplication of schwa epenthesis in Yiddish verbs

     a. Stem level: Inflectional affixes not present
        /ʃturm-/              IO-Max  *rm]  IO-Dep
        a.   ʃturm                    *!
        ☞ b. ʃturəm                         *

     b. Word level: Inflectional affixes added
        /[ʃturəm]-ən/         IO-Max  *rm]  IO-Dep
        a.   ʃturmən          *!
        ☞ b. ʃturəmən

Under this approach, the lack of epenthesis in paradigms of verbs such as alarmir-n ‘to alarm’ would be attributed to the fact that verbalizing -ir- is a stem-level affix, attaching to bound stems and occurring inside other affixes such as -ist.

(19) LPM approach to normal application in alarmirn

     a. Stem level: Derivational -ir- is present
        /alarm-ir/            IO-Max  *rm]  IO-Dep
        ☞ a. alarmir
        b.   alarəmir                       *!

     b. Word level: Inflectional affixes added
        /[alarmir]-ən/        IO-Max  *rm]  IO-Dep
        ☞ a. alarmirən
        b.   alarəmirən                     *!

A closely related approach relies on the syntactic mechanism of derivation by phase (Chomsky 2001) to evaluate successively larger portions of the word (Marvin 2002; Arad 2003; Piggott and Newell 2006; Marantz 2007; Skinner 2008). Under this approach, it is likewise hypothesized that words are built cyclically, and that certain syntactic heads trigger spell-out and phonological evaluation of the structure that has been built thus far. We assume that verbs like [alarmirn] 'alarm' have a structure [[[alarm]√-ir]v-n] and verbs like [ʃturəmən] 'storm' have a structure [[[ʃturm]√-Ø]v-ən], with a null verbalizing head. Under the assumption that verbalizing heads trigger spell-out (Marantz 2007) and that the "little v" head is spelled out with (or is at least visible to) its complement, the innermost spell-out domain of [alarmirn] consists of /alarm-ir/ (no epenthesis necessary), whereas the inner spell-out domain of [ʃturəmən] consists of /ʃturm-Ø/ (epenthesis expected). Thus derivations similar to those in (18) and (19) would hold in this approach as well.

Stratal and cyclic approaches evaluate morphologically complex words strictly from the inside outwards. Crucially, there is no way for affixes that are not present to influence the outcome of phonology. For this reason, effects such as the underapplication of epenthesis in Yiddish nouns are unexpected, since the outcome for singular forms like [vɔrəm] 'worm' vs. [fɔrm] 'form' requires reference to the shape of the plural ([vɔrəm-s] vs. [fɔrm-ən]). More generally, any case in which an output–output correspondence account requires reference to a base form that is not an isolation form or substring of the remaining forms poses a potential challenge to cyclic approaches. It remains a question for future research to determine whether such cases have a different synchronic status from the more common cases of "inside → out" directionality of cyclic influence.
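A sketch of the inside-out derivations in (18)-(19) makes the limitation concrete (again my own illustration, with "S" standing in for ʃ, "O" for ɔ, and "@" for schwa): because the word-level affix is invisible at the stem level, the model cannot produce the attested underapplication in nouns like [fɔrm] ~ [fɔrmən]:

```python
VOWELS = set("aeiou@O")

def phonology(form):
    """Repair an /rm/ coda by schwa epenthesis (*rm] >> IO-Dep)."""
    i = form.find("rm")
    if i >= 0 and (i + 2 == len(form) or form[i + 2] not in VOWELS):
        return form[:i] + "r@m" + form[i + 2:]
    return form

def stratal(stem, stem_affix, word_affix):
    """Stem-level cycle first, then a word-level cycle with inflection added."""
    stem_level = phonology(stem + stem_affix)   # inflection not yet visible
    return phonology(stem_level + word_affix)

print(stratal("Sturm", "", "@n"))   # Stur@m@n: overapplication, as in (18)
print(stratal("alarm", "ir", "@n")) # alarmir@n: normal application, as in (19)
print(stratal("fOrm", "", "@n"))    # fOr@m@n -- but the attested plural is
                                    # fOrm@n (7b)/(17): underivable inside-out
```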

4 Defining and constraining paradigms

Up to this point, we have been intentionally vague as to what forms count as belonging to a single paradigm, relying on an intuitive notion of sets of forms that share a root, and in some cases also a set of inflectional features (e.g. imperfect indicative or present subjunctive). One possibility that cannot be immediately discarded is that output–output correspondence relations are established along many dimensions of shared inflectional features (imperfect forms, 3rd singular forms, subjunctive forms, etc.). This unstructured notion of paradigmatic relations appears to miss some general tendencies, however. In point of fact, paradigmatic similarity appears to hold much more strongly within certain sets of forms, such as the set of person/number forms in a given tense and mood. In this section, we briefly review some of the major tendencies that have been identified in the literature, and consider where these tendencies may come from.

The examples in the preceding sections show several recurring patterns. For example, person and number inflections of a verb are often constrained to resemble each other within a given tense, while alternations are tolerated across tenses – e.g. Spanish 1st singular present [terˈmino] ~ 1st singular imperfect [termiˈnaba], 3rd plural imperfect [termiˈnaban]. Although I know of no comprehensive survey of synchronic paradigmatic effects across different inflectional categories, similar tendencies can be observed in the degree of tolerance for morphologized stem changes across different categories. In a survey of 50 genetically unrelated languages, Bybee (1985) confirms that morphophonological stem changes are virtually never associated with person distinctions, and rarely with number distinctions. Tense and mood, on the other hand, are sometimes accompanied by stem changes, and aspect often is (see also Veselinova 2006 for discussion). These tendencies diagnose a paradigm structure in which certain inflectional features (aspect, and secondarily tense and mood) define rather cohesive paradigms, whereas person and number do not. We must interpret Bybee's findings cautiously, since they admit two possible explanations: perhaps sharing an aspect or tense feature is particularly important in conditioning paradigmatic cohesion, or perhaps differences in tense or aspect are preferentially marked with phonologically salient differences (Wurzel 1989). To the extent that tendencies in morphophonological stem changes are correlated with tendencies in phonological over- and underapplication, we find support for an interpretation in terms of paradigmatic cohesion.

Among nouns, it appears that the domain of phonological misapplication is more likely to be a number subparadigm (e.g. singular or plural) than a case subparadigm (e.g. all accusative forms). This appears to be mirrored by the fact that number suppletion is more common than case suppletion (though see Corbett 2007 for discussion). Similar tendencies are also found in cases of diachronic paradigm leveling. Bybee (1985: 13) interprets these differences in terms of what she calls the relevance of different features to the stem. Relevance is defined as the degree to which a given inflectional category changes the meaning of the lexical root – for example, changes in valence, particularly between intransitive and transitive, tend to change the verbal action in a way that changes in person do not. Bybee argues that morphological features that are more relevant are more likely to be marked with overt morphological markers, and are more likely to be accompanied by stem changes. Restated in terms of paradigmatic identity, it appears that the less salient the meaning difference between two forms, the less likely speakers are to tolerate alternations.

Another tendency that can be observed in the examples above is that identity is more strongly enforced among some paradigms than others.
For example, Spanish verbs have stress alternations within the present indicative (1sg [terˈmino] ~ 1pl [termiˈnamos]), but more "remote" tenses, such as the imperfect, lack alternations (1sg [termiˈnaba]; 1pl [termiˈnabamos]). In Polish, raising alternations are observed within the case/number forms of simple nouns (8), but are suspended within the diminutive forms (9). This suggests that paradigmatic identity is not only enforced more strongly for some dimensions (e.g. person/number) than others (e.g. tense), but it is also enforced more strongly in some morphosyntactic contexts (e.g. the imperfect or the subjunctive) than others.

One attempt to derive these tendencies in a formal system is Burzio's (2005) Representational Entailments Hypothesis (see also chapter 88: derived environment effects). Burzio proposes that the strength of the identity condition that holds between two related forms should depend on the degree to which they already have shared meaning, morphology, and phonology. According to this hypothesis, linguistic representations of different items are lined up, and the more material they share, the greater the expectation that they are alike in other respects as well. For example, suppose a given item has feature values [+F, +G, +H, +I]. This can be restated as a set of associations between co-occurring features: [+F] ⇒ [+G], [+F] ⇒ [+H], [+F] ⇒ [+I], and so on. Now suppose we are given another item with feature values [+F, +G, −H, −I]. This item shares the association of [+F] ⇒ [+G], but differs in its other associations – for example, [+F] ⇒ [+I] and [+G] ⇒ [+I] are not met in this form. Finally, compare a third item [+F, +G, +H, −I]. In this form, three entailments concerning [±I] are not met: [+F] ⇒ [+I], [+G] ⇒ [+I], [+H] ⇒ [+I]. In other words, the amount of overlap between [+F, +G, +H, +I] and [+F, +G, +H, −I] makes their difference in [±I] more salient or surprising. Burzio proposes that families of output–output faithfulness constraints are ranked to reflect such differences in overlap: OO-Ident([±I])(+F,+G,+H ⇒ +I) >> OO-Ident([±I])(+F,+G ⇒ +I). Entailments may be stated in terms of shared semantic or morphosyntactic features, or in terms of shared phonological features.

The Representational Entailments Hypothesis is useful in accounting for why some paradigms are more cohesive than others. For example, in Spanish, the imperfect is marked with an overt marker (-ba-), whereas the present tense has no overt tense marker. Therefore, imperfect forms share more properties in common (namely, the property of containing -ba-), which may in turn beget additional identity. The present subjunctive does not have an overt marker that makes subjunctive forms more similar to each other than indicative forms, but it is not unreasonable to suppose that the semantic or morphosyntactic representation of the subjunctive involves more structure than the indicative does. If such explanations are on the right track, then it should be possible to correlate the degree of structural overlap between two forms and the pressure for paradigmatic identity between them. Crucially, it is possible to infer the structure of the representation of a given inflectional category through independent means (observing overt marking, finding implicational relations and default values, etc.; Bybee 1985; Harley and Ritter 2002; McGinnis 2005). Therefore, if the Representational Entailments Hypothesis is correct, it should be possible to predict the strength of paradigmatic identity effects.
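Burzio's feature-bundle example can be worked through mechanically. The sketch below (an illustration of the counting only; the hypothesis itself is a constraint-ranking proposal, not an algorithm) computes how many entailments about [±I] fail between the items discussed in the text:

```python
def violated_about(a, b, g):
    """Entailments F => g licensed by item a (each other feature value F
    of a entails g) whose antecedent F holds in item b while g fails in b."""
    return {(f, g) for f in a if f != g and f in b and g not in b}

x = {"+F", "+G", "+H", "+I"}
y = {"+F", "+G", "-H", "-I"}
z = {"+F", "+G", "+H", "-I"}

print(len(violated_about(x, y, "+I")))  # 2: +F => +I and +G => +I fail in y
print(len(violated_about(x, z, "+I")))  # 3: +F, +G, and +H all entail +I
# The greater overlap of x and z yields more failed entailments about [-I],
# which is what ranks the corresponding OO-Ident constraint higher.
```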

5 Conclusion

Kenstowicz and Kisseberth (1977: 74) proclaimed that ". . . the notion 'paradigm' will have to be much more rigorously defined in order for the appeal to paradigm regularity to have much explanatory force." As this discussion has made
clear, more than three decades later we are still in the early stages of understanding what determines the strength of paradigmatic effects, and what this might tell us about the underlying structure of correspondence among inflected forms. The growing literature on paradigm uniformity effects in the past 10 years has made progress on a number of issues, however. First, it has demonstrated that identity among inflectionally related forms is not just a diachronic phenomenon, but can be seen as a synchronic effect in the “wrong application” of productive phonological processes. Furthermore, formalized grammatical approaches to paradigm uniformity make testably different predictions about possible uniformity effects, pointing the way to those cases which deserve the closest empirical scrutiny. Finally, comparison of the cases discussed so far in the literature suggests a number of cross-linguistic trends that must be accounted for in a theory of how phonology refers to morphological structure. A deeper understanding of these tendencies will require a more comprehensive survey of synchronic paradigmatic effects, in order to understand how best to represent – and perhaps also derive – the observed tendencies concerning when paradigmatic identity is enforced.

ACKNOWLEDGMENTS

This chapter has benefited greatly from the comments, suggestions, and patient guidance of the editors and an anonymous reviewer. All remaining errors are, of course, my own.

REFERENCES

Ackerman, Farrell, James P. Blevins & Robert Malouf. 2009. Parts and wholes: Implicative patterns in inflectional paradigms. In James P. Blevins & Juliette Blevins (eds.) Analogy in grammar: Form and acquisition, 54–81. Oxford: Oxford University Press.
Ahn, Sang-Cheol. 1998. An introduction to Korean phonology. Seoul: Hanshin.
Albright, Adam. 2002. The identification of bases in morphological paradigms. Ph.D. dissertation, University of California, Los Angeles.
Albright, Adam. 2005. The morphological basis of paradigm leveling. In Downing et al. (2005b), 17–43.
Albright, Adam. 2008a. Explaining universal tendencies and language particulars in analogical change. In Jeff Good (ed.) Linguistic universals and language change, 144–181. Oxford: Oxford University Press.
Albright, Adam. 2008b. Inflectional paradigms have bases too: Evidence from Yiddish. In Bachrach & Nevins (2008), 271–312.
Albright, Adam. 2010. Base-driven leveling in Yiddish verb paradigms. Natural Language and Linguistic Theory 28. 475–537.
Albright, Adam & Yoonjung Kang. 2009. Predicting innovative alternations in Korean verb paradigms. In Current issues in unity and diversity of languages: Collection of the papers selected from CIL 18, 893–913. Seoul: Linguistic Society of Korea.
Anderson, Stephen R. 1986. Disjunctive ordering in inflectional morphology. Natural Language and Linguistic Theory 4. 1–32.
Arad, Maya. 2003. Locality constraints on the interpretation of roots: The case of Hebrew denominal verbs. Natural Language and Linguistic Theory 21. 737–778.
Aske, Jon. 1990. Disembodied rules versus patterns in the lexicon: Testing the psychological reality of Spanish stress rules. Proceedings of the Annual Meeting, Berkeley Linguistics Society 16. 30–45.
Bachrach, Asaf & Andrew Nevins (eds.) 2008. Inflectional identity. Oxford: Oxford University Press.
Baerman, Matthew, Dunstan Brown & Greville G. Corbett. 2005. The syntax–morphology interface: A study of syncretism. Cambridge: Cambridge University Press.
Baldi, Philip. 1999. The foundations of Latin. Berlin & New York: Mouton de Gruyter.
Barr, Robin. 1994. A lexical model of morphological change. Ph.D. dissertation, Harvard University.
Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst.
Blevins, James P. 2003. Stems and paradigms. Language 79. 737–767.
Bobaljik, Jonathan D. 2002. Syncretism without paradigms: Remarks on Williams 1981, 1994. Yearbook of Morphology 2001. 53–85.
Bobaljik, Jonathan D. 2008. Paradigms (Optimal and otherwise): A case for skepticism. In Bachrach & Nevins (2008), 29–54.
Buckley, Eugene. 2001. Polish o-raising and phonological explanation. Paper presented at the 75th Annual Meeting of the Linguistic Society of America, Washington, DC.
Burzio, Luigi. 1994. Metrical consistency. In Eric Sven Ristad (ed.) Language computations, 93–125. Providence, RI: American Mathematical Society.
Burzio, Luigi. 2005. Sources of paradigm uniformity. In Downing et al. (2005b), 65–106.
Bybee, Joan. 1985. Morphology: A study of the relation between meaning and form. Amsterdam: John Benjamins.
Cho, Seung-Bog. 1967. A phonological study of Korean with a historical analysis. Uppsala: Almqvist & Wiksell.
Choi, Myung-Ok. 1998. Gukeoeumunlongwa jalyo [Korean phonology and data]. Seoul: Taehaksa.
Chomsky, Noam. 2001. Derivation by phase. In Michael Kenstowicz (ed.) Ken Hale: A life in language, 1–52. Cambridge, MA: MIT Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Corbett, Greville G. 2007. Canonical typology, suppletion, and possible words. Language 83. 8–42.
Dammers, Ulf, Walter Hoffmann & Hans-Joachim Solms. 1988. Grammatik des Frühneuhochdeutschen, vol. 4: Flexion der starken und schwachen Verben. Heidelberg: Carl Winter Universitätsverlag.
Dekkers, Joost, Frank van der Leeuw & Jeroen van de Weijer (eds.) 2000. Optimality Theory: Phonology, syntax, and acquisition. Oxford: Oxford University Press.
Downing, Laura J., T. A. Hall & Renate Raffelsiefen. 2005a. Introduction: The role of paradigms in phonological theory. In Downing et al. (2005b), 1–16.
Downing, Laura J., T. A. Hall & Renate Raffelsiefen (eds.) 2005b. Paradigms in phonological theory. Oxford: Oxford University Press.
Eddington, David. 2000. Spanish stress assignment within the analogical modeling of language. Language 76. 92–109.
Giegerich, Heinz J. 1999. Lexical strata in English: Morphological causes, phonological effects. Cambridge: Cambridge University Press.
Hale, Mark, Madelyn Kissock & Charles Reiss. 1997. Output–output correspondence in Optimality Theory. Proceedings of the West Coast Conference on Formal Linguistics 16. 223–236.
Hall, T. A. & John H. G. Scott. 2007. Inflectional paradigms have a base: Evidence from s-dissimilation in Southern German dialects. Morphology 17. 151–178.
Han, Eunjoo. 2002. Optimal paradigms in Korean nominal inflections. Studies in Phonetics, Phonology and Morphology 8. 303–322.
Harley, Heidi. 2008. When is a syncretism more than a syncretism? In Daniel Harbour, David Adger & Susana Béjar (eds.) Phi theory: Phi-features across interfaces and modules, 251–294. Oxford: Oxford University Press.
Harley, Heidi & Elizabeth Ritter. 2002. Person and number in pronouns: A feature geometric analysis. Language 78. 482–526.
Harris, James W. 1973. On the ordering of certain phonological rules in Spanish. In Stephen R. Anderson & Paul Kiparsky (eds.) A Festschrift for Morris Halle, 59–76. New York: Holt, Rinehart & Winston.
Harris, James W. 1987. The accentual patterns of verb paradigms in Spanish. Natural Language and Linguistic Theory 5. 61–90.
Hartweg, Frédéric & Klaus-Peter Wegera. 1989. Frühneuhochdeutsch: Eine Einführung in die deutsche Sprache des Spätmittelalters und der frühen Neuzeit. Tübingen: Max Niemeyer Verlag.
Hayes, Bruce. 1999. Phonological restructuring in Yidiɲ and its theoretical consequences. In Ben Hermans & Marc van Oostendorp (eds.) The derivational residue in phonological Optimality Theory, 175–205. Amsterdam & Philadelphia: John Benjamins.
Hayes, Bruce. 2000. Gradient well-formedness in Optimality Theory. In Dekkers et al. (2000), 88–120.
Hermans, Ben & Marc van Oostendorp (eds.) 1999. The derivational residue in phonological Optimality Theory. Amsterdam & Philadelphia: John Benjamins.
Hochberg, Judith G. 1988. Learning Spanish stress: Developmental and theoretical perspectives. Language 64. 683–706.
Hock, Hans Henrich. 1991. Principles of historical linguistics, 2nd edn. Berlin & New York: Mouton de Gruyter.
Hockett, Charles F. 1954. Two models of grammatical description. Word 10. 210–231.
Jacobs, Neil G. 2005. Yiddish: A linguistic introduction. Cambridge: Cambridge University Press.
Joesten, Maria. 1931. Untersuchungen zu ahd. (as.) ë, i, vor u der Folgesilbe und zur 1. Pers. Sg. Präs. Ind. der starken e-Verben (Kl. IIIb, IV, V). Giessen: Wilhelm Schmitz Verlag. Reprinted 1968, Amsterdam: Swets & Zeitlinger N.V.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Kager, René. 2000. Stem stress and peak correspondence in Dutch. In Dekkers et al. (2000), 121–150.
Kang, Yoonjung. 2003. Sound changes affecting noun-final coronal obstruents in Korean. In William McClure (ed.) Japanese/Korean Linguistics 12, 128–139. Stanford: CSLI.
Kang, Yoonjung. 2005. The emergence of the unmarked in an analogical change. Paper presented at the Seoul Forum in Phonology, December 2005, Seoul National University. Available (August 2010) at http://individual.utoronto.ca/yjkang/files/SeoulForum2005.pdf.
Kang, Yoonjung. 2006. Neutralization and variations in Korean verbal paradigms. Harvard Studies in Korean Linguistics 11. 183–196.
Kenstowicz, Michael. 1996. Base identity and uniform exponence: Alternatives to cyclicity. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, 365–394. Salford: ESRI.
Kenstowicz, Michael. 2005. Paradigmatic uniformity and contrast. In Downing et al. (2005b), 145–169.
Kenstowicz, Michael & Charles W. Kisseberth. 1977. Topics in phonological theory. New York: Academic Press.
Kim, Hyunsoon. 1999. The place of articulation of Korean affricates revisited. Journal of East Asian Linguistics 8. 313–347.
Kim, Jinhyung. 2005. A reconsideration of phonological leveling: A case of noun inflection in Korean. Studies in Phonetics, Phonology and Morphology 11. 259–274.
Kim-Renaud, Young-Key. 1974. Korean consonantal phonology. Ph.D. dissertation, University of Hawaii. Seoul: Hanshin.
Kiparsky, Paul. 1972. Explanation in phonology. In Stanley Peters (ed.) Goals in linguistic theory, 189–227. Englewood Cliffs, NJ: Prentice-Hall.
Kiparsky, Paul. 1982. Lexical morphology and phonology. In Linguistic Society of Korea (ed.) Linguistics in the morning calm, 3–91. Seoul: Hanshin.
Kiparsky, Paul. 1998. Covert generalization. In Geert Booij, Angela Ralli & Sergio Scalise (eds.) Proceedings of the 1st Mediterranean Conference of Morphology, 65–76. Patras: University of Patras.
Kiparsky, Paul. 2000. Opacity and cyclicity. The Linguistic Review 17. 351–365.
Kraska-Szlenk, Iwona. 1995. The phonology of stress in Polish. Ph.D. dissertation, University of Illinois.
Kuryłowicz, Jerzy. 1947. The nature of the so-called analogical processes. Diachronica 12. 113–145.
Maiden, Martin. 2005. Morphological autonomy and diachrony. Yearbook of Morphology 2004. 137–175.
Mańczak, Witold. 1958. Tendances générales des changements analogiques. Lingua 7. 298–325, 387–420.
Marantz, Alec. 2007. Phases and words. In Sook-Hee Choe (ed.) Phases in the theory of grammar, 191–222. Seoul: Dong In.
Marvin, Tatjana. 2002. Topics in stress and the syntax of words. Ph.D. dissertation, MIT.
Matthews, Peter H. 1965. Some concepts in Word and Paradigm Morphology. Foundations of Language 1. 268–289.
McCarthy, John J. 1998. Morpheme structure constraints and paradigm occultation. Papers from the Annual Regional Meeting, Chicago Linguistic Society 32(2). 123–150.
McCarthy, John J. 2005. Optimal paradigms. In Downing et al. (2005b), 170–210.
McCarthy, John J. & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 249–384. Amherst: GLSA.
McGinnis, Martha. 2005. On markedness asymmetries in person and number. Language 81. 699–718.
Oltra-Massuet, Isabel & Karlos Arregi. 2005. Stress-by-structure in Spanish. Linguistic Inquiry 36. 43–84.
Paul, Hermann, Peter Wiehl & Siegfried Grosse. 1989. Mittelhochdeutsche Grammatik, 23rd edn. Tübingen: Niemeyer.
Piggott, Glyne L. & Heather Newell. 2006. Syllabification and the spell-out of phases in Ojibwa words. McGill Working Papers in Linguistics 20. 39–64.
Prince, Alan & Paul Smolensky. 2004. Optimality Theory: Constraint interaction in generative grammar. Malden, MA & Oxford: Blackwell.
Reyes, Rogelio. 1974. Studies in Chicano Spanish. Ph.D. dissertation, Harvard University.
Sanders, Nathan. 2003. Opacity and sound change in the Polish lexicon. Ph.D. dissertation, University of California, Santa Cruz.
Skinner, Tobin R. 2008. The consequences of cyclic spell-out for verbal stem-allomorphy. In Susie Jones (ed.) Proceedings of the 2008 Annual Conference of the Canadian Linguistic Association. Available (August 2010) at http://homes.chass.utoronto.ca/~cla-acl/actes2008/CLA2008_Skinner.pdf.
Sohn, Ho-Min. 1999. The Korean language. Cambridge: Cambridge University Press.
Steriade, Donca. 2000. Paradigm uniformity and the phonetics–phonology boundary. In Michael B. Broe & Janet B. Pierrehumbert (eds.) Papers in laboratory phonology V: Acquisition and the lexicon, 313–334. Cambridge: Cambridge University Press.
Stump, Gregory T. 2001. Inflectional morphology: A theory of paradigm structure. Cambridge: Cambridge University Press.
Sturgeon, Anne. 2003. Paradigm uniformity: Evidence for inflectional bases. Proceedings of the West Coast Conference on Formal Linguistics 22. 464–476.
Veselinova, Ljuba N. 2006. Suppletion in verb paradigms: Bits and pieces of the puzzle. Amsterdam & Philadelphia: John Benjamins.
Vincent, Nigel. 1974. Analogy reconsidered. In John M. Anderson & Charles Jones (eds.) Historical linguistics II: Theory and description in phonology, 426–445. Amsterdam: North-Holland Publishing Company.
Williams, Edwin S. 1994. Remarks on lexical knowledge. Lingua 92. 7–34.
Wurzel, Wolfgang Ullrich. 1989. Inflectional morphology and naturalness. Dordrecht: Kluwer.
Zwicky, Arnold M. 1985. How to describe inflection. Proceedings of the Annual Meeting, Berkeley Linguistics Society 11. 372–386.

84 Clitics

Stephen R. Anderson

1 Introduction

The notion of "clitic" derives from one of the oldest problems in the study of language: how to define the "word." Grammarians have long noted that a difficulty is posed in this area by the fact that certain elements in many languages seem to play an independent role in the grammatical structure of sentences, and thus to warrant the status of "grammatical words," but in terms of their sound structure form parts of unitary "words" (in a distinct, phonological sense) with other "grammatical words." Examples such as those in (1) from Homeric Greek are typical of the phenomenon.1

(1) a. hē de kai autōs m' aiei [PP en= athanatoisi theoisi] neikei
       she-n ptc even so me-a always among immortal-d gods-d upbraids
       'even so she always upbraids me among the immortal gods' (Iliad 1.520, apud Taylor 1996: 480)
    b. theios =moi enupnion ēlthen Oneiros
       divine me-d dream came Oneiros
       'divine Oneiros came to me in a dream' (Iliad 2.56, apud Taylor 1990: 35)

1 Clitics are identified in boldface type, with the symbol "=" indicating the direction of their relation to a host.

In the first of these sentences (proclitic) en= is grammatically an independent preposition, but forms a word together with the following athanatoisi 'immortal'. In the second, (enclitic) =moi is a pronominal adjunct 'to me' of the verb ēlthen 'came', but forms a word together with the preceding adjective theios 'divine'. In both cases the independent status of the pro- or enclitic seems assured by the grammar of the sentence, but the unitary status of its combination with a host is confirmed by its phonological (especially accentual) behavior. It is this conflict between two equally well-grounded notions of "word" that brought clitics to the attention of traditional grammarians, and subsequently that of linguists.

The problem as just presented is essentially a phonological one (how to get the phonology to treat two or more elements that appear distinct from the point of view of grammatical structure as one unit). The study of clitics was quickly complicated, however, by the suggestion that the same elements that displayed this anomalous phonological behavior also had specific, idiosyncratic syntactic properties. Jakob Wackernagel (1892) proposed, following Delbrück (1878), that the unstressed clitics of the oldest Indo-European languages (and thus, of Proto-Indo-European) occurred systematically after the first word of the sentence, regardless of their grammatical function. This notion of a special syntax for clitics later became part of the very definition of "clitic" for some linguists, and much of the literature presumes that designating something as a clitic entails special behavior both in the phonology and in the syntax.

It is nonetheless useful to disentangle two distinct dimensions of "clitic" behavior, the phonological and the morphosyntactic, which turn out to be logically (and empirically) orthogonal (see Anderson 2005 for elaboration of this point, as well as related discussion in surveys such as that of Halpern 1998, the papers in Dixon and Aikhenvald 2002, and much other literature both traditional and modern). In the context of the present book, this chapter will focus almost exclusively on the phonological aspects of clitic behavior, and references to "clitics" will be to elements that display the relevant phonological properties (without regard to whether they display unusual syntactic distribution).

2 What is a (phonological) clitic?

As a starting point, we can ask which elements we ought to consider as clitics from such a perspective. The notion of clitic in traditional grammar is that of a "little" word, and in particular one that does not bear an independent accent but rather leans accentually on an adjacent word.2 The proposal that clitics are always unaccented, however, is problematic. For instance, in Modern Greek, enclitics do not usually receive stress; thus, [ˈðose] 'give!', [ˈðose=mu] 'give me!', with no stress on the clitic =mu. But when two such enclitics are attached to the same host, a stress appears on the penultimate one, as in [ˌðose=ˈmu=to] 'give it to me!'.3 This is a consequence of a general rule of Modern Greek that builds a trochaic foot over two otherwise unstressed syllables at the right edge of a word, provided the result does not involve a stress clash. Thus, when a clitic is added to antepenultimate-stressed [triaˈðafilo] 'rose', the result is [triaˌðafiˈlo=mu] 'my rose'. It is not the sequence of clitics per se that results in the penultimate stress in [ˌðose=ˈmu=to], but rather the application of this rule: cf. [ˈpes=mu=to] 'say it to me!', with no stress on =mu in an otherwise identical sequence, because such a stress would clash with that on the monosyllabic stem. On the traditional understanding, the claim that =mu '1sg' is a clitic seems to be compromised by the fact that it sometimes bears an accent, but we can see that this accent is due to the regular phonology of the language, and not to properties of =mu '1sg' itself.

2 The word clitic derives from Greek klitikos 'leaning', from klînein 'to lean'.
3 Arvaniti (1992) provides experimental evidence that, contrary to the proposals of some previous authors, the added stress in such cases is primary, with the original word stress being reduced to secondary. This result has been confirmed by the judgments of several native speakers.

Similarly, in the Papuan language Bilua (Obata 2003), stress is generally initial, but proclitic pronouns do not bear stress, as in (2a). Proclitics can also appear in constructions in which there is no adjacent stressed element, however. This occurs under two conditions: first, a vowel-final clitic does not form part of a word with a following vowel-initial stem, as in (2b); and, second, under some circumstances a cluster of clitics arises which is not associated with any non-clitic host, as in (2c). In these circumstances, the clitic receives stress if initial, as illustrated below.

(2) a. [o= ˈβouβae =k =a]
       3sg.masc kill 3sg.fem.obj tns
       'He killed it.'
    b. [ˈo ˈodie =k =a]
       3sg.masc call 3sg.fem.obj tns
       'He called her.'
    c. [ˈo =k =a] ˈzari=a ˈrae=ng=o
       3sg.masc 3sg.fem prt want-tns marry-2sg.obj-nom
       'He wants to marry you.'
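The Modern Greek pattern just described is essentially algorithmic: add a trochee over the two right-edge unstressed syllables unless its head would sit next to the lexical stress. The following toy Python sketch is my own encoding (not from the sources cited); syllabifications are simplified, and the primary/secondary distinction is ignored, with ' marking any stress:

def encliticize(syllables, stress):
    # `syllables` is the host + clitic sequence; `stress` indexes the
    # lexical stress. Build a trochee over the final two syllables if
    # both are unstressed and its head would not clash with `stress`.
    marks = {stress}
    head = len(syllables) - 2
    if head > stress and head - stress > 1:   # both free, and no *Clash
        marks.add(head)
    return ' '.join(("'" if i in marks else '') + s
                    for i, s in enumerate(syllables))

print(encliticize(['ðo', 'se', 'mu'], 0))               # 'ðo se mu
print(encliticize(['ðo', 'se', 'mu', 'to'], 0))         # 'ðo se 'mu to
print(encliticize(['pes', 'mu', 'to'], 0))              # 'pes mu to (clash)
print(encliticize(['tria', 'ða', 'fi', 'lo', 'mu'], 1)) # tria 'ða fi 'lo mu

The last two calls reproduce the contrast in the text: the added trochee is blocked after monosyllabic [ˈpes], but surfaces on [lo] in [triaˌðafiˈlo=mu].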

Designating as "clitics" exactly those elements that do not bear stress, then, does not appear to give us the results we desire. Instead, it is proposed in Anderson (2005) that the right way to pick out clitics phonologically is as prosodically deficient elements. Let us assume that full words in general have a lexical representation that organizes their phonological content into syllables, feet, and ultimately one or more Phonological Words (PWords; see chapter 51: the phonological word). We can then say that a phonological form realizing some grammatical element, whose segmental content may be organized into syllables and possibly feet but which is not lexically assigned the status of a PWord, is a clitic in the desired phonological sense. This characterization is not compromised by the fact that such a clitic will typically become part of a PWord (perhaps together with other clitics, as in Modern Greek [ˌðose=ˈmu=to] or in the Bilua sentence (2c) above) as a consequence of the principles of prosodic organization of the language in question.

The property of being a clitic in this sense, then, is not necessarily a characteristic of a lexical item, but rather of a phonological form that can realize that lexical item. The same item may well have both clitic and non-clitic forms. The classic example of this is the case of the auxiliary verbs in English: many of these have both full, non-clitic forms (is, has, had, would, will, etc.) and clitic forms ('s, 'd, 'll, etc.). From the point of view of the grammar, these are essentially free variants. If a reduced (clitic) form is chosen to lexicalize the auxiliary in a given sentence, however, this may result in prosodic ill-formedness, as a consequence of the impossibility of incorporating the prosodically deficient item into the overall sound structure of the sentence in a well-formed way (see Anderson 2008 for discussion and analysis). Apart from these differential phonological effects, however, the reduced and unreduced auxiliaries are instantiations of the same grammatical element.

In order to be pronounced, such prosodically deficient material must be incorporated into the larger prosodic structure in some way: thus, the penultimate stress in Modern Greek [ˌðose=ˈmu=to] results from incorporating both enclitics into the same phonological word as the host verb, and then building a trochaic foot over the resultant sequence of unstressed word-final syllables. The Bilua examples result from assigning PWord status to clusters of material that cannot be incorporated into any independent PWord, and then assigning initial stress to this word. It is the characterization of this sort of integration (which we will refer to as Stray Adjunction) that constitutes the phonology of cliticization, and this area of phonology will be central to the discussion in later sections of the present chapter.

3 How do clitics differ from affixes?

Although the characterization of clitics as prosodically deficient grammatical elements appears to capture the phonological dimension of their behavior, it does not pick them out uniquely in grammatical structure. With relatively few exceptions, the affixes found within words as formal markers of derivational and inflectional structure also lack an autonomous organization into prosodic constituents at or above the level of the PWord (see chapter 104: root–affix asymmetries), and the question naturally arises of how clitics and affixes are to be distinguished. The classic characterization of the issues involved is provided by the widely cited work of Zwicky and Pullum (1983), who enumerate a number of differences between clitics and affixes in defense of their analysis of English -n't as the realization of an inflectional category of modals and other auxiliary verbs rather than as a clitic. These include the points in (3).

(3) a. Clitics have a low degree of selection with respect to their hosts; affixes a high degree of selection.
    b. Affixed words are more likely to have accidental or paradigmatic gaps than host + clitic combinations.
    c. Affixed words are more likely to have idiosyncratic shapes than host + clitic combinations.
    d. Affixed words are more likely to have idiosyncratic semantics than host + clitic combinations.
    e. Syntactic rules can affect affixed words, but not groups of host + clitic.
    f. Clitics, but not affixes, can be attached to material already containing clitics.

These points can be illustrated, following Zwicky and Pullum (1983), by the contrasts in (4) between English clitic auxiliaries (e.g. 's 'is, has', 'd 'would') and the element they argue is an inflectional affix, n't 'neg'.

(4) a. The clitic auxiliaries can attach to words of any class that happen to fall at the right edge of the preceding constituent; n't can only be added to finite forms of auxiliary and modal verbs.
    b. Combinations of clitic auxiliaries with preceding material are limited only by the possibilities of the syntax; some combinations of auxiliary + n't do not exist (e.g. *mayn't; in most dialects also *amn't) while one (ain't) does not correspond to a specific non-negative form.
    c. Combinations of host + clitic auxiliary are governed by the regular phonology of English, as seen for instance in regular plurals and past tense forms with the endings /z/ and /d/; forms such as don't, won't, can't, and shan't bear idiosyncratic relations to their non-negative counterparts.
    d. Clitic auxiliaries make the same syntactic and semantic contribution to a sentence as full forms; auxiliaries in n't can have idiosyncratic semantics (thus, in you mustn't go the negation is within the scope of the modal, while in you can't go the modal is in the scope of negation).
    e. Clitic auxiliaries do not move together with their host (thus, a question corresponding to I think John's at the door is Who do you think's at the door? and not *Who's do you think at the door?), while the negated auxiliaries move as a unit (the question corresponding to I haven't any more bananas is Haven't you any more bananas? and not *Have youn't any more bananas?).
    f. While clitics can be added to other clitics (I'd've done better if I could've), n't cannot (thus, I wouldn't do that if I were you cannot be expressed as *I'dn't do that if I were you).

Zwicky and Pullum present these differences as descriptive observations, supported by comparisons between uncontroversial instances of clitics and affixes. They can be argued to follow, however, from the proposal that clitics are introduced into syntactic structure as prosodically deficient, but morphosyntactically independent, elements, while affixed words are formed by lexical operations and appear as units in the syntax.4

Although Zwicky and Pullum formulate some of the principles in (3) only as tendencies, the present account of the nature of clitics suggests that nearly all of them should be construed categorically. The exception to this generally absolute nature of the differences between clitics and affixes is (3d): syntactically compositional idioms can be semantically idiosyncratic (e.g. build castles in the air 'make unrealistic plans or proposals'), and there is no reason to exclude host + clitic combinations from this same possibility. Indeed, many languages assign special meanings to verbs in the presence of particular clitics, such as French il y a 'there is' or the Italian verbi procomplementari studied by Russi (2008). These latter are combinations of a verb with a specific clitic or cluster of clitics, which take on a conventionalized meaning that is not compositionally related to that of the basic verb. Examples are provided in (5).

(5) a. far=la 'deceive, prevail on someone cunningly', from fare 'make, do' + la '3sg DO'
    b. voler=ne 'resent, have hard feelings for someone', from volere 'want' + ne 'partitive'
    c. prender=se=la 'take offense, be upset', from prendere 'take' + si 'refl' + la '3sg DO'

4 The specific framework I presume is roughly that of Anderson (1992). Within that theory, productive inflection results from the operation of Word Formation Rules that take a lexical stem and the morphosyntactic representation of a syntactic position as their input and yield inflected words as their output, while derivation and lexically idiosyncratic inflection result from Word Formation Rules that construct and relate lexical stems. The details of this position are not essential: what matters is the claim that fully inflected words, structured as PWords, appear in the prosodic structure projected from the syntax. Clitics appear in this structure either as prosodically deficient lexical items (e.g. the contracted forms of English auxiliaries; see Anderson 2008) or as "special clitics" introduced (as phrasal morphology) into that structure at a point where non-clitic material is already present, as described in Anderson (2005).

The constructions in which such combinations are found are syntactically normal (e.g. Me l'ha fatta di nuovo 'he tricked me again'), but their interpretation cannot be directly derived from those of their parts.

The rest of the properties in (3) follow from the proposed architecture of grammar. Clitics per se are not selective with respect to their hosts (3a), because they are placed by principles that do not make direct reference to the host (although in the case of "special clitics," the phrasal environment for their introduction may be such that only a restricted range of hosts will be present in the appropriate position). They do not display gaps (3b), because individual host + clitic combinations are not listed as such in the lexicon, and so are not subject to omission. Similarly, such combinations are not available for lexical listing of idiosyncratic form (3c). Host + clitic combinations are not affected in a unitary way by the syntax (3e), because the fact of being a clitic entails only a phonological, not a syntactic, relation to the host. The only way a clitic could appear "inside of" an affix (3f) would be if some special circumstances caused it to be introduced in that way as an "endoclitic." Most of the putative instances of this situation that have been adduced, such as the pronominal clitics that appear between the verbal stem and a future or conditional ending in Portuguese (e.g. mostrár-no-los-á 's/he will show them to us'), appear to have alternative analyses that do not involve "endocliticization" (see Anderson 2005: 152ff.). One exception is the case of Udi as discussed by Harris (2002), which does appear to be a real example. In an Udi form like that in (6), the clitic =ne '3sg' comes between one affix and another.

(6) nana-n äjel-ax ak'-es=ne-d-e k'uŒan
    mother-erg child-dat see-inf-3sg-caus-aorII puppy.abs
    'The mother showed a puppy to the child.'

In a form such as a=z-q’-e ‘I received’, indeed, the clitic element =z ‘1sg’ appears within the monomorphemic root aq’ ‘receive’. The analysis of such cases is extremely interesting, but, as argued in Anderson (2005: 161–165), the principles involved are still consistent with the claim that clitics are added to affixed words, and not the reverse.

4 How are clitics prosodically related to their hosts?

Let us assume, then, that lexical elements appear in the input to the phonology with a certain amount of prosodic organization, and that non-clitics differ from clitics in that only the former are lexically organized into PWords. Clitics and nonclitics alike must be organized into Phonological Phrases (PPhrases) and perhaps higher levels of prosodic structure, although that is of less importance for present concerns. This phrasing can be regarded as being projected at least in part from syntactic structure, but the question remains of how prosodically deficient material is related to adjacent PWords within this overall organization.


The categories of prosodic structure are generally assumed to be related in a hierarchical fashion, with syllables constituting Feet, which are parts of PWords, which are in turn grouped into PPhrases, etc.

(7) The Prosodic Hierarchy
    σ < Foot < PWord < PPhrase < Intonational Phrase . . .

A particularly restrictive view of this hierarchy, known as the Strict Layering Hypothesis, was defended by Nespor and Vogel (1986), for whom the relation between category types was seen as exhaustive at all levels: that is, PPhrases consisted exclusively of PWords, which in turn consisted exclusively of Feet, etc.5 In a paper which is fundamental to the study of clitic phonology, however, Selkirk (1995), following arguments of Inkelas (1989), proposed that the principles of the Prosodic Hierarchy ought to be regarded as a set of individually ranked, violable constraints, and this view has dominated subsequent research (see chapter 33: syllable-internal structure, chapter 40: the foot, chapter 51: the phonological word, and chapter 57: quantity-sensitivity for more discussion of the prosodic hierarchy). Associating positions on the ordering in (7) with consecutive integers, we could express the basic nature of the Prosodic Hierarchy as involving two fundamental requirements.

(8) a. Layeredness
       No Ci dominates a Cj, where j > i (e.g. no Foot contains a PWord).
    b. Headedness (first approximation)
       Every Ci directly dominates some Ci−1 (e.g. every PWord contains a Foot).

The Strict Layering Hypothesis can be expressed as the claim that representations also meet two other requirements.

(9) a. Exhaustivity
       No Ci directly dominates a Cj, where j < i−1 (e.g. no PWord directly dominates a σ).
    b. Non-Recursivity
       No Ci directly dominates another Ci (e.g. no PWord contains another PWord; adjunction structures do not exist).

In order to maintain its logical independence from Non-Recursivity, the formulation of Headedness in (8) can be replaced by the following.

(10) Headedness
     Every Ci directly dominates some Cj, where j ≥ i−1.

5 Nespor and Vogel also posited a category of Clitic Group between the PPhrase and the PWord. Subsequent work, such as Booij (1988) and Zec and Inkelas (1991), has generally concluded that no such distinct prosodic category need be introduced, and it is disregarded here. For some discussion, see Anderson (2005: 42ff.).


As noted already by Selkirk (1995), Layeredness and Headedness are inherent in the nature of the Prosodic Hierarchy; since these notions are in some sense definitional, they are not violable, and if construed as constraints should be treated as always undominated. Another undominated requirement, which we could call that of Full Interpretation, mandates that all phonological material to be pronounced be integrated into the overall prosodic structure, which means in effect that there must be a path from it to the root of the prosodic tree. It is this constraint that enforces the application of some process of Stray Adjunction in the case of material which is otherwise prosodically unaffiliated. The requirements in (9), however, make substantive claims about the range of prosodic structures found in the languages of the world and, as such, are subject to empirical confirmation. Evidence suggests, in fact, that they are violated in some instances, and this is the basis for interpreting them not as definitional of prosodic structure, but as potentially violable constraints. Of these, the conditions in (11), formulated now as constraints, are apparently never violated and so can be regarded as undominated along with Full Interpretation.

(11) a. Layeredness
        No category dominates a higher-level category.
     b. Headedness
        Every category directly dominates (at least) one element no more than one level below it on the hierarchy.

The additional conditions of the Strict Layering Hypothesis can, as we have seen, be violated. Furthermore, violation may be "local," in the sense that a language violating, say, Exhaustivity at the PPhrase level may nonetheless conform to this constraint at other levels, such as the PWord. The relevant principles thus need to be formulated as families of constraints, varying over the categories of the hierarchy as in (12).

(12) a. Exhaustivity(Ci)
        Every element of category Ci is exhaustively composed of elements of category Ci−1.
     b. Non-Recursivity(Ci)
        No element of category Ci directly dominates another instance of Ci.

Adherence to the Strict Layering Hypothesis led Nespor and Vogel to require that clitics always constitute PWords in their own right, sisters of their host within a constituent of the next highest level of the hierarchy. This is somewhat problematic, given that clitics do not generally manifest the properties of independent PWords, such as autonomous stress. If we construe the conditions characterizing the Prosodic Hierarchy in (12) as constraints that can be violated under the pressure of other constraints, however, there are a variety of possible relations that might obtain between a clitic and its host, and Selkirk (1995) justifies the claim that all of these are in fact instantiated. The typology of clitic–host relations that she proposes is as in (13).

(13) a. PWord clitic:6   [ [host]PWd [clitic]PWd ]PPh
     b. Free clitic:     [ [host]PWd clitic ]PPh
     c. Affixal clitic:  [ [ [host]PWd clitic ]PWd ]PPh
     d. Internal clitic: [ [host clitic]PWd ]PPh

6 Since clitics have been defined precisely as elements lacking PWord structure, the notion of a "PWord Clitic" may seem paradoxical. The point is that while clitics do not have such structure underlyingly, the subsequent operation of the language's broader principles of prosodic organization may give rise to such a structure, as we saw in the case of Bilua above.

PWord clitics, of course, are structures that result when all of the constraints in (12) are satisfied, so that Strict Layering obtains. Free clitics, in contrast, result when some other constraint forces violations of Exhaustivity(PPhrase): the PPhrase thus contains a constituent lower in the hierarchy than a PWord, such as a stray syllable or foot. Affixal clitics result when Exhaustivity(PPhrase) is satisfied, but Non-Recursivity(PWord) is not (and Exhaustivity(PWord) is also violated, in case the stray material constituting the clitic is a syllable and not a foot). Internal clitics, like PWord clitics, involve no violations of any of the constraints. Differentiating these two possibilities requires us to invoke another constraint:

(14) Prosodic Faithfulness
     Prosodic structure in the input should be preserved in the output.
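The claims just made about which structures in (13) violate which constraints can be checked mechanically. Here is a minimal Python sketch (my own encoding, not from Selkirk's paper): trees are (category, [subtrees]) pairs, a bare syllable is ('syl', []), and the internal clitic is represented, as a simplification, by footing the clitic syllable together with the host material.

LEVEL = {'syl': 0, 'Ft': 1, 'PWd': 2, 'PPh': 3}

def violations(tree):
    # Collect the Strict Layering constraints of (12) violated anywhere
    # in a prosodic tree (Layeredness/Headedness hold by construction).
    cat, children = tree
    out = set()
    for child in children:
        if child[0] == cat:
            out.add('NonRec(' + cat + ')')
        elif LEVEL[child[0]] < LEVEL[cat] - 1:
            out.add('Exh(' + cat + ')')
        out |= violations(child)
    return out

syl = ('syl', [])
host = ('PWd', [('Ft', [syl, syl])])      # lexical host: a PWd over a Foot

cases = {
    'PWord clitic':    ('PPh', [host, ('PWd', [('Ft', [syl])])]),
    'Free clitic':     ('PPh', [host, syl]),
    'Affixal clitic':  ('PPh', [('PWd', [host, syl])]),
    'Internal clitic': ('PPh', [('PWd', [('Ft', [syl, syl, syl])])]),
}
for name, tree in cases.items():
    print(name, sorted(violations(tree)) or 'no violations')

As the text leads us to expect, only the free clitic incurs Exh(PPh), while the affixal clitic incurs NonRec(PWd) and, because the stray material is a bare syllable, Exh(PWd) as well.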

If we assume that the prosodic structure of the host up to the level of the PWord is present in the input to that part of the phonology enforcing Stray Adjunction, we can see that this structure is preserved intact if the stray material is incorporated as a PWord clitic, but altered if it is incorporated as an internal clitic. The choice between the two, then, depends on the relative importance of Prosodic Faithfulness and some constraint disfavoring the creation of additional PWord structure (say, *Struct). For an example of PWord clitics, we can appeal to Bilua examples such as (2b) and (2c), where other aspects of the structure prevent the incorporation of the clitic into an adjacent PWord, but the prohibition against building new PWords is not ranked highly enough to prevent a new PWord from being formed. The remaining possibilities can be demonstrated through a set of closely related systems analyzed elegantly by Peperkamp (1997). As reviewed below, she argues that Standard Italian post-verbal pronominal clitics have the structure of free clitics, while the corresponding elements in Neapolitan are affixal clitics and those of Lucanian are internal clitics. The three dialects provide a nicely contrasting set, differing minimally in the way clitics are incorporated into prosodic structure, as described in terms of varied rankings of the constraints introduced above. The first system to be considered is that of Neapolitan, as illustrated in (15).

(15) Neapolitan
     imperative   imperative + 'it'   imperative + 'you' + 'it'
     ˈfa          ˈfallə              fatˈtillə                  'do'
     ˈconta       ˈcontalə            ˈcontaˈtillə               'tell'
     ˈpettina     ˈpettinalə          ˈpettinaˈtillə             'comb'

I assume that PWords are built lexically over the host verbs, and then prosodically deficient clitics are added post-lexically. Note that when clitics are added, the first stress does not change except in one case ([fatˈtillə]), where we can say that the new stress appearing on the clitic sequence has the effect of suppressing the original stem stress to avoid violating *Clash (which penalizes a sequence of two adjacent stresses). Peperkamp shows that we can describe this system by saying that the clitic material is adjoined to the existing prosodic word, without modifying its structure, as in (16).

(16) a. [[[lex]PWd cl]PWd]PPh (a single clitic is a bare syllable σ)
     b. [[[lex]PWd [cl1 cl2]Ft]PWd]PPh (two clitics constitute a Foot)

A single clitic constitutes a single syllable, and not a Foot; two clitics, however, provide enough material to constitute a Foot, and thus introduce an additional stress. Peperkamp's discussion suggests that there are aspects of formal suppletion that require the treatment of the two-clitic sequence as a single unit, which is eligible to be a Foot. Alternatively, we could assume simply that the two monosyllabic units are introduced together, and subsequently organized into a Foot.

We can describe this system as follows. Full Interpretation, Headedness, and Layeredness are all undominated well-formedness conditions on the candidates that are to be compared, so they play no part in the ranking. It is also the case that prosodic structure assigned lexically is generally preserved, so Prosodic Faithfulness (14) is also ranked high. In the case of a monosyllabic stem followed by two clitics, however, the need to avoid successive stressed syllables is more important than the preservation of input prosody, so the stress on the stem is lost as a result of the domination of Prosodic Faithfulness by another constraint (17).7

(17) *Clash
     Sequences of two consecutive stressed syllables are disallowed.

7 The fact that it is the first, rather than the second, of two adjacent stresses that is lost must be resolved by other aspects of the prosodic phonology of Neapolitan not considered here.

To satisfy Full Interpretation, prosodically deficient material (i.e. the clitics) must be incorporated into the structure somewhere, and the choices are limited. Incorporation into a foot would violate well-formedness conditions on feet, as well as faithfulness to existing prosodic structure. Incorporation into the existing PWord would also violate faithfulness. Incorporation at the PPhrase level would violate Exhaustivity(PPhrase). The Affixal clitic structures that are actually found indicate that Exhaustivity(PPhrase) outranks Non-Recursivity(PWord): that is, building a recursive PWord preserves the existing prosodic structure, and avoids having lower-level constituents (syllables, feet) directly dominated by a PPhrase. The overall constraint ranking for Neapolitan is as in (18).

(18) *Clash >> Prosodic Faithfulness >> Exhaustivity(PPhrase) >> Non-Recursivity(PWord)

Now compare the Neapolitan approach to Stray Adjunction with that employed in another dialect, Lucanian.

(19) Lucanian
     a. ˈvinnə 'sell'; vənˈnillə 'sell it'
     b. ramˈmillə 'give me it'; mannatəˈmillə 'send me it'

We see in (19a) that the addition of a clitic in this language causes stress to shift rightward.8 Apparently, a binary trochaic foot is constructed over the last two syllables of the form, including both stem and any following clitics. The forms in (19b), with two clitics, have this foot constructed entirely over clitic material. In this language, Stray Adjunction produces Internal clitics, sacrificing Faithfulness to maintain the Strict Layering constraints. The resulting structure for a form with two clitics is as in (20).

(20) [[mannatə [ˈmi llə]Ft]PWd]PPh

8 Stress shift is responsible for the vowel alternation in these forms, with stressed [i] corresponding to unstressed [ə].

The constraint ranking necessary to obtain this result is (21).

(21) Non-Recursivity(PWord), Exhaustivity(PWord) >> Prosodic Faithfulness

Let us finally compare the situation in (standard) Italian, illustrated in (22).

(22) Standard Italian
     a. ˈporta 'bring'; ˈportami 'bring me'
     b. ˈportamelo 'bring me it'; teˈlefonamelo 'telephone it to me'

Here the addition of a clitic does not alter the lexically assigned stress, suggesting that Prosodic Faithfulness is highly ranked. Even when two clitics are added, as in (22b), the stress is not altered, and apparently no new stress is assigned, even though two syllables of additional material would support the construction of a new Foot if this material were within the PWord. Apparently, then, Stray Adjunction in Standard Italian produces free clitics by attachment to the PPhrase, as in (23).

(23) [[ˈporta]PWd me lo]PPh

The required ranking is that in (24).

(24) Non-Recursivity, Exhaustivity(PWord), Prosodic Faithfulness >> Exhaustivity(PPhrase)
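The effect of the three rankings in (18), (21), and (24) can be illustrated with a toy evaluator. This is a sketch under my own simplifying assumptions: the three candidate structures carry the violation profiles discussed above, *Clash is set aside, and for Lucanian I rank Exhaustivity(PPhrase) above Prosodic Faithfulness, since strict layering is maintained there.

CANDIDATES = {
    'free':     {'Exh(PPh)'},
    'affixal':  {'NonRec(PWd)', 'Exh(PWd)'},   # stray syllable adjoined
    'internal': {'ProsFaith'},                 # lexical PWd is rebuilt
}

def winner(ranking):
    # Classic OT evaluation: filter the candidate set constraint by
    # constraint, in ranked order, keeping the least-offending ones.
    pool = dict(CANDIDATES)
    for c in ranking:
        best = min(c in v for v in pool.values())  # False sorts below True
        pool = {k: v for k, v in pool.items() if (c in v) == best}
    return list(pool)

print(winner(['*Clash', 'ProsFaith', 'Exh(PPh)', 'NonRec(PWd)']))    # ['affixal'],  (18)
print(winner(['NonRec(PWd)', 'Exh(PWd)', 'Exh(PPh)', 'ProsFaith']))  # ['internal'], (21)
print(winner(['NonRec(PWd)', 'Exh(PWd)', 'ProsFaith', 'Exh(PPh)']))  # ['free'],     (24)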

Stray Adjunction in these three Italian dialects is thus based on different rankings of the prosodic constraints, yielding three different structural types of clitic as a reflection of these differences in their post-lexical phonology.

Peperkamp argues for the structural differences amongst Italian dialects on the basis of the distribution of stress alone, but sometimes this is insufficient to provide an unambiguous analysis. For example, in the case of a language with stress oriented to the left of the word (or simply preserved by high-ranking faithfulness constraints) and a set of unstressed enclitics, stress alone will not allow us to differentiate among the structures of free, affixal, and internal clitics. To do so, we must establish the location of PWord boundaries in the resulting form. The three possibilities can be distinguished in this way, as in (25).

(25) a. Free clitic:     (. . .)host ]PWd clitic ]PPh
     b. Affixal clitic:  (. . .)host ]PWd clitic ]PWd ]PPh
     c. Internal clitic: (. . .)host clitic ]PWd ]PPh

Determining which of these structures is present in a given instance is certainly not trivial, but it can often be done by looking for phonological phenomena which occur at the edges of PWords or across PWord boundaries. Revithiadou (2008) provides a detailed study of a range of dialects of Modern Greek of exactly this sort, showing that phonological regularities characteristic of prosodic boundaries identify different host–clitic relationships in different dialects. Similar arguments provided by Booij and Rubach (1987) for Polish can be interpreted as showing that proclitic prepositions in that language (e.g. bez 'without' in bez namysłu 'without thinking') are related to a following host as affixal clitics, as reviewed in Anderson (2005: 40f.).

It appears that, in general, the attachment of a clitic to a host on one side or the other can be derived from the overall prosodic organization of a language. Typically, prosodic structure above the level of the PWord is projected from the syntax, and the commonest tendency is for this structure to be respected: that is, a clitic attaches phonologically to the host (on its right or on its left) with which it is most closely affiliated grammatically. In some instances, though, this direction of attachment is directly contravened. In Kwakw'ala, for instance, as discussed at length in Anderson (2005), DP-initial determiner clitics associate phonologically not with the following word, which is part of the same DP, but rather with the preceding word, which is not. An example is provided by the sentence in (26).

(26) jəlkwəmas [=ida bəgwanəma]DP [=£-a ˈwatsi]DP [=s-a gwaøöuøw]DP
     cause.hurt dem man obj-dem dog inst-dem stick
     'The man hurt the dog with the stick.'

Here the square brackets indicate syntactic constituents, while inter-word spaces delineate PWords: thus, /bəgwanəma=£-a/ is a single PWord, while [=ida bəgwanəma]DP is a single DP. This situation can be related to the fact that Kwakw'ala is a language in which virtually all morphological marking is suffixal, and thus the lexical root is always (with the exception of reduplicated forms) word-initial. A preference to maintain this same situation at the level of prosodic structure can be expressed as a constraint such as (27).

(27) Align(PWord, L; LexWord, L) (>> Align(XP, L; PPhrase, L))

That is, it is important that the left edge of a PWord coincide with the left edge of a lexical word (and not e.g. a clitic determiner). This constraint is more highly ranked than the requirement that the left edges of syntactic phrases coincide with the left edges of PPhrases, and forces the clitics to associate anti-syntactically to their left.

The claim that the direction of attachment of clitics can be derived from the prosodic organization of the language as a whole (including constraints such as the one in (27)) is a strong one. It is at variance with proposals such as that of Klavans (1985), where it is claimed that among the dimensions defining individual clitics in a language is a parameter of direction of attachment. Subsequent research has suggested, however, that once grammatical structure and its relation to prosody are taken into account, a unitary analysis can be offered for the way clitics attach in any individual language. Counterexamples to this claim would have to involve pairs of clitics that were entirely comparable in their grammar, but where one attached to a host on its left and the other to a host on its right (under otherwise identical prosodic conditions). Such examples do not appear to exist, and it seems reasonable to propose that the direction of attachment of clitics is a function of the overall grammar of a language, rather than a property of individual clitics.
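The Kwakw'ala competition in (26)-(27) can likewise be sketched as a choice between the syntactic and the anti-syntactic parse (a toy Python illustration; the parse representations and the scoring functions are my own simplifications, not Anderson's formalism):

# Two parses of a Kwakw'ala-type string "V =det N"; each parse lists
# its PWords, and '=' marks the clitic determiner.
PARSES = {
    'syntactic (=det leans right)':     [['V'], ['=det', 'N']],
    'anti-syntactic (=det leans left)': [['V', '=det'], ['N']],
}

def align_pword_lex(parse):
    # Align(PWord, L; LexWord, L): one mark per PWord whose left edge
    # is a clitic rather than a lexical word.
    return sum(w[0].startswith('=') for w in parse)

def align_xp_pph(parse):
    # Align(XP, L; PPhrase, L): the DP [=det N] wants its left edge at
    # a prosodic edge; one mark if =det ends up PWord-internal.
    return sum('=det' in w and w[0] != '=det' for w in parse)

for name, parse in PARSES.items():
    print(name, (align_pword_lex(parse), align_xp_pph(parse)))

Ranked as in (27), the profile (0, 1) of the anti-syntactic parse beats the (1, 0) of the syntactic one, so the determiner leans leftward.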


In summary, clitics can be characterized from a phonological point of view as linguistic elements lacking in prosodic structure at (or below) the level of the PWord. Linguistic units that are called "clitics" on the basis of unusual syntactic behavior may or may not be clitics in this sense: for example, Italian loro '3pl dat' behaves in a way which is partially similar to the other Italian pronominal clitics, but loro is not prosodically deficient, and thus does not constitute a clitic from the phonological point of view. Similarly, Hungarian verbal prefixes such as oda in oda-ment-em 'I went over there' constitute PWords in their own right (as shown by stress and vowel harmony), and thus are not phonological clitics, even though they bear a special grammatical relation to an associated verb.

Material that is not fully integrated into prosodic structure (at the PWord level) in the input can be called "stray," and the phonology of cliticization is fundamentally a matter of how this stray material is incorporated into the overall prosodic structure of the sentence: how "Stray Adjunction" is enforced. The basic mechanics of this can be described by an ordering of the constraints characterizing prosodic layering with respect to one another and to other constraints within the grammar of the language in question. Arguments for this ranking can be provided either directly from properties of the resulting prosodic structure (such as the location of stress) or from other phonological phenomena that are sensitive to it.

5 How is the segmental phonology of a clitic related to that of its host?

A consequence of the grammatical architecture proposed here concerns the phonology applicable to clitic + host combinations. Since the formation of these presupposes the forms of lexical words, it would appear that, in terms of classical Lexical Phonology (e.g. Kiparsky 1985; see also chapter 94: lexical phonology and the lexical syndrome), any adjustments to their shape must follow from principles of the post-lexical phonology, not the lexical phonology sensu stricto. Bermúdez-Otero and Payne (forthcoming) note this, but assert that examples exist which controvert it: cases in which host + clitic combinations are affected by rules that are lexical, not post-lexical, in character.

The one such example cited by Bermúdez-Otero and Payne concerns laryngeal neutralization in Catalan. They argue, following the descriptive literature (e.g. Wheeler 2005), that voicing is neutralized in coda obstruents in this language (see chapter 69: final devoicing and final laryngeal neutralization). When these are closely followed by an onset consonant (in the same or a following word), they show the same voicing as that consonant, and it is plausible to attribute this to assimilation. Word-finally, however, coda obstruents are devoiced; and this devoicing persists even if the consonant in question is resyllabified post-lexically with a following vowel. These facts are illustrated in (28) for the stem /ʎob/ 'wolf', which ends in underlying voiced /b/.

(28) a. llop        [ʎop]          'wolf'
     b. llop lliure [ʎob.ʎiw.re]   'free wolf'
     c. llop trist  [ʎop.trist]    'sad wolf'
     d. lloba       [ʎo.βə]        'she-wolf'
     e. llop amic   [ʎo.pə.mik]    'friendly wolf'


They suggest that there must be a "word-level" principle of laryngeal neutralization which is counterbled by post-lexical resyllabification in forms like (28e). In forms where a stem-final voiced obstruent is followed by a vowel-initial clitic, however, the pattern is subtly different: resyllabification bleeds laryngeal neutralization, as illustrated in (29) for the stem /reb/ 'receive'.

(29) a. rebre     [re.βrə]    'receive (inf)'
     b. rep això! [re.pə.ʃɔ]  'receive (2sg.imp) that!'
     c. rep=ho!   [re.βu]     'receive (2sg.imp)-3sg.acc.n!'

Why should there be a difference in voicing between the stem-final /b/ as it appears in (29b) and in (29c)? Bermúdez-Otero and Payne conclude that this must be because the clitic in (29c) is already present (and has triggered resyllabification) at the point where Laryngeal Neutralization takes place:

    These data show unequivocally that enclitic =ho belongs in the same grammatical word as the verb stem, since it causes the stem-final consonant to be syllabified as an onset already at the word level [. . .] Therefore, enclitic =ho cannot be a phrasal affix.

This conclusion does not follow, however. It results from Bermúdez-Otero and Payne's equation of "word-level" phonology with "lexical" phonology, and the assumption that the "post-lexical" phonology is monolithic. In fact, however, we can take the "word-level" character of laryngeal neutralization to refer to the PWord, not (as Bermúdez-Otero and Payne do) to the grammatical word. If we assume that post-verbal pronominal clitics in Catalan are affixal clitics, the result of stray adjunction in (29c) will be [[[reb]PWd u]PWd]PPh. This entire construction is a PWord, and it is plausible to assume that resyllabification of this PWord yields a structure like [[re]PWd.bu]PWd, bleeding Laryngeal Neutralization. In (29b), however, the structure is [[reb]PWd [ə.ʃɔ]PWd]PPh. Laryngeal Neutralization, a rule whose scope is the PWord, converts this to [[rep]PWd [ə.ʃɔ]PWd]PPh, which is subsequently resyllabified at the PPhrase level to [[re]PWd [pə.ʃɔ]PWd]PPh. Resyllabification at the PPhrase level does not bleed Laryngeal Neutralization, but Resyllabification at the PWord level does.

Since Bermúdez-Otero and Payne do not show that Laryngeal Neutralization has other characteristics of a "lexical" rather than "post-lexical" process (e.g. lexical exceptions), it follows only that the post-lexical phonology displays a sort of cyclic structure, with a round of phonological adjustment induced by each of the categories of the Prosodic Hierarchy, and not that clitics like Catalan =ho are not phrasal affixes. The notion that phonological regularities enforced at different levels of the Prosodic Hierarchy (such as the PWord vs. the PPhrase) can be at least partially distinct is a cornerstone of prosodic theory, and a basic way in which one argues that a given prosodic constituent is of one type rather than another (see Nespor and Vogel 1986).

I conclude, then, that the phonology relating clitics to their hosts is in general of the "post-lexical" type, with the specifics depending on the regularities governing various prosodic constituent types within a given language. Given the current state of instability that governs the architecture of phonological theory, with classical rule-based Lexical Phonology and its most direct constraint-based descendant, Stratal OT (Bermúdez-Otero, forthcoming), in conflict both with the "standard" monolithic model of OT and also with various alternatives such as OT-CC (McCarthy 2007) and Optimal Interleaving (Wolf 2008), Phase-based Phonology, as represented by various papers in Grohmann (2009), and others, it is difficult to see the facts above from Catalan as decisively incompatible with the view of clitics as phonologically integrated with their hosts at the syntactic level, rather than in the lexicon.

6

Conclusion

There is very little to the phonology of clitics, then, that is unique to these elements. In terms of their representation, they have the character of being incompletely organized in prosodic terms: they are deficient in not constituting PWords, as opposed to normal lexical items. Once that is taken into account, the rest of their behavior follows from the prosodic phonology of the language. Aspects of prosodic well-formedness require that they undergo Stray Adjunction, or incorporation into adjacent prosodic units at some level, in ways that depend on the language’s particular ranking of constraints governing prosodic structure. The language’s “post-lexical” phonology (in some appropriate, architecture-dependent sense) then governs adjustments in the phonological shape of the resulting combination of clitic and host. Neither the prosodic organization nor the phonological adjustments involved are uniquely identified with clitics, although clitics may well provide essential clues in the determination of how the phonology (including prosody) of a language works. A full treatment of the linguistic category of “clitics,” of course, would have to deal with more than the phonological characteristics of items so designated. In particular, the principles underlying the distinctive (morpho)syntactic behavior of “Special Clitics” must be elucidated. Linguistic items that show clitic behavior in the morphosyntactic sense are usually, though not always, prosodically deficient and thus phonologically clitic as well. The analysis of this dimension of the (not entirely homogeneous) class of “clitics” would, however, take us much too far afield in the context of this Companion, and the interested reader can only be referred to Anderson (2005) for the development of one view.

ACKNOWLEDGMENTS This work was supported in part by NSF award #BCS 98–76456 to Yale University. I thank two anonymous referees and the editors of the Companion for useful comments, and also Argyro Katsika for her assistance with the phonology of Modern Greek. None of these people is to be held accountable for my use of the information they supplied, of course.

REFERENCES Anderson, Stephen R. 1992. A-morphous morphology. Cambridge: Cambridge University Press. Anderson, Stephen R. 2005. Aspects of the theory of clitics. Oxford: Oxford University Press. Anderson, Stephen R. 2008. English reduced auxiliaries really are simple clitics. Lingue e Linguaggio 7. 169 –186.

17

Stephen R. Anderson

Arvaniti, Amalia. 1992. Secondary stress: Evidence from Modern Greek. In Gerard J. Docherty & D. Robert Ladd (eds.) Papers in laboratory phonology II: Gesture, segment, prosody, 398–423. Cambridge: Cambridge University Press. Bermúdez-Otero, Ricardo. Forthcoming. Stratal Optimality Theory. Oxford: Oxford University Press. Bermúdez-Otero, Ricardo & John Payne. Forthcoming. There are no special clitics. In Alexandra Galani, Glyn Hicks & George Tsoulas (eds.) Morphology and its interfaces. Amsterdam & Philadelphia: John Benjamins. Booij, Geert. 1988. Review of Nespor & Vogel (1986). Journal of Linguistics 24. 515 –525. Booij, Geert & Jerzy Rubach. 1987. Postcyclic versus postlexical rules in Lexical Phonology. Linguistic Inquiry 18. 1– 44. Delbrück, Berthold. 1878. Syntaktische Forschungen, vol. 3: Die altindische Wortfolge aus dem çatapathabrâ mana. Halle: Buchhandlung des Waisenhauses. Dixon, R. M. W. & Alexandra Y. Aikhenvald (eds.) 2002. Word: A cross-linguistic typology. Cambridge: Cambridge University Press. Grohmann, Kleanthes K. (ed.) 2009. Interphases: Phase-theoretic investigations of linguistic interfaces. Oxford: Oxford University Press. Halpern, Aaron L. 1998. Clitics. In Andrew Spencer & Arnold M. Zwicky (eds.) The handbook of morphology, 101–122. Oxford & Malden, MA: Blackwell. Harris, Alice C. 2002. Endoclitics and the origins of Udi morphosyntax. Oxford: Oxford University Press. Inkelas, Sharon. 1989. Prosodic constituency in the lexicon. Ph.D. dissertation, Stanford University. Kiparsky, Paul. 1985. Some consequences of Lexical Phonology. Phonology Yearbook 2. 85–138. Klavans, Judith L. 1985. The independence of syntax and phonology in cliticization. Language 61. 95 –120. McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox. Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris. Obata, Kazuko. 2003. A grammar of Bilua, a Papuan language of the Solomon islands. (Pacific Linguistics 540.) Canberra: Australian National University. Peperkamp, Sharon. 1997. Prosodic words. Ph.D. dissertation, University of Amsterdam. Revithiadou, Anthi. 2008. A cross-dialectal study of cliticization in Greek. Lingua 118. 1393–1415. Russi, Cinzia. 2008. Italian clitics: An empirical study. Berlin & New York: Mouton de Gruyter. Selkirk, Elisabeth. 1995. The prosodic structure of function words. In Jill N. Beckman, Laura Walsh Dickey & Suzanne Urbanczyk (eds.) Papers in Optimality Theory, 439–469. Amherst: GLSA. Taylor, Ann. 1990. Clitics and configurationality in Ancient Greek. Ph.D. dissertation, University of Pennsylvania. Taylor, Ann. 1996. A prosodic account of clitic position in ancient Greek. In Aaron L. Halpern & Arnold M. Zwicky (eds.) Approaching second: Second position clitics and related phenomena, 477–503. Stanford: CSLI. Wackernagel, Jacob. 1892. Über ein Gesetz der indogermanischen Wortstellung. Indogermanische Forschungen 1. 333–436. Wheeler, Max W. 2005. The phonology of Catalan. Oxford: Oxford University Press. Wolf, Matthew. 2008. Optimal interleaving: Serial phonology–morphology interaction in a constraint-based model. Ph.D. dissertation, University of Massachusetts, Amherst. Zec, Draga & Sharon Inkelas. 1991. The place of clitics in the prosodic hierarchy. Proceedings of the West Coast Conference on Formal Linguistics 10. 505 –519. Zwicky, Arnold M. & Geoffrey K. Pullum. 1983. Cliticization vs. inflection: English n’t. Language 59. 502–513.

85 Cyclicity Ricardo Bermúdez-Otero

1

Introduction

The phonology of a natural language will often treat the same string differently according to whether it is wholly contained within a single morph, arises through a morphological operation like affixation, or straddles the edges of two adjacent grammatical words. In the generative tradition there is a widespread and longstanding consensus that such morphosyntactic conditioning effects may come about in two ways: representationally or procedurally (Scheer 2008: §3ff.; see Table 85.1). Representational morphosyntactic conditioning occurs when phonological processes are sensitive to the presence or absence of certain phonological objects – boundary symbols in SPE, prosodic categories in most later frameworks – which are in turn positioned by reference to the edges of morphosyntactic units. In procedural morphosyntactic conditioning, in contrast, morphosyntax directly controls the amount of structure visible during a given round of phonological computation, either by submitting to the phonology only a morphosyntactic subconstituent of a complete linguistic expression (as in the theory of the cycle) or Table 85.1 Two types of morphosyntactic conditioning acknowledged throughout the history of generative phonology Theory

Representational effects

Procedural effects

Sample reference

SPE

boundary symbols (+, #)

the cycle

Chomsky & Halle (1968)

Lexical Phonology

prosodic units (built by rules)

the cycle (with levels)

Booij & Rubach (1984)

Stratal OT

prosodic units (controlled by Align)

the cycle (with levels)

Bermúdez-Otero & Luís (2009)

Classical OT

prosodic units (controlled by Align)

OO-correspondence

Raffelsiefen (2005)

Lateral Phonology

empty CV units

the cycle (phases)

Scheer (2008)

The Blackwell Companion to Phonology. Edited by Marc van Oostendorp, Colin J. Ewen, Elizabeth Hume, and Keren Rice. © 2011 John Wiley & Sons, Ltd. Published 2011 by John Wiley & Sons, Ltd. DOI: 10.1002/9781444335262.wbctp0085

Ricardo Bermúdez-Otero

2

by allowing the phonology access to the surface representation of some morphosyntactically related expression (as in the theory of transderivational or output– output correspondence, henceforth OO-correspondence). This chapter addresses current debates about procedural morphosyntactic conditioning in phonology, focusing in particular on the contest between the cycle and OO-correspondence (§5–§9); we shall be concerned with prosody only insofar as it raises the non-trivial problem of demarcating procedural from representational effects (§4). Much of the discussion will be taken up with three instances of morphosyntactically induced misapplication that challenge the basic premises of transderivational theories: in all three cases, the surface bases needed for an analysis relying on OO-correspondence appear to be unavailable for phonological or morphological reasons (§6–§8). As the argument unfolds, however, it will become clear that questions about morphosyntax–phonology interactions are intricately entangled with problems in every other area of phonology, notably including the theory of representations, the phonology–phonetics interface, and the balance between synchronic and diachronic explanation.

2

Two cases of cyclic misapplication in English: Post-nasal plosive deletion and Belfast dentalization

Let us begin with a well-known instance of morphologically induced overapplication. Present-day English tolerates homorganic consonant clusters consisting of a nasal followed by a non-coronal voiced plosive (i.e. [b] or [g]) only if the latter is syllabified in onset position; if the plosive would otherwise surface in the coda, it undergoes deletion (Borowsky 1993: 202).1 (1)

a.

bomb thumb crumb long

[bCm] [hZm] [kPZm] [lCI]

b.

bombardV thimble crumble elongateV

[‘bCm.’bA(d] [’h>m.b=] [’kPZm.b=] [’i(.lCI.‘ge>t]

The forms in (1a) and (1b) display normal application and non-application of deletion, respectively. In (2a), however, the process overapplies: the plosives [b] and [g] fail to surface stem-finally, even though in that position they would be syllabified as onsets; cf. (2b).2 (2)

1

a.

bomb-ing thumb-ing crumb-y long-ish

[’bC.m>I] [’hZ.m>I] [’kPZ.m>] [’lC.I>œ]

b.

*[’bCm.b>I] *[’hZm.b>I] *[’kPZm.b>] *[’lCI.g>œ]

All varieties of English exhibit post-nasal /b/-deletion; /g/-deletion varies across dialects. ThumbN and thimble are highly unlikely to be synchronically related, and so native speakers probably have no reason to derive the noun thumbN or the converted verb thumbV from a root /hZmb/. If so, the gerund thumb-ing [’hZ.m>I] is in fact transparent. This does not affect our argument, however: the key point is that the grammar of English systematically disallows transparent alternations between infinitives ending in [. . . Vm] and gerunds ending in [. . . Vm.b>I]. 2

Cyclicity

3

According to the theory of the phonological cycle, first formulated by Chomsky et al. (1956: 75), the key to such instances of morphosyntactically induced misapplication is to be found in part–whole relationships within the grammatical constituent structure of the relevant linguistic expressions. Consider, for example, the morphological structure of the adjective long, the verb elongate, and the derived adjective longish. (3)

a.

Aword©

c.

b. Aword©

Vword©

Astem©

Vstem©

Astem Astem©



afx



Vafx



lCIg

i(

lCIg

e>t

lCIg

Aafx



Let us suppose that some of the morphosyntactic constituents shown in (3) define domains for phonological computation; I shall henceforth refer to these as “cyclic nodes.” Assume, at a minimum, that the set of cyclic nodes in (3) includes every stem immediately derived from a root, as well as every fully inflected, syntactically free grammatical word; these are flagged with a superscript ©. Given these premises, one obtains the following nested domain structures: (4)

a. [[lCIg]]

b.

[[i(-lCIg-e>t]]

c.

[[lCIg] >œ]

Now suppose that phonological computation proceeds iteratively, starting with the domains defined by the smallest, most deeply embedded cyclic nodes, and then moving to larger, less deeply embedded cyclic nodes: in other words, suppose that the computation of the phonological form of the parts precedes and feeds the computation of the phonological form of the whole. (5) inner cycle outer cycle

a. [[lCIg ]] lCI —

b.

[[i(-lCIg-e>t]] i(.lCI.ge>t —

c.

[[lCIg] >œ] lCI lC.I>œ

According to this cyclic analysis, post-nasal plosive deletion overapplies in longish because its conditions are met within a morphosyntactic subconstituent, the stem long-, which defines a cyclic domain by itself. The environment for deletion disappears in the outer cycle, as the vowel of the suffix -ish projects a syllable with an onset capable of sheltering the underlying /g/; but deletion has already applied in the inner cycle. The result is a counterbleeding interaction. Observe that not all morphosyntactic constituents trigger phonological cycles. In (3b) and (4b), for example, it is absolutely crucial that roots (as opposed to stems) should not count as cyclic nodes (Kiparsky 1982: 32–33; Inkelas 1989: §3.5.5); otherwise, post-nasal plosive deletion would incorrectly overapply in

Ricardo Bermúdez-Otero

4

e-long-ate.3 A fully articulated theory of the cycle must of course specify criteria for designating particular morphosyntactic nodes as cyclic or non-cyclic. Scholars working in the tradition of Lexical Phonology (e.g. Kiparsky 1982; Hargus and Kaisse 1993) and Stratal OT (e.g. Bermúdez-Otero 1999, 2003; Kiparsky 2000) have reached a broad consensus on a number of points, sometimes strikingly at variance with phase theory in minimalist syntax (Chomsky 2001): e.g. there appear to be no cyclic nodes between the grammatical word (X°) and the utterance (see further Scheer 2008: §740ff.). Many other issues remain open: e.g. whether or not certain affixes should be allowed to define cyclic domains by themselves (Baker 2005; see also Mohanan 1982, and cf. McCarthy 2007: 133–134). To account for morphologically induced underapplication, the theory of the cycle needs to be supplemented with the concept of level or stratum (chapter 94: lexical phonology and the lexical syndrome).4 Consider, for example, the process of dentalization found in certain varieties of English spoken in and around Belfast: the coronal non-continuants /t d n l/ have alveolar realizations unless immediately followed by /(H)P/, in which case they become dental (Harris 1985: 58, 211ff.). (6)

a. b.

train true Peter matter

[}Ó>Hn] [}Ór(] [’pi}HP] [’ma}HP]

drain drew ladder rudder

[{Ô>Hn] [{Ôr(] [’la{HP] [’P*{HP]

dinner spanner

[’dë|HP] [’spä|HP]

pillar [’pë*HP]

Dentalization underapplies when its environment is created by adding a suffix like agentive -er (7a) or comparative -er (7b) to a free stem, although it does apply normally when comparative -er is suffixed to a suppletive bound root (7c). (7)

a.

b. c.

hea[t]er wai[t]er shou[t]er fa[t]er la[t]er better cf. better

loa[d]er

di[n]er ‘diner’ ru[n]er

ki[l]er

lou[d]er

fi[n]er

coo[l]er

[’bæ}HP] ‘good.comparative’ [’bætHP] ‘one who bets’

Applying the same principles as in (4), we obtain cyclic domain structures like the following: (8)

a.

[[train]] [[Peter]] [[bett-er]] ‘good.comparative’

b. [[heat] er] [[ fat] er] [[bet] er] ‘one who bets’

However, it becomes immediately apparent that, if dentalization applies in every cycle, then cyclic derivation will just fail to produce the desired underapplication in (8b): dentalization will simply take place in the outer cycle. 3

For the root-based status of this form, compare verbs like e-dulcor-ate, e-mancip-ate, e-viscer-ate, etc., which are manifestly derived from uninflectable bound bases. 4 Some scholars pursue an alternative approach, based on Chomsky’s (2001) Phase Impenetrability Condition: e.g. Marvin (2002), Scheer (2008).

Cyclicity

5

Many cyclic frameworks solve this problem by asserting that phonological domains associated with morphosyntactic constituents of different kinds may be subject to different phonological generalizations: in common usage, such morphosyntactic constituents are said to “belong to different phonological levels.” Theories differ as to the number of phonological levels that may be distinguished within the grammar of a single language. Lexical Phonology and Stratal OT often assume that each grammar specifies precisely three levels: the stem, word, and phrase levels. In affixal constructions, the ascription of the construction to the stem-level or word-level phonology is deemed to depend on properties both of the base (Giegerich 1999) and of the affix: the attachment of an affix to a root necessarily produces a stem-level category; the attachment of an affix to a stem may produce a stem-level or word-level category depending on the idiosyncratic affiliation of the affix (e.g. Bermúdez-Otero 2007d: 283). In constrast, full grammatical words trigger word-level cycles and complete utterances trigger phrase-level cycles. In the case of Belfast English, one must assume that dentalization applies only within stem-level domains, and that agentive -er and comparative -er are word-level suffixes unless attached to bound roots. This yields the appropriate counterfeeding relationship between stem-level dentalization and word-level suffixation.

SL (dentalization on) WL (dentalization off)

Peter WL[ SL[Peter]] } –

‘good.comparative’ WL[ SL[bett-er]] } –

SL (dentalization on) WL (dentalization off)

fatt-er WL[ SL[ fat] er] – –

‘one who bets’ WL[ SL[bet] er] – –

(9)

3

The Russian Doll Theorem and the life cycle of phonological processes

There is no room in this brief chapter to review all the predictions about morphosyntactically induced misapplication that follow from the theory of the cycle. It will therefore be appropriate to concentrate here on one of the most fundamental: (10)

The Russian Doll Theorem Let there be the nested cyclic domains c[ . . . b[ . . . a[. . .] . . .] . . .]. If a phonological process p is opaque in b because its domain is a, then p is opaque in c.

To my knowledge, this entailment of cyclic theory has not been formally enunciated before, probably because it has been considered so obviously true as to be entirely trivial. Later, however, we shall see that OO-correspondence is easily capable of violating the Russian Doll Theorem and captures its effects only by stipulation (§9).

Ricardo Bermúdez-Otero

6

The Russian Doll Theorem has the following corollary: (11)

If a phonological process exhibits cyclic misapplication within a certain phonological configuration created by affixation, then it must also exhibit cyclic misapplication if the same configuration arises by word concatenation.

This follows logically from elementary facts of morphosyntactic layering: a phonological process can cyclically misapply in the presence of an affix only if that affix is excluded from its cyclic domain, which must therefore correspond to a morphosyntactic category smaller than the grammatical word, i.e. a stem; but, by its very nature, a stem cannot straddle the edges of adjacent words. Post-nasal plosive deletion (§2) bears out this prediction: overapplication before word-level suffixes beginning with a vowel, as in long-ish [’lC.I>œ], entails overapplication in word-final prevocalic environments, as in long effect [‘lC.I>.’fekt]; cf. *[‘lCI.g>.’fekt]. Not only does corollary (11) hold true of post-nasal plosive deletion in present-day English, but it also captures key facts in the diachronic evolution of the process. For /Ig/ clusters, in particular, we can reconstruct the four historical stages shown in (12): Stage 0 represents the situation in Early Modern English; Stage 1 is attested in the formal, relatively conservative register of eighteenth-century orthoepist James Elphinston; Stage 2 corresponds to Elphinston’s description of his own casual, more innovative register; and Stage 3 is observed in present-day RP (Garrett and Blevins 2009: 527–528). The symbol „ represents pause (i.e. the end of the phonological utterance). (12) elongate prolong-er prolong it prolong „

0 Ig Ig Ig Ig

Stage 1 2 Ig Ig Ig Ig Ig I I I

3 Ig I I I

In compliance with (11), the diachronic transition from normal application (Stage 1) to word-internal overapplication (Stage 3) is effected through an intermediate phase involving overapplication at word boundaries but normal application word-internally (Stage 2). More generally, the diachronic pathway shown in (12) provides a clear illustration of the typical life cycle of phonological processes, which stratal-cyclic frameworks capture in a particularly perspicuous way; see e.g. Bermúdez-Otero (1999: 99–103, 239–240; 2007b: 503) and McMahon (2000: ch. 4). First, phonetically driven innovations enter the grammar from below as gradient phonetic rules, which later become stabilized as categorical phonological processes applying across the board at the phrase level (Bermúdez-Otero 2007b: 505; see also chapter 89: gradience and categoricality in phonological theory; chapter 93: sound change): in (12), this is the transition from Stage 0 to Stage 1. Subsequently, analogical change causes the new phonological process to climb up to progressively higher levels, concomitantly narrowing down its domain of application (Dressler 1985: 149): in (12) we see deletion climbing up from the phrase level (Stage 1) to the word level (Stage 2), and from the word level (Stage 2) to the stem level

Cyclicity

7

(Stage 3). Eventually, senescent processes typically undergo morphologization or lexicalization: see Bermúdez-Otero (2008) for incipient symptoms of this in the case of post-nasal plosive deletion. The overall sequence of events is represented in greater detail in (13), where Ì indicates a site of /g/-deletion. (13)

level

deletion?

elongate

a. Stage SL WL PL b. Stage SL WL PL c. Stage SL WL PL

prolonging „

0: Early Modern English no [i(.lCI.ge>t] [pPH.lCIg][>Ig] no [i(.lCI.ge>t] [pPH.lCI.g>Ig] no [i(.lCI.ge>t] [pPH.lCI.g>Ig] 1: Elphinston’s formal register no [i(.lCI.ge>t] [pPH.lCIg][>Ig] no [i(.lCI.ge>t] [pPH.lCI.g>Ig] yes [i(.lCI.ge>t] [pPH.lCI.g>IÌ] 2: Elphinston’s casual register no [i(.lCI.ge>t] [pPH.lCIg][>Ig] yes [i(.lCI.ge>t] [pPH.lCI.g>IÌ] yes [i(.lCI.ge>t] [pPH.lCI.g>I] (vacuous) d. Stage 3: Present-day RP SL yes [i(.lCI.ge>t] [pPH.lCIÌ][>IÌ] WL yes [i(.lCI.ge>t] [pPH.lC.4>I] (vacuous) PL yes [i(.lCI.ge>t] [pPH.lC.4>I] (vacuous)

prolong it

prolong „

[pPH.lCIg][>t] [pPH.lCIg][>t] [pPH.lCI.g>t]

[pPH.lCIg] [pPH.lCIg] [pPH.lCIg]

[pPH.lCIg][>t] [pPH.lCIg][>t] [pPH.lCI.g>t]

[pPH.lCIg] [pPH.lCIg] [pPH.lCIÌ]

[pPH.lCIg][>t] [pPH.lCIÌ][>t] [pPH.lC.4>t]

[pPH.lCIg] [pPH.lCIÌ] [pPH.lCI]

[pPH.lCIÌ][>t] [pPH.lCI][>t]

[pPH.lCIÌ] [pPH.lCI]

[pPH.lC.I>t]

[pPH.lCI]

The analogical changes involved in the transitions between Stages 1 and 2 and between Stages 2 and 3 were driven by input restructuring (Bermúdez-Otero and Hogg 2003: 105ff.; Bermúdez-Otero 2006: 501ff.). At Stage 1, for example, surface PL[pPH.lCI„] was derived unfaithfully from word-level WL[pPH.lCIg] by a phrase-level application of deletion. By Stage 2, however, PL[pPH.lCI„] has been re-analyzed as derived faithfully from an identical word-level representation WL[pPH.lCI]. This has the effect of introducing deletion into the word-level phonology, and gives rise to the innovative opaque surface form PL[pPH.lC.I>t], derived from word-level WL[pPH.lCI] WL[>t]. Bermúdez-Otero (1999: 100–103, 239–240; 2003: 4ff.) outlines an approach to phonological learning that accounts straightforwardly for such patterns of recurrent input restructuring.

4

Cyclicity vs. prosody

In §2 we assumed that the morphosyntactic conditioning effects displayed by post-nasal plosive deletion and Belfast dentalization were procedural, not representational (see §1). However, several scholars have proposed that the behavior of English word-level suffixes should be explained prosodically, rather than cyclically (e.g. Szpyra 1989: 178–200; Hammond 1999: 322–329). In this approach, suffixes like agentive -er and adjectival -ish are not incorporated into the prosodic

Ricardo Bermúdez-Otero

8

word containing the stem, but attach under a second projection of w (chapter the phonological word): (14)

a.

b.

w′

51:

w′



q



q

long

ish

fat

er

If this were true, then the absence of dentalization in Belfast fatt-er [’fatHP], cf. *[’fa}HP] (§2), could be described as a case of transparent non-application, rather than as an instance of opaque underapplication: one would just need to stipulate that dentalization does not apply unless its conditions are met within the first projection of w. (15)

G coronal J → [+distributed] / I−continuantL

[ . . . __ (H)P . . . ]



The uncertainty whether a particular instance of morphosyntactic conditioning in phonology should be analyzed procedurally or representationally is in fact one of the most serious and recurrent obstacles faced by empirical research into the morphosyntax–phonology interface. A great deal of existing work fails to make the cut on explicit, consistent, and principled grounds (Raffelsiefen 2005: 214–215). Typically, the question cannot be settled without analyzing a very substantial fragment of the phonology of the language in question, including both morphosyntax–phonology and phonology–phonetics interactions (e.g. BermúdezOtero and Luís 2009). In our case, phonological variation and phonetic gradience in English provide strong evidence against the prosodifications shown in (14). Let us first consider variation. Hayes (2000: 98) shows that, in American English, the application frequency of /l/-darkening (chapter 31: lateral consonants) follows the cline in (16): (16)

higher frequency of [û] heal „ > heal it

>

lower frequency of [û] heal-ing > Healey

On the basis of a comprehensive survey of English function words, however, Selkirk (1996: 204–206) shows that combinations of a verb and a weak object pronoun like heal it undergo affixal cliticization (chapter 84: clitics):

Cyclicity (17)

9

w′ w°

q

heal

it

Therefore, if one adopts the approach to English word-level suffixes shown in (14), heal-ing will end up being prosodified in the same way as heal it, and so prosody will be unable to explain the fact that /l/-darkening applies with greater frequency in the latter than in the former. One would then have to fall back on a procedural (cyclic) explanation: see §9. The argument from variable /l/-darkening suggests that the prosodification shown in (14) is descriptively insufficient (though cf. Raffelsiefen 2005: 253–256); the evidence of gradient durational effects confirms that it is incorrect. It is a wellknown fact that, in English, each of the members of a transparent compound forms a prosodic word by itself: (18)

w′ w°



’radio ‘station Given this fact, the approach to word-level suffixes outlined in (14) predicts that stems in word-level suffixal constructions will display the same patterns of gradient durational adjustment (resistance to polysyllabic shortening; liability to pre-boundary lengthening) as the first members of transparent compounds, since both occur in the environment w′[ w°[ __ ] . . .]. This prediction proves incorrect. In an experiment with nonce words, Sproat and Fujimura (1993) found no durational effects of stem-level suffixation (e.g. beel-ic) or word-level suffixation (e.g. beel-ing) when compared with monomorphemic controls (e.g. Beelik), whereas the first members of compounds (e.g. beel equator) were consistently lengthened; see Sproat (1993: 178). A more recent study of Scottish English has detected a very small effect of word-level suffixation: the phonetic realization of the string /Pe(z/ appears to be slightly shorter in raisin [’Pe(zè] than in rais-ing [’Pe(z>I] (Sugahara and Turk 2009). Nonetheless, this effect falls far below that of compounding: it is not statistically significant at “normal” speech rates (Sugahara and Turk 2009: 496); it manifests itself as a 6.6 percent difference (mean of 23 msecs) at “slow” speech rates; and it reaches only 9.6 percent (mean of 42 msecs) at “extra-slow” speech rates. Pace Sugahara and Turk (2009: 488), these findings are best understood as an effect of footing, rather than of recursive prosodic-word structure (chapter 40: the foot):5 5

In (19a), Noun Extrametricality (Hayes 1982: 240) is implemented through the exclusion of the final syllable of the noun stem from the first foot projection (R°) and through its attachment under a second foot projection (R′) (chapter 43: extrametricality and non-finality).

10 (19)

Ricardo Bermúdez-Otero a.

raisin

b.

raise

c.

rais-ing

w w

R′ R° q[[

q

Pe(



but

w

R

R

q[[

q[[

q

Pe(

z>I

Pe(

disyllabic foot



monosyllabic foot

Indeed, the idea that stray syllables affiliated to word-level suffixes attach directly to w, as in (19c), instead of being footed, makes straightforward sense of the fact that, unless autostressed, word-level suffixes are stress-neutral. The diagnostics that we have applied so far can be used to demarcate representational morphosyntactic conditioning from procedural morphosyntactic conditioning regardless of one’s particular theory of the latter. However, if one commits to a stratal-cyclic analysis of procedural morphosyntactic conditioning, then further demarcation criteria become available. One such criterion is cyclic locality: prosodic structure assigned in an early cycle can persist, and continue to affect the application of phonological processes, throughout later cycles; in contrast, the morphosyntactic structure visible during a phonological cycle ceases to be accessible in the next cycle (by “Bracket Erasure”: see e.g. Orgun and Inkelas 2002: 116). Cyclic locality entails, for example, that the contrast between American English ‘capi[7]a’listic and ‘mili[t]a’ristic must be mediated by prosody, as /t/-flapping is demonstrably phrase-level (see (36b) below) and so cannot access the internal morphological structure of words: see e.g. Davis (2005) and Bermúdez-Otero and McMahon (2006: 403–404); cf. Steriade (2000).

5

Cyclicity vs. OO-correspondence

Whilst phonologists generally agree that both representational and procedural morphosyntactic conditioning effects exist, as we saw in §1 and §4, there is currently no consensus on the best way to analyze procedural morphosyntactic conditioning. Within OT, the most popular alternative to the cycle is transderivational correspondence (e.g. Kenstowicz 1996; Benua 1997; Kager 1999; etc.). This theory claims that morphosyntactically induced misapplication arises when high-ranking OO-identity constraints cause a transparently derived surface property of a given expression (the “surface base”) to be transmitted to the surface representation of some morphosyntactically related expression, where its presence is opaque. Thus, the underapplication of Belfast dentalization in fatt-er [’fatHP] (§2) would be analyzed as follows:

Cyclicity (20)

Aword

11

Aword

Astem

Astem





/fat/

/fat/ IO-Faith

[fat]

OO-Ident

transparent non-application

afx

/HP/ IO-Faith [’fatHP]

underapplication

The implementation of this solution poses a number of non-trivial technical challenges, such as motivating the selection of the surface base and preventing the satisfaction of OO-identity by means of overapplication in the base (i.e. transparent /fat-HP/ → *[’fa}HP] leading to opaque /fat/ → *[fa}]); I return to these issues in §9 below. At this point, however, I should like to compare the core predictions of cyclicity and OO-correspondence. The comparison is in fact easy, because the two theories share a fundamental assumption: (21)

Ultimate transparency If a phonological generalization p misapplies in the surface representation s of some linguistic expression, then p must apply transparently in some other representation r, with which s is in direct or indirect correspondence.

The theory of the cycle predicts that p will apply transparently in some cyclic domain defined by some morphosyntactic constituent of the expression: the output of this cycle is connected with the surface representation by relationships of input–output faithfulness. In contrast, OO-correspondence predicts that p will apply transparently in the surface representation of some appropriately related linguistic expression; the two surface representations are linked to each other by means of transderivational correspondence. In §6 to §8 I adduce empirical evidence supporting the first prediction and challenging the second.

6

Phonologically masked bases I: Quito Spanish /s/-voicing

Spanish has a voiceless alveolar fricative phoneme /s/. In the dialect spoken in Quito (Robinson 1979; Lipski 1989), /s/ is realized faithfully in the onset (22a), but displays contextual laryngeal allophony in the coda: coda /s/ surfaces as [s] before voiceless segments and utterance-finally (22b), and becomes [z] when

Ricardo Bermúdez-Otero

12

followed by a voiced segment either in the same grammatical word or across a word boundary (22c). (22)

a.

b.

c.

gasa ganso da sueño el sueño rasco gas caro gas „ rasgo plasma gas blanco gas noble

/gasa/ /gaNso/ /da sueJo/ /el sueJo/ /rasko/ /gas ka7o/ /gas/ /rasgo/ /plasma/ /gas blaNko/ /gas noble/

[’ga.sa] [’gan.so] [‘{a.’swe.Jo] [el.’swe.Jo] [’ras.ko] [‘gas.’ka.7o] [gas „] [’raz.Øo] [’plaz.ma] [‘gaz.’êlaI.ko] [‘gaz.’no.êle]

‘gauze’ ‘gander’ ‘makes one sleepy’ ‘the dream’ ‘I scratch’ ‘expensive gas’ ‘gas’ ‘feature’ ‘plasma’ ‘white gas’ ‘noble gas’

Coda /s/ undergoes voicing not only before voiced obstruents, but also before sonorants: e.g. plasma [’plaz.ma], gas noble [‘gaz.’no.êle]. For our purposes, the crucial fact is that voicing overapplies to word-final prevocalic /s/: (23)

a. cf. b. cf.

gas acre gasa has ido ha sido

/gas ak7e/ /gasa/ /as ido/ /a sido/

[‘ga.’za.k7e] [’ga.sa] [a.’zi.Éo] [a.’si.Éo]

‘acrid gas’ ‘gauze’ ‘hast gone’ ‘hath been’

On the surface, expressions like gas acre [‘ga.’za.k7e] fail to meet the conditions for /s/-voicing: in gas acre, [z] surfaces in a pre-sonorant environment, but not in the coda, for Spanish has a phrase-level process of resyllabification that moves word-final prevocalic consonants into the onset.6 In this position, therefore, the transparent realization of /s/ would be voiceless: cf. gasa [’ga.sa]. In a stratal-cyclic framework, the laryngeal allophony of Quito Spanish /s/ submits to the following analysis. First, the stem-level phonology allows output [s], but forbids output [z]: in an optimality-theoretic implementation, therefore, a hypothetical underlying /z/ present in the rich base would be unfaithfully mapped onto [s] in the stem-level output (see Bermúdez-Otero 2007c for an illustration of this strategy). At the word level, in turn, [s] remains unchanged if syllabified in the onset; in the coda, however, [s] loses its laryngeal node, becoming laryngeally unspecified [S]: see (24a). Finally, at the phrase level, input [s] is realized faithfully, whereas delaryngealized [S] acquires voice specifications either by leftward autosegmental spreading from an immediately following obstruent or by default: on the assumption that sonorants are not redundantly specified as [+voice] (chapter 8: sonorants), we can just say that [S] becomes voiced before sonorants in order to satisfy a positional constraint designating [+voice] as the unmarked feature in this particular context, whereas utterance-final [S] is assigned the context-free default specification [−voice];

6

This is confirmed, inter alia, by the fact that [7] undergoes optional emphatic trilling in canonical coda positions, but not word-finally before a vowel (Harris 1983: 70–71): e.g. [ma7] ~ [mar] ‘sea’, [‘ma7.’ne.Ø7o] ~ [‘mar.’ne.Ø7o] ‘Black Sea’; but [‘ma.7e.’xe.o] ‘Aegean Sea’, not *[‘ma.re.’xe.o].

Cyclicity

13

see (24b).7 In this analysis, underlying /s/ becomes vulnerable to voicing if it finds itself in the coda in a word-level cycle and so loses its laryngeal node; the generalization is rendered opaque by phrase-level resyllabification.8 (24)

PL

a. b.

[

WL (coda delaryngealization) PL (defaults)

[gasa]] [ga.sa]

WL

PL

[

[gas] WL[ak7e]] [gaS] [a.k7e]

WL

[ga.sa]

PL

[ WL[gas]] [gaS]

[ga.za.k7e]

[gas]

As stated, the facts of Quito Spanish /s/-voicing pose a challenge to OOcorrespondence (Colina 2006). This theory can explain the opaque voicing of onset /s/ in gas acre [‘ga.’za.k7e] only by reference to a surface base containing a transparently voiced correspondent [z] in the coda. Many such expressions are found: e.g. gas blanco [‘gaz.’êlaI.ko], gas noble [‘gaz.’no.êle]. The problem, however, is that none of them bears a non-arbitrary morphosyntactic relationship to gas acre [‘ga.’za.k7e], and so none can straightforwardly qualify as its base. If surface bases are selected by the containment criterion (Benua 1997: 28–29; Kager 1999: 215ff.), the only plausible option is the citation form gas [gas], which consists of a subset of the morphs of gas acre; but this exhibits [s]. In contrast, gas noble [‘gaz.’no.êle], which contains the desired [z], has no better claim to being the base of gas acre than, say, gas caro [‘gas.’ka.7o], again showing [s]. Within inflectional paradigms, some versions of OO-correspondence allow surface bases to be designated by arbitrary stipulation (e.g. Kenstowicz 1996: 387, 391), but this option is of no avail here, since expressions like gas, gas acre, and gas noble do not belong in an inflectional paradigm; see the discussion of surface base selection in chapter 83: paradigms. (25)

[ [gas]]

[ [gas] A[ak7e]]

NP N

NP N

IO-Faith

IO-Faith [gas] [z] absent

7

× OO-Ident

[ga.za.k7e] [z] opaque

[ [gas] A[noble]]

NP N

IO-Faith [gaz.no. le] [z] transparent, but not in a legitimate base

Analyzing the pre-sonorant voicing of [S] as driven by a position-sensitive default (chapter 69: final devoicing and final laryngeal neutralization; chapter 46: positional effects in consonant clusters), rather than by feature spreading from a following sonorant redundantly specified as [+voice], allows for a closer fit between this categorical phonological operation and the gradient phonetic processes of passive voicing on which it is grounded and from which it diachronically emerges (see below): passive voicing in environments such as that occupied by the /s/ in plasma involves lengthening of the voicing tail from the preceding vowel, rather than anticipation of glottal pulsing for the following sonorant (Jansen 2004). 8 This cyclic derivation accords partly with Mascaró’s (1987) reduction-and-spreading model of laryngeal phenomena, though cf. note 7. Bermúdez-Otero (2007c: §31–§34) proposes a similar account for the voicing of word-final prevocalic sibilants in Catalan (cf. Wheeler 2005: 162–164). See also Rubach (1996: 72, 82–85) on the alleged voicing of all word-final obstruents before sonorants (including vowels) in Cracow Polish, but cf. Strycharczuk (2010).

14

Ricardo Bermúdez-Otero

However, Colina (2009: 8–10) shows that OO-correspondence can avoid this problem by shifting part of the burden of description onto the phonetics. Colina suggests that, in Quito Spanish, delaryngealized coda [S] does not acquire categorical voice specifications during the phonological derivation either by autosegmental spreading or by default feature insertion; she claims, rather, that expressions like gas acre and gas noble merely display the effects of gradient passive voicing in phonetic implementation (Keating 1988). If Colina is right, then the surface phonological representation of gas acre is [‘ga.’Sa.k7e], with overapplication of delaryngealization in the onset; but this can be analyzed without difficulty as involving OO-correspondence with the citation form ga[S]: cf. (25) and (26). (26)

[ [gas]]

[ [gas] A[ak7e]]

NP N

NP N

IO-Faith [gaS]

OO-Ident

IO-Faith [ga.Sa.k7e]

Is Colina’s re-analysis correct? This question cannot be settled on a priori grounds: in particular, the fact that the environment for /s/-voicing in gas acre straddles a word boundary does not by itself warrant the conclusion that the process must be gradient rather than categorical (chapter 89: gradience and categoricality in phonological theory). Electropalatographic studies have admittedly shown that many instances of assimilatory external sandhi involve gradient co-articulation (i.e. reduction, overlap, and blending of articulatory gestures), rather than categorical assimilation (i.e. delinking and spreading of discrete phonological features): see e.g. Barry (1985), Wright and Kerswill (1989), Nolan (1992), Hardcastle (1995), and Zsiga (1995). However, there is also compelling evidence for the existence of categorical external sandhi. Holst and Nolan (1995) and Nolan et al. (1996) argue persuasively that at least some instances of /s#œ/ → [œ)] sandhi in British English do involve discrete feature delinking and spreading; the likelihood of categorical assimilation increases in the absence of the major prosodic boundary associated with a break between clauses. Ladd and Scobbie (2003) report that, in Sardinian, total anticipatory assimilation between singletons across word boundaries yields long consonants that are phonetically equivalent to underlying geminates (chapter 37: geminates). Ellis and Hardcastle (2002) examined inter- and intra-speaker variation in fast-speech /n#k/ sandhi in British English, and found no fewer than four different idiolectal strategies (chapter 92: variability): (i) absence of accommodation between the two segments (in two out of ten subjects); (ii) gradient co-articulation (in two out of ten subjects); (iii) categorical assimilation (in four out of ten subjects); (iv) variation between categorical assimilation and absence of accommodation, with avoidance of coarticulation (in two out of ten subjects). Crucially, type (iv) speakers did not produce residual coronal gestures, but realized the nasal either without any tongue-tip raising at all or with full mid-sagittal linguo-alveolar closure; this behavior is inconsistent with gradient gestural reduction, but reflects the variable application of discrete feature delinking and spreading across word boundaries. Kochetov and Pouplier’s (2008: 414) Korean subjects exhibited the same behavior in /t#p/ and /t#k/ sandhi. These findings clearly indicate that a process of external sandhi may

Cyclicity

15

apply gradiently for a speaker in some tokens, and still be categorical for other speakers, or for the same speaker in other tokens. It is therefore unsafe to relegate external sandhi to the phonetics without further argument. Although instrumental evidence is lacking, Robinson (1979) and Lipski (1989) provide strong indirect evidence that, in many instances, word-final prevocalic /s/ does undergo categorical voicing in Quito Spanish. First, the process applies regularly in all registers independently of speech rate: it “may be frequently observed even in slow, disconnected or interrupted speech” (Lipski 1989: 53–54). Secondly, native speakers of the dialect rely on the difference between [s] and [z] to discriminate between minimal pairs like (23b): ha sido [a.’si.Éo] ‘hath been’ vs. has ido [a.’zi.Éo] ‘hast gone’ (Robinson 1979: 136, 140–141; Lipski 1989: 55). Thirdly, word-final /s/ voicing can be used as a turn-holding device before hesitation pauses (Robinson 1979: 141). Robinson records the following example, where he describes the realization of the /s/ of es as “strongly voiced”: (27)

es . . . tres . . . [ez(( Ït7es(] ‘it’s . . . uh . . . three . . .’

(Robinson 1979: 141)

It appears that the speaker intentionally produced a sandhi form of es to signal the fact that he or she had not reached the end of the utterance. Lipski (1989: 54) adduces further cases. For these reasons, Bradley and Delforge (2006: 39) conclude that the voicing of word-final prevocalic /s/ in Quito Spanish “reflects a phonological [+voice] specification,” as opposed to “gradient interpolation of glottal activity through the constriction period of phonetically targetless [S].” This conclusion is incompatible with Colina’s (2009) answer to the questions that Quito Spanish /s/-voicing raises for OO-correspondence. The cyclic derivations proposed in (24) can moreover be seen as the synchronic outcome of a simple series of commonplace diachronic innovations (§3). We may assume that, in an initial round of phonologization and stabilization, the lack of robust phonetic cues for laryngeal features in codas was reinterpreted as phraselevel coda delaryngealization. Analogical change then caused this process of coda delaryngealization to percolate up to the word level. Finally, a second round of phonologization and stabilization caused the gradient passive voicing of delaryngealized sibilants in pre-sonorant contexts to be re-analyzed as a categorical phrase-level process of context-specific default feature insertion.

7

Phonologically masked bases II: English linking and intrusive r

Quito Spanish /s/-voicing is not an isolated case: it is not unusual for wordfinal prevocalic consonants to exhibit properties that are opaque in prevocalic position, but which nonetheless fail to match those of utterance-final consonants in citation forms. Linking and intrusive r in non-rhotic dialects of English provides another instance of this phenomenon. Again, a stratal-cyclic approach to the morphosyntax–phonology interface can easily deal with the facts, whereas OO-correspondence must shift some of the descriptive burden to a different component of the grammar: in this case, the theory of representations.

Ricardo Bermúdez-Otero

16

Most non-rhotic dialects of English (Wells 1982: 75–76, 218ff.) allow [P] in onset positions, such as word-initially or word-medially before a stressed or unstressed vowel (28a),9 but forbid [P] in coda positions, such as word-medially or word-finally before a consonant or pause (28b). This is formalized as (28c). (28)

a.

b.

c.

rack raccoon carouse caramel cart car „ the car came *Coda[P] *Coda

[Pæk] [PH.’khu(n] [khH.’Pa(Áz] [’khæ.PH.‘meû] [khA(t] [khA(„] [ÏH.‘khA(.’khe(>m]

*[khA(Pt] *[khA(P„] *[ÏH.‘khA(P.’khe(>m]

P Crucially, most non-rhotic dialects tolerate [P] word-finally before a vowel, whether the consonant was present etymologically (“linking r”) or not (“intrusive r”). (29)

a. b.

the car is new the spa is new

[ÏH.‘khA(.o>z.’nju(] [ÏH.‘spA(.o>z.’nju(]

linking r intrusive r

The fact that linking and intrusive r escapes the phonotactic ban in (28c) indicates that it surfaces in the onset (chapter 55: onsets). In English, however, word-final prevocalic r (including linking and intrusive r in non-rhotic dialects) exhibits lenition in comparison with canonical onset r; the transcriptions above reflected this phenomenon by distinguishing between unlenited [P] in (28a) and lenited [o] in (29). Compared with word-initial [P], word-final prevocalic [o] displays the following properties: (i) shorter duration (Cruttenden 2001: 289; Tuinman et al. 2007: 1905–1906); (ii) earlier timing of the tongue-root gesture (Campbell et al. 2010: 62); (iii) smaller magnitude of the lip gesture (Wells 1990; Campbell et al. 2010: 63–64); (iv) smaller magnitude of the tongue-tip gesture (Gick 1999: 47–49; Campbell et al. 2010: 63–64); (v) greater magnitude of the tongue-root gesture (Campbell et al. 2010: 63–64); (vi) greater intensity (McCarthy 1993: 179; Tuinman et al. 2007: 1905–1906); and (vii) higher F3 (Hay and Maclagan 2010). Thus dialects with intrusive r afford minimal pairs such as the following (McCarthy 1993: 179): (30)

a. b.

saw eels saw reels

[sD(.oi(ûz] [sD(.Pi(ûz]

If, as I have suggested, linking and intrusive [o] escapes the phonotactic restriction in (28c) because it surfaces in onset position, just like word-initial [P], then the reasons why the former undergoes lenition and the latter does not are not apparent on the surface: thus r-lenition overapplies (chapter 74: rule ordering). 9

Harris (2006) reports that in some Southern US dialects [P] is banned outside foot-initial onsets: e.g. ’ve?y, ’she?iff, ‘Ca?o’lina.

Cyclicity

17

However, this opaque pattern is easy to describe in stratal-cyclic terms (Kiparsky 1979: 437ff.; McCarthy 1991: 203–204). Intrusive r is inserted at the word level after w-final non-high vowels in order to satisfy the constraint FinalC, i.e. *V]w (McCarthy 1991: 203, 1993: 176), which outranks *Coda[P] at the word level.10 In the same cycle, the inserted r is targeted by coda lenition and undergoes a corresponding featural change: it acquires the feature [lax], say. At the phrase level, however, the relative ranking of FinalC and *Coda[P] is reversed: in consequence, word-final r undergoes deletion in preconsonantal and prepausal environments, but in prevocalic position it escapes into the onset, carrying with it the feature [lax]. (31)

a. Word level: FinalC >> *Coda[P] saw

Reece

s D P

P i s

[lax] b. Phrase level: *Coda[P] >> FinalC saw seas

s D

s i z

[lax] [sD(.si(z]

10

saw ease

saw Reece

s D

s D

i z

[lax] [sD(. i(z]

i s

[lax] [sD(.Pi(s]

The idea that r-intrusion is driven by FinalC receives independent support from the absence of intrusive r after reduced function words (which do not project an w-node) in the non-rhotic dialect of Eastern Massachusetts (McCarthy 1991: 200ff., 1993: 173ff.). In the case of words ending with high vowels or closing diphthongs, we assume that r-intrusion is blocked by the final offglide, which suffices to satisfy FinalC; alternatively, FinalC can be replaced with *V[−hi]]w. If r-intrusion applies w-finally, then stem-level applications may be needed to generate forms like draw[P]-ing, as word-level suffixes like -ing are incorporated into the prosodic word of the stem, and not adjoined: i.e. w[draw] → w[drawing], not w[draw] → *w′[ w°[draw]ing]; see §4 above. If so, we may assume that listed allomorphy pre-empts stem-level r-intrusion in cases like algebr[H] ~ algebr[e>]ic (McCarthy 1991: 196).

Ricardo Bermúdez-Otero

18

As in the case of Quito Spanish /s/-voicing (§6), this synchronic system can easily be understood as the product of a straightforward series of ordinary phonological changes: (32) a.

level

processes

Initial stage WL PL

manner

manner is

Anna

[mæ.nHP] [mæ.nHP][>z] [æ.nH] [mæ.nHP] [mæ.nH.P>z] [æ.nH]

Anna is [æ.nH][>z] [æ.nH.>z]

b. Phonologization and stabilization (I): Lenition of [P] in codas enters the phrase level WL [mæ.nHP] [mæ.nHP][>z] [æ.nH] [æ.nH][>z] PL lenition [mæ.nHo] [mæ.nH.P>z] [æ.nH] [æ.nH.>z] c. Analogical input restructuring (I): Lenition of [P] in codas climbs up to the word level WL lenition [mæ.nHo] [mæ.nHo][>z] [æ.nH] [æ.nH][>z] PL lenition (vacuous) [mæ.nHo] [mæ.nH.o>z] [æ.nH] [æ.nH.>z] d. Phonologization and stabilization (II): Deletion of [o] in codas enters the phrase level WL lenition [mæ.nHo] [mæ.nHo][>z] [æ.nH] [æ.nH][>z] PL deletion [mæ.nH] [mæ.nH.o>z] [æ.nH] [æ.nH.>z] e.

Analogical input restructuring (II): Analogical extension of word-level final [o] WL insertion, lenition [mæ.nHo] [mæ.nHo][>z] [æ.nHo] [æ.nHo][>z] PL deletion [mæ.nH] [mæ.nH.o>z] [æ.nH] [æ.nH.o>z]

The path for (32e) was smoothed by a general process of schwa apocope (chapter 26: schwa) in Middle English (Minkova 1991). As a result of this, Early Modern English had relatively few words like Anna, with an underlying final /H/. Thus, the rise of phrase-level r-deletion in codas brought about a situation in which most tokens of preconsonantal or prepausal [H] alternated with prevocalic [Ho]. In these circumstances, learners re-analyzed phrase-level representations like [æ.nH„] as derived by r-deletion from word-level [æ.nHo].11 (33)

WL PL

[mæ.nH ] [mæ.nH ] [mæ.nH. >z] →

[æ.nH ] [æ.nH ]

[æ.nH. >z]

In turn, this analogical extension of final [o] across word-level outputs eventually resulted in a word-level ban of w-final [H], enforced where necessary by [o]-insertion. This stratal-cyclic account of the diachronic rise and synchronic operation of r-intrusion avoids many of the pitfalls incurred by its best-known competitors. Rule-inversion scenarios resulting in a phrase-level hiatus-breaking rule of [P]-epenthesis in onsets (e.g. Vennemann 1972: 216; McMahon 2000: ch. 6; BermúdezOtero and Hogg 2003: 99ff.) do not account for the lenited realization of intrusive r (chapter 66: lenition). In turn, restructuring scenarios in which /H/ is replaced 11

In this view, the analogical extension of final [o] across word-level outputs can only have begun after variable r-deletion entered the phrase level, but it may well have been in progress before the application rate of r-deletion approached 100 percent (see Hay and Sudbury 2005).

Cyclicity

19

by /HP/ in underlying representations (e.g. Donegan 1993) fail to account for the regular and productive nature of r-intrusion (see the references in Heselwood 2009: 86). A regular process of [P]-epenthesis in w-final position at the word level incurs neither problem. Furthermore, the diachronic scenario outlined in (32) accords perfectly with the normal life cycle of phonological processes (§3). Both r-lenition ([P] → [o]) and r-deletion ([o] → Ø) first entered the categorical phonology from below, as phrase-level processes applying across the board ((32b) and (32d)). The analogical change causing lenition to climb up from the phrase to the word level (32c) proceeds by input restructuring: the lenited [o] in surface PL[mæ.nHo] is re-analyzed as present in the output of the word level.12 Moreover, r-lenition entered the grammar earlier than r-deletion (as must be the case, since the former is a precondition for the latter), and so has been exposed to analogical pressures for longer: it is therefore unsurprising that r-lenition should be more advanced in its life cycle than r-deletion, the former having reached the word level, the latter remaining at the phrase level. In this light, the synchronic markedness reversal illustrated in (31) can be seen as arising from a clash between disparate diachronic forces: the high ranking of *Coda[P] at the phrase level reflects the phonologization of phonetic effects; in contrast, the high ranking of FinalC at the word level reflects the analogical restructuring of phrase-level inputs. If so, McCarthy’s (1993: 181–182) complaint of arbitrariness against his own previous stratal analysis (McCarthy 1991: 203– 204) arguably betrays a failure to strike a proper balance between synchronic and diachronic explanation (cf. Bermúdez-Otero 1999: 98–107). If this account is correct, then English linking and intrusive r raises difficulties for OO-correspondence. The segment’s lenited realization is opaque because there is no r-lenition in onsets. To explain the facts, OO-correspondence would need to find a surface base in which [o] occurred transparently, i.e. in the coda. Yet this is impossible, as the defining property of non-rhotic dialects is precisely that they do not allow r to surface outside the onset. (34)

saw

saw eels

[ [sD((P)]]

[ [sD((P)] N[i(lz]]

VP V

VP V

IO-Faith [sD(] absent

IO-Faith

× OO-Ident

saw reels [ [sD((P)] N[Pi(lz]]

VP V

IO-Faith

[sD(. i(ûz]

[sD(.Pi(ûz]

opaque

absent

Yet, as in the case of Quito Spanish /s/-voicing, the proponents of OOcorrespondence may deflect this argument by putting forward a transparent analysis of linking and intrusive [o]. McCarthy (1993: 178–181) does so by invoking ambisyllabicity (Kahn 1976). In this approach, linking and intrusive [o] is 12

This progression from lower to higher levels correctly predicts that, diachronically, word-internal r-intrusion, as in draw-ing (see note 10), starts later than r-intrusion at word boundaries, as in draw in (see Hay and Sudbury 2005: 816–818, 820).

Ricardo Bermúdez-Otero

20

permitted to surface because it has an onset attachment, but it is lenited because it has a link to the coda too: cf. (31b) and (35). (35)

saweasesaw ease Reece saw w w

saw Reece w w

R

R

R

R

q

q

q

q

s D o i z

s D P i s

In this sense, ambisyllabicity enables McCarthy (1993: 178–181) to conflate two stages of a cyclic derivation into a single representation – at the cost of adopting a less restrictive theory of syllable structure. However, ambisyllabicity incurs problems of its own, and has been argued to provide an inconsistent account of English segmental allophony (e.g. Kiparsky 1979: 437ff.; Jensen 2000; Harris 2003). Bermúdez-Otero (2007a: §14–§24) notes two ambisyllabicity paradoxes. Since Kahn (1976), the standard diagnostic for ambisyllabification in English has been /t/-flapping. In most North American dialects, /t/ undergoes flapping in two environments: foot-medially between a vowel or /P/ and another vowel (36a), and word-finally between a vowel or /P/ and another vowel (36b). (36)

a. [ . . . {V, ɹ} __ V . . . ]Ft        e.g. [ɾ] in Patty, party, parity
b. {V, ɹ} __ ]GWord V                   e.g. [ɾ] in pat it, pat Eve, at it, at ease

Since the segmental conditions in these two environments are exactly identical, formulating two separate rules of flapping would miss a generalization. Accordingly, Kahn proposed that the two environments could be unified prosodically: in (36a) /t/ becomes ambisyllabic by Coda Capture, and in (36b) /t/ becomes ambisyllabic by Onset Capture. (37)

a. Coda Capture: in Patty [pæ.ti], foot-medial /t/ acquires a second link to the coda of the preceding syllable
b. Onset Capture: in pat Eve [pæt.iv], word-final /t/ acquires a second link to the onset of the following syllable


Thus Kahn's strategy was to use syllabification to channel the allophonic effects of both stress and word boundaries. Yet this solution does not generalize to other English consonants. Consider, for example, /l/-darkening in the Midwestern American dialect studied by Sproat and Fujimura (1993). This dialect exhibits Kahn's canonical pattern of /t/-flapping. By implication, /l/ too should display the same allophone, either clear [l] or dark [ɫ], in foot-medial intervocalic position (e.g. Bee/l/ik) and in word-final intervocalic position (e.g. Bee/l/ equates): /l/ should be ambisyllabic in the former by Coda Capture and in the latter by Onset Capture. As Sproat and Fujimura (1993: 308) themselves note in passing, however, this prediction proves false: X-ray microbeam cinematography revealed that their subjects produced clear [l], with the coronal gesture phased before the dorsal gesture, in Bee/l/ik, whereas they produced dark [ɫ], with the dorsal gesture phased before the coronal gesture, in Bee/l/ equates. (38)

form            /l/ allegedly ambisyllabic by . . .    /l/ allophone    the coronal gesture . . .
Beelik          Coda Capture                           clear [l]        leads
Beel equates    Onset Capture                          dark [ɫ]         lags

In this dialect, therefore, Kahn's ambisyllabification rules work for /t/, but not for /l/. This is Bermúdez-Otero's (2007a) first ambisyllabicity paradox. A second paradox arises from a conflict between /t/-flapping and pre-fortis clipping (Bermúdez-Otero 2007a: §21–§24), and Kiparsky (1979: 440) observes a third paradox, further discussed by Nespor and Vogel (1986: 93–94). By casting doubt on the existence of ambisyllabicity, these paradoxes challenge McCarthy's (1993) transparent re-analysis of linking and intrusive r in (35). In contrast, the English dialect described by Sproat and Fujimura poses no difficulties for a stratal-cyclic model with onset-maximal stem-level syllabification and resyllabification of prevocalic consonants in word-level and phrase-level cycles (Bermúdez-Otero 2007a: §18–§20). The right results follow from the operation of two word-level processes: one laxes /t/ in non-foot-initial position (Kiparsky 1979: 437ff.; Jensen 2000; Harris 2003); the other darkens /l/ in the coda. A full typology of English dialects supports the need to allow individual allophonic processes to target either weak positions in the syllable (i.e. the coda) or weak positions in the foot (i.e. in a trochaic system, anywhere outside foot-initial onsets). Notably, an innovative pattern of foot-based /l/-darkening (e.g. ye[ɫ]ow, vi[ɫ]age) is attested alongside the conservative syllable-based pattern: see Olive et al. (1993: 366) and Hayes (2000: 95–96) for American dialects, and Carter and Local (2007) for British dialects.13

In sum, English linking and intrusive r raises the same problem for OO-correspondence as Quito Spanish /s/-voicing: both are patterns of external sandhi in which word-final prevocalic consonants display opaquely derived properties that are absent from citation forms. In both cases, OO-correspondence declines responsibility for the facts, and shifts the burden of explanation either to phonetic implementation or to the theory of representations.

13 Similarly, alongside the conservative pattern of syllable-based r-deletion in non-rhotic dialects, an innovative foot-based pattern has been detected in the south of the USA: see note 9 above.


8 Non-surfacing bases in non-canonical paradigms: Albanian stress

In the examples of morphosyntactically induced misapplication discussed in §6 and §7, the surface bases required by OO-correspondence are unavailable for phonological reasons: a phonological process applies normally in a non-final cycle C, but the output of C never surfaces transparently, because it is always altered by the operation of subsequent phonological processes in later cycles. However, the output of C may also fail to surface unchanged, for purely morphological reasons. This effect stands out with particular clarity in non-canonical inflectional paradigms, i.e. paradigms exhibiting phenomena such as deponency, defectiveness, suppletion, or heteroclisis (Corbett 2007). In such circumstances, the predictions of cyclicity and OO-correspondence diverge dramatically. Let two words a and b have identical syntagmatic structures in all relevant respects, but belong to paradigms with different sets of cells: one canonical, the other non-canonical. The theory of the cycle predicts that, in the phonology, a and b must exhibit the same effects of procedural morphosyntactic conditioning (§1), since the course of cyclic derivations depends on syntagmatic structure alone (Bobaljik 2008: 32; Bailyn and Nevins 2008: 242). In contrast, OO-correspondence predicts the opposite, as transderivational identity effects depend on the availability of surface bases. On the basis of evidence from Albanian, Trommer (2006, 2009) argues that the first prediction is true, the second false. In this section I briefly summarize Trommer's argument, omitting his detailed motivation of the morphological segmentations underpinning the analysis. Trommer (2004) found that Albanian polysyllabic words bearing no overt inflection display final stress in either of two cases: (i) if the final syllable is headed by a non-mid vowel (i.e. by /i/, /u/, or /a/), as in (39a) and (39b), or (ii) if the final syllable is both headed by a full vowel (i.e. by a vowel other than /ə/) and closed by a consonant, as in (39b) and (39c). Otherwise, stress falls on the penultima, as in (39d) and (39e). (39)

a. [ɟu.hə.ˈsi]      ‘linguistics’
   [a.kə.ˈku]       ‘here and there’
   [ɾi.ˈdʒa]        ‘prayer’
b. [aɾ.ˈmik]        ‘enemy’
   [tʃi.ˈfut]       ‘gipsy’
   [ɾe.zul.ˈtat]    ‘result’
c. [a.ˈdet]         ‘habit’
   [pa.ˈtok]        ‘gander’
d. [ˈho.le]         ‘swing’
   [ˈba.bo]         ‘midwife’
   [ˈhə.nə]         ‘moon’
e. [ˈa.fəɾ]         ‘near’
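For concreteness, Trommer's generalization can be restated procedurally. The following minimal Python sketch is illustrative only (it operates on hand-syllabified input and is not Trommer's own formalism):

def stress_position(syllables):
    # Each syllable is a (nucleus, closed) pair. Final stress iff the final
    # nucleus is a non-mid full vowel (/i u a/), or the final syllable is
    # closed and headed by a full vowel; otherwise penultimate stress.
    full = set("aeiou")                    # full vowels; schwa is excluded
    nucleus, closed = syllables[-1]
    if nucleus in {"i", "u", "a"}:         # case (i)
        return len(syllables) - 1
    if closed and nucleus in full:         # case (ii)
        return len(syllables) - 1
    return len(syllables) - 2

# [ɟu.hə.ˈsi] 'linguistics': final /i/ -> final stress
assert stress_position([("u", False), ("ə", False), ("i", False)]) == 2
# [a.ˈdet] 'habit': closed final syllable with full /e/ -> final stress
assert stress_position([("a", False), ("e", True)]) == 1
# [ˈhə.nə] 'moon' and [ˈa.fəɾ] 'near': schwa-headed final syllables -> penult
assert stress_position([("ə", False), ("ə", False)]) == 0
assert stress_position([("a", False), ("ə", True)]) == 0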

In word-forms containing overt inflectional markers, however, stress assignment often misapplies. Consider, for example, the present indicative of a verb with a canonical paradigm (Table 85.2). According to Trommer, metrical opacity arises as a consequence of the fact that the domain of stress assignment is the stem, not the word: stress is assigned transparently in stem-level cycles, but is rendered opaque at the word level by the addition of inflectional suffixes and by regular internal sandhi at the stem–suffix juncture.

Table 85.2 The present indicative of the Albanian verb formoj ‘form’ (nact denotes ‘non-active’)

                UR                                      SR                 opaque stress?
act  sg  1      GWord[ Stem[foɾmo-j] ]                  [foɾ.ˈmoj]         no
         2      GWord[ Stem[foɾmo-n] ]                  [foɾ.ˈmon]         no
         3      GWord[ Stem[foɾmo-n] ]                  [foɾ.ˈmon]         no
     pl  1      GWord[ Stem[foɾmo-j] Affix[mə] ]        [foɾ.ˈmoj.mə]      no
         2      GWord[ Stem[foɾmo-n] Affix[ni] ]        [foɾ.ˈmo.ni]       yes: *[foɾ.mo.ˈni]
         3      GWord[ Stem[foɾmo-j] Affix[nə] ]        [foɾ.ˈmoj.nə]      no
nact sg  1      GWord[ Stem[foɾmo-j] Affix[he-m] ]      [foɾ.ˈmo.hem]      yes: *[foɾ.mo.ˈhem]
         2      GWord[ Stem[foɾmo-j] Affix[he-ʃ] ]      [foɾ.ˈmo.heʃ]      yes: *[foɾ.mo.ˈheʃ]
         3      GWord[ Stem[foɾmo-j] Affix[he-t] ]      [foɾ.ˈmo.het]      yes: *[foɾ.mo.ˈhet]
     pl  1      GWord[ Stem[foɾmo-j] Affix[he-mi] ]     [foɾ.ˈmo.he.mi]    yes: *[foɾ.mo.he.ˈmi]
         2      GWord[ Stem[foɾmo-j] Affix[he-ni] ]     [foɾ.ˈmo.he.ni]    yes: *[foɾ.mo.he.ˈni]
         3      GWord[ Stem[foɾmo-j] Affix[he-n] ]      [foɾ.ˈmo.hen]      yes: *[foɾ.mo.ˈhen]

(40)

a. Internal sandhi processes
      nn → n
      j → Ø / __ h

b. Sample derivations
                               WL[ SL[foɾmo-j] ]        WL[ SL[foɾmo-j] SL[he-m] ]
   SL (stress assignment)      [foɾ.ˈmoj]               [foɾ.ˈmoj] [hem]
   WL (internal sandhi)        —                        [foɾ.ˈmo.hem]
                               ‘form (act 1sg)’         ‘form (nact 1sg)’
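The interaction in (40) can be simulated mechanically. The following minimal Python sketch is purely illustrative (the stress stand-in simply marks the last stem vowel, which suffices for these forms; nothing here is Trommer's own formalism):

import re

def stem_level(stem):
    # Stand-in for stem-level stress assignment: mark the last vowel of the
    # stem as stressed (formoj -> form'oj), cf. the generalization in (39).
    last_vowel = [m.start() for m in re.finditer(r"[aeiou]", stem)][-1]
    return stem[:last_vowel] + "'" + stem[last_vowel:]

def word_level(parts):
    # Concatenate stem-level outputs and apply the internal sandhi of (40a);
    # stress is left wherever the stem level put it.
    form = "".join(parts)
    form = form.replace("nn", "n")     # nn -> n
    form = form.replace("jh", "h")     # j -> zero before h
    return form

print(word_level([stem_level("formoj")]))           # form'oj   (act 1sg)
print(word_level([stem_level("formoj"), "hem"]))    # form'ohem (nact 1sg): opaque
print(word_level([stem_level("formon"), "ni"]))     # form'oni  (act 2pl): opaque

The point of the sketch is that the stranded stress in form'ohem and form'oni falls out from derivational ordering alone, with no reference to other cells of the paradigm.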

Let us now turn to verbs with non-canonical paradigms. The verb pendohem ‘regret’, for example, exhibits deponency: it lacks a voice alternation, and its fixed lexical meaning is expressed by a series of non-active forms (Table 85.3). Crucially, the absence of non-active forms entails that the location of stress is opaque throughout the present indicative. This fits with the predictions of cyclicity: since the single series of forms of a deponent verb has the same syntagmatic structure as the non-active series of a canonical verb, both must display the same pattern of metrical opacity; compare (40b) and (41). (41)

                               WL[ SL[pendo-j] SL[he-m] ]
   SL (stress assignment)      [pen.ˈdoj] [hem]
   WL (internal sandhi)        [pen.ˈdo.hem]
                               ‘regret (1sg)’

In contrast, OO-correspondence seems unable to account for the misapplication of stress assignment in the present indicative forms of Albanian deponent verbs: there are simply no suitable surface bases with transparent stress.


Table 85.3 The present indicative of the Albanian verb pendohem ‘regret’

                UR                                      SR                 opaque stress?
act             —                                       —                  —
nact sg  1      GWord[ Stem[pendo-j] Affix[he-m] ]      [pen.ˈdo.hem]      yes: *[pen.do.ˈhem]
         2      GWord[ Stem[pendo-j] Affix[he-ʃ] ]      [pen.ˈdo.heʃ]      yes: *[pen.do.ˈheʃ]
         3      GWord[ Stem[pendo-j] Affix[he-t] ]      [pen.ˈdo.het]      yes: *[pen.do.ˈhet]
     pl  1      GWord[ Stem[pendo-j] Affix[he-mi] ]     [pen.ˈdo.he.mi]    yes: *[pen.do.he.ˈmi]
         2      GWord[ Stem[pendo-j] Affix[he-ni] ]     [pen.ˈdo.he.ni]    yes: *[pen.do.he.ˈni]
         3      GWord[ Stem[pendo-j] Affix[he-n] ]      [pen.ˈdo.hen]      yes: *[pen.do.ˈhen]

(42) a.   GWord[ Stem[foɾmo-j] ]        GWord[ Stem[foɾmo-j] Affix[he-m] ]
               |  IO-Faith                   |  IO-Faith
          [foɾ.ˈmoj]      OO-Ident      ☞ [foɾ.ˈmo.hem]
          transparent stress              opaque stress

     b.                                 GWord[ Stem[pendo-j] Affix[he-m] ]
                                             |  IO-Faith
          ??            × OO-Ident      ☞ [pen.ˈdo.hem]
                                          opaque stress

Thus Trommer’s analysis suggests that morphologically induced misapplication depends on syntagmatic structure, not on the contents of paradigms. In the case of Quito Spanish /s/-voicing, the advocates of OO-correspondence shifted the burden of explanation to phonetics (§6); in the case of English linking and intrusive r, to the theory of representations (§7). A similar escape maneuver in the case of Albanian stress might conceivably appeal to morphology, e.g. by claiming that stress assignment in Albanian verbs has been partly or wholly morphologized. Whatever the merits of such an argument, OO-correspondence will remain in an anomalous position until enough languages are found in which systematic patterns of morphologically induced phonological misapplication fail to hold in defective, deponent, suppletive, and heteroclitic paradigms.

9 Further challenges to OO-correspondence

The case studies presented in §6–§8 provide the most direct challenge to the theory of OO-correspondence: in all three cases, the necessary surface bases appear to be unavailable. However, transderivational theories face other questions, briefly noted in §5: what expressions can qualify as surface bases, and how are they selected?; should OO-identity be symmetrical, base-prioritizing, or both?


(See chapter 83: paradigms.) Whilst these problems have attracted a great deal of attention in the literature, the fact that OO-correspondence fails to preserve much of the corroborated empirical content of cyclic theory has generally prompted less discussion. One key instance is the Russian Doll Theorem (§3). For example, (43) reports the incidence of /l/-darkening in three English dialects where the process has not yet become foot-based (i.e. where /l/ remains light in village: see §7).14 From this evidence one can reliably infer a pattern of diachronic evolution instantiating the Russian Doll Theorem: (43) is a perfect match for (12) and (13).

(43)          Healey    heal-ing    heal it    heal    darkening of rhymal /l/ applies at . . .
      RP      l         l           l          ɫ       PL    (conservative)
      Am1     l         l           ɫ          ɫ       WL
      Am2     l         ɫ           ɫ          ɫ       SL    (innovative)
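The stratal logic behind (43) can be stated mechanically. In the following minimal sketch (an illustration only, with the prosodic facts hand-coded), each dialect darkens rhymal /l/ at a single level, and /l/ escapes just in case a vowel already follows it within the relevant domain; because the domains are nested, only Russian-Doll rows can be generated:

FORMS = {   # does a vowel follow /l/ within the stem, the word, or the phrase?
    "Healey":   {"SL": True,  "WL": True,  "PL": True},
    "heal-ing": {"SL": False, "WL": True,  "PL": True},
    "heal it":  {"SL": False, "WL": False, "PL": True},
    "heal":     {"SL": False, "WL": False, "PL": False},
}
LEVEL = {"RP": "PL", "Am1": "WL", "Am2": "SL"}

def l_allophone(form, dialect):
    # /l/ surfaces dark iff it is still rhymal at the level where darkening applies.
    return "l" if FORMS[form][LEVEL[dialect]] else "ɫ"

for d in ("RP", "Am1", "Am2"):
    print(d, [l_allophone(f, d) for f in FORMS])
# RP  ['l', 'l', 'l', 'ɫ']
# Am1 ['l', 'l', 'ɫ', 'ɫ']
# Am2 ['l', 'ɫ', 'ɫ', 'ɫ']

Whatever level is chosen, a pattern violating the nesting, such as *[hiːɫ, hiː.ɫɪŋ, hiː.lɪt], is underivable here.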

In a transderivational analysis, however, the inevitability of the Russian Doll pattern disappears. For example, Hayes (2000: 102) proposes two separate OO-identity constraints to capture the facts in (43): Am1 shows an effect of high-ranking OO-Ident(Phrasal); Am2 shows effects of both OO-Ident(Phrasal) and OO-Ident(Morphological). (44)

a. hiːɫ ~ hiː.ɫɪt    OO-Ident(Phrasal)
b. hiːɫ ~ hiː.ɫɪŋ    OO-Ident(Morphological)

By factorial typology, however, these two constraints can generate an impossible dialect with [hiːɫ, hiː.ɫɪŋ, hiː.lɪt], in violation of the Russian Doll Theorem. All that is needed is a constraint hierarchy of the following type: (45)

*Coda[l] >> OO-Ident(Morphological) >> *[ɫ] >> OO-Ident(Phrasal)
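That this hierarchy does generate the impossible dialect can be checked with a toy evaluation. The following sketch is illustrative only: violation profiles are hand-coded, and *Coda[l] is simplified so as to bite only in the bare citation form; it selects precisely [hiːɫ, hiː.ɫɪŋ, hiː.lɪt]:

RANKING = ["*Coda[l]", "OO-Ident(Morphological)", "*[ɫ]", "OO-Ident(Phrasal)"]
BASE_IS_DARK = True    # the citation form heal [hiːɫ] has dark ɫ

def violations(form, candidate):
    dark = "ɫ" in candidate
    return {
        "*Coda[l]":                int(form == "heal" and not dark),
        "OO-Ident(Morphological)": int(form == "healing" and dark != BASE_IS_DARK),
        "OO-Ident(Phrasal)":       int(form == "heal it" and dark != BASE_IS_DARK),
        "*[ɫ]":                    int(dark),
    }

def winner(form, candidates):
    # Lexicographic comparison of violation vectors implements strict ranking.
    return min(candidates,
               key=lambda c: [violations(form, c)[con] for con in RANKING])

print(winner("heal",    ["hiːl", "hiːɫ"]))         # hiːɫ
print(winner("healing", ["hiː.lɪŋ", "hiː.ɫɪŋ"]))   # hiː.ɫɪŋ
print(winner("heal it", ["hiː.lɪt", "hiː.ɫɪt"]))   # hiː.lɪt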

To avoid this result, Hayes (2000: 102) resorts to stipulating an innate fixed ranking in Universal Grammar: (46)

OO-Ident(Phrasal) >> OO-Ident(Morphological)

The explanatory loss is plain to see: whereas (11) was a corollary, (46) is an axiom.

14 For RP, see Cruttenden (2001: 201). The dialect I have here labeled “Am1” is the one described by Sproat and Fujimura (1993); see §7 above. For “Am2,” see Olive et al. (1993: 212–215). The implicational relationships implicit in (43) are confirmed by the rates of variation reported by Hayes (2000: 98): see (16) above.


It thus looks as if a great deal of work remains to be done before OO-correspondence can claim to have superseded the cycle.

ACKNOWLEDGMENTS

In this chapter I have drawn on research previously presented at meetings in Groningen, Leipzig, Manchester, Rhodes, Toulouse, and Warsaw; I am grateful to the organizers and audiences on all these occasions for their comments and suggestions. I am also indebted to Sonia Colina, Andrew Nevins, Tobias Scheer, Patrycja Strycharczuk, and Jochen Trommer.

REFERENCES

Bachrach, Asaf & Andrew Nevins (eds.) 2008. Inflectional identity. Oxford: Oxford University Press. Bailyn, John F. & Andrew Nevins. 2008. Russian genitive plurals are impostors. In Bachrach & Nevins (2008), 237–270. Baker, Brett. 2005. The domain of phonological processes. In Ilana Mushin (ed.) Proceedings of the 2004 Conference of the Australian Linguistics Society. Sydney: Department of Linguistics, University of Sydney. Available (August 2010) at http://hdl.handle.net/2123/112. Barry, Martin C. 1985. A palatographic study of connected speech processes. Cambridge Papers in Phonetics and Experimental Linguistics 4. 1–16. Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst (ROA-259). Bermúdez-Otero, Ricardo. 1999. Constraint interaction in language change: Quantity in English and Germanic. Ph.D. dissertation, University of Manchester & University of Santiago de Compostela. Bermúdez-Otero, Ricardo. 2003. The acquisition of phonological opacity. In Jennifer Spenader, Anders Eriksson & Östen Dahl (eds.) Variation within Optimality Theory: Proceedings of the Stockholm Workshop, 25–36. Stockholm: Department of Linguistics, Stockholm University. Bermúdez-Otero, Ricardo. 2006. Phonological change in Optimality Theory. In Keith Brown (ed.) Encyclopedia of language and linguistics, 2nd edn, vol. 9, 497–505. Oxford: Elsevier. Bermúdez-Otero, Ricardo. 2007a. Word-final prevocalic consonants in English: Representation vs derivation. Paper presented at the Old World Conference in Phonology 4, Rhodes. Handout available (August 2010) at www.bermudez-otero.com/OCP4.pdf. Bermúdez-Otero, Ricardo. 2007b. Diachronic phonology. In Paul de Lacy (ed.) The Cambridge handbook of phonology, 497–517. Cambridge: Cambridge University Press. Bermúdez-Otero, Ricardo. 2007c. Marked phonemes vs marked allophones: segment evaluation in Stratal OT. Paper presented at the workshop on segment inventories, 30th GLOW Colloquium, Tromsø. Available (August 2010) at www.bermudez-otero.com/GLOW2007.pdf. Bermúdez-Otero, Ricardo. 2007d. Morphological structure and phonological domains in Spanish denominal derivation. In Fernando Martínez-Gil & Sonia Colina (eds.) Optimality-theoretic studies in Spanish phonology, 278–311. Amsterdam & Philadelphia: John Benjamins. Bermúdez-Otero, Ricardo. 2008. [ðə swɪˈŋɒmɪtə ˈtɜːnd əˈɡɛnst sə ˈmɪŋɪs ˈkæmbəl]: Evidence for Chung's generalization. Paper presented at the 16th Manchester Phonology Meeting. Handout available (August 2010) at http://www.bermudez-otero.com/16mfm.pdf.


Bermúdez-Otero, Ricardo & Richard M. Hogg. 2003. The actuation problem in Optimality Theory: Phonologization, rule inversion, and rule loss. In D. Eric Holt (ed.) Optimality Theory and language change, 91–119. Dordrecht: Kluwer. Bermúdez-Otero, Ricardo & Ana R. Luís. 2009. Cyclic domains and prosodic spans in the phonology of European Portuguese functional morphs. Paper presented at the Workshop on the Division of Labour between Morphology and Phonology and 4th Meeting of the Network Core Mechanisms of Exponence, Meertens Institute, Amsterdam. Available (August 2010) at www.bermudez-otero.com/bermudez-otero&luis.pdf. Bermúdez-Otero, Ricardo & April McMahon. 2006. English phonology and morphology. In Bas Aarts & April McMahon (eds.) The handbook of English linguistics, 382–410. Cambridge, MA & Oxford: Blackwell. Bobaljik, Jonathan D. 2008. Paradigms (Optimal and otherwise): A case for skepticism. In Bachrach & Nevins (2008), 29–54. Booij, Geert & Jerzy Rubach. 1984. Morphological and prosodic domains in Lexical Phonology. Phonology Yearbook 1. 1–27. Borowsky, Toni. 1993. On the word level. In Hargus & Kaisse (1993), 199–234. Bradley, Travis G. & Ann Marie Delforge. 2006. Systemic contrast and the diachrony of Spanish sibilant voicing. In Randall Gess & Deborah Arteaga (eds.) Historical Romance linguistics: Retrospective and perspectives, 19–52. Amsterdam & Philadelphia: John Benjamins. Campbell, Fiona, Bryan Gick, Ian Wilson & Eric Vatikiotis-Bateson. 2010. Spatial and temporal properties of gestures in North American English /r/. Language and Speech 53. 49–69. Carter, Paul & John Local. 2007. F2 variation in Newcastle and Leeds English liquid systems. Journal of the International Phonetic Association 37. 183–199. Chomsky, Noam. 2001. Derivation by phase. In Michael Kenstowicz (ed.) Ken Hale: A life in language, 1–52. Cambridge, MA: MIT Press. Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row. Chomsky, Noam, Morris Halle & Fred Lukoff. 1956. On accent and juncture in English. In Morris Halle, Horace G. Lunt, Hugh McLean & Cornelis H. van Schooneveld (eds.) For Roman Jakobson: Essays on the occasion of his sixtieth birthday, 65–80. The Hague: Mouton. Colina, Sonia. 2006. Voicing assimilation in Ecuadoran Spanish: Evidence for Stratal OT. Paper presented at 36th Linguistic Symposium on Romance Languages, Rutgers University. Colina, Sonia. 2009. Sibilant voicing in Ecuadorian Spanish. Studies in Hispanic and Lusophone Linguistics 2. 1–18. Connell, Bruce & Amalia Arvaniti (eds.) 1995. Papers in laboratory phonology IV: Phonology and phonetic evidence. Cambridge: Cambridge University Press. Corbett, Greville G. 2007. Deponency, syncretism, and what lies between. In Matthew Baerman, Greville G. Corbett, Dunstan Brown & Andrew Hippisley (eds.) Deponency and morphological mismatches, 231–269. Oxford: Oxford University Press. Cruttenden, Alan (ed.) 2001. Gimson’s pronunciation of English, 6th edn. London: Edward Arnold. Davis, Stuart. 2005. Capitalistic v. militaristic: The paradigm uniformity effect reconsidered. In Downing et al. (2005), 107–121. Donegan, Patricia J. 1993. On the phonetic basis of phonological change. In Charles Jones (ed.) Historical linguistics: Problems and perspectives, 98–130. London: Longman. Downing, Laura J., T. A. Hall & Renate Raffelsiefen (eds.) 2005. Paradigms in phonological theory. Oxford: Oxford University Press. Dressler, Wolfgang U. 1985. Morphonology: The dynamics of derivation. Ann Arbor: Karoma. Ellis, Lucy & W. J. 
Hardcastle. 2002. Categorical and gradient properties of assimilation in alveolar to velar sequences: Evidence from EPG and EMA data. Journal of Phonetics 30. 373–396.


Garrett, Andrew & Juliette Blevins. 2009. Analogical morphophonology. In Kristin Hanson & Sharon Inkelas (eds.) The nature of the word: Essays in honor of Paul Kiparsky, 527–545. Cambridge, MA: MIT Press. Gick, Bryan. 1999. A gesture-based account of intrusive consonants in English. Phonology 16. 29–54. Giegerich, Heinz J. 1999. Lexical strata in English: Morphological causes, phonological effects. Cambridge: Cambridge University Press. Hammond, Michael. 1999. The phonology of English: A prosodic optimality-theoretic approach. Oxford: Oxford University Press. Hardcastle, W. J. 1995. Assimilations of alveolar stops and nasals in connected speech. In Jack Windsor Lewis (ed.) Studies in general and English phonetics: Essays in honour of Professor J. D. O’Connor, 49–67. London: Routledge. Hargus, Sharon & Ellen M. Kaisse (eds.) 1993. Studies in Lexical Phonology. San Diego: Academic Press. Harris, James W. 1983. Syllable structure and stress in Spanish: A nonlinear analysis. Cambridge, MA: MIT Press. Harris, John. 1985. Phonological variation and change: Studies in Hiberno-English. Cambridge: Cambridge University Press. Harris, John. 2003. Release the captive coda: The foot as a domain of phonetic interpretation. In Local et al. (2003), 103–129. Harris, John. 2006. Wide-domain r-effects in English. UCL Working Papers in Linguistics 18. 357–379. Hay, Jennifer & Margaret Maclagan. 2010. Social and phonetic conditioners on the frequency and degree of intrusive /r/ in New Zealand English. In Dennis R. Preston & Nancy Niedzielski (eds.) A reader in sociophonetics, 41–70. Berlin & New York: Mouton de Gruyter. Hay, Jennifer & Andrea Sudbury. 2005. How rhoticity became /r/-sandhi. Language 81. 799–823. Hayes, Bruce. 1982. Extrametricality and English stress. Linguistic Inquiry 13. 227–276. Hayes, Bruce. 2000. Gradient well-formedness in Optimality Theory. In Joost Dekkers, Frank van der Leeuw & Jeroen van de Weijer (eds.) Optimality Theory: Phonology, syntax, and acquisition, 88–120. Oxford: Oxford University Press. Heselwood, Barry. 2009. R vocalisation, linking R and intrusive R: Accounting for final schwa in RP English. Transactions of the Philological Society 107. 66–97. Holst, Tara & Francis Nolan. 1995. The influence of syntactic structure on [s] to [œ] assimilation. In Connell & Arvaniti (1995), 315–333. Inkelas, Sharon. 1989. Prosodic constituency in the lexicon. Ph.D. dissertation, Stanford University. Jansen, Wouter. 2004. Laryngeal contrast and phonetic voicing: A laboratory phonology approach to English, Hungarian, and Dutch. Ph.D. dissertation, University of Groningen. Jensen, John T. 2000. Against ambisyllabicity. Phonology 17. 187–235. Kager, René. 1999. Surface opacity of metrical structure in Optimality Theory. In Ben Hermans & Marc van Oostendorp (eds.) The derivational residue in phonological Optimality Theory, 207–245. Amsterdam & Philadelphia: John Benjamins. Kahn, Daniel. 1976. Syllable-based generalizations in English phonology. Ph.D. dissertation, MIT. Keating, Patricia. 1988. Underspecification in phonetics. Phonology 5. 275–292. Kenstowicz, Michael. 1996. Base identity and uniform exponence: Alternatives to cyclicity. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, 365–394. Salford: ESRI. Kiparsky, Paul. 1979. Metrical structure assignment is cyclic. Linguistic Inquiry 10. 421–441. Kiparsky, Paul. 1982. Lexical morphology and phonology. In Linguistic Society of Korea (ed.) Linguistics in the morning calm, 3–91. Seoul: Hanshin.


Kiparsky, Paul. 2000. Opacity and cyclicity. The Linguistic Review 17. 351–365. Kochetov, Alexei & Marianne Pouplier. 2008. Phonetic variability and grammatical knowledge: An articulatory study of Korean place assimilation. Phonology 25. 399–431. Ladd, D. Robert & James M. Scobbie. 2003. External sandhi as gestural overlap? Counterevidence from Sardinian. In Local et al. (2003), 162–180. Lipski, John M. 1989. /s/–voicing in Ecuadoran Spanish: Patterns and principles of consonantal modification. Lingua 79. 49–71. Local, John, Richard Ogden & Rosalind Temple (eds.) 2003. Phonetic interpretation: Papers in laboratory phonology VI. Cambridge: Cambridge University Press. Marvin, Tatjana. 2002. Topics in the stress and syntax of words. Ph.D. dissertation, MIT. Mascaró, Joan. 1987. A reduction and spreading theory of voicing and other sound effects. Unpublished ms., Universitat Autònoma de Barcelona. Published 1995, Catalan Working Papers in Linguistics 4. 267–328. McCarthy, John J. 1991. Synchronic rule inversion. Proceedings of the Annual Meeting, Berkeley Linguistics Society 17. 192–207. McCarthy, John J. 1993. A case of surface constraint violation. Canadian Journal of Linguistics 38. 169–195. McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox. McMahon, April. 2000. Lexical Phonology and the history of English. Cambridge: Cambridge University Press. Minkova, Donka. 1991. The history of final vowels in English: The sound of muting. Berlin & New York: Mouton de Gruyter. Mohanan, K. P. 1982. Lexical Phonology. Ph.D. dissertation, MIT. Distributed by Indiana University Linguistics Club. Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris. Nolan, Francis. 1992. The descriptive role of segments: Evidence from assimilation. In Gerard J. Docherty & D. Robert Ladd (eds.) Papers in laboratory phonology II: Gesture, segment, prosody, 261–280. Cambridge: Cambridge University Press. Nolan, Francis, Tara Holst & Barbara Kühnert. 1996. Modelling [s] to [œ] accommodation in English. Journal of Phonetics 24. 113–137. Olive, Joseph P., Alice Greenwood & John Coleman. 1993. Acoustics of American English speech: A dynamic approach. New York: Springer. Orgun, Cemil Orhan & Sharon Inkelas. 2002. Reconsidering bracket erasure. Yearbook of Morphology 2001. 115–146. Raffelsiefen, Renate. 2005. Paradigm uniformity effects versus boundary effects. In Downing et al. (2005), 211–262. Robinson, Kimball L. 1979. On the voicing of intervocalic s in the Ecuadorian highlands. Romance Philology 33. 137–143. Rubach, Jerzy. 1996. Nonsyllabic analysis of voice assimilation in Polish. Linguistic Inquiry 27. 69–110. Scheer, Tobias. 2008. A lateral theory of phonology, vol. 2: How morpho-syntax talks to phonology: A survey of extra-phonological information in phonology since Trubetzkoy’s Grenzsignale. Unpublished ms., University of Nice. Selkirk, Elisabeth. 1996. The prosodic structure of function words. In James L. Morgan & Katherine Demuth (eds.) Signal to syntax: Bootstrapping from speech to grammar in early acquisition, 187–213. Mahwah, NJ: Lawrence Erlbaum. Sproat, Richard. 1993. Looking into words. In Hargus & Kaisse (1993), 173–195. Sproat, Richard & Osamu Fujimura 1993. Allophonic variation in English /l/ and its implications for phonetic implementation. Journal of Phonetics 21. 291–311. Steriade, Donca. 2000. Paradigm uniformity and the phonetics–phonology boundary. In Michael B. Broe & Janet B. Pierrehumbert (eds.) 
Papers in laboratory phonology V: Acquisition and the lexicon, 313–334. Cambridge: Cambridge University Press.


Strycharczuk, Patrycja. 2010. What’s in a word? Prosody in Polish voicing. Paper presented at the 18th Manchester Phonology Meeting. Available (August 2010) at http:// personalpages.manchester.ac.uk/postgrad/patrycja.strycharczuk/18mfmslides.pdf. Sugahara, Mariko & Alice Turk. 2009. Durational correlates of English sublexical constituent structure. Phonology 26. 477–524. Szpyra, Jolanta. 1989. The phonology–morphology interface: Cycles, levels and words. London & New York: Routledge. Trommer, Jochen. 2004. Albanian word stress. Unpublished ms., University of Osnabrück. Trommer, Jochen. 2006. Stress uniformity in Albanian: Morphological arguments for cyclicity. Paper presented at the Workshop on Approaches to Phonological Opacity, 29th GLOW Colloquium, Barcelona. Trommer, Jochen. 2009. Stress uniformity in Albanian: Morphological arguments for cyclicity. Unpublished ms., University of Leipzig. Tuinman, Annelie, Holger Mitterer & Anne Cutler. 2007. Speakers differentiate English intrusive and onset /r/, but L2 listeners do not. In Jürgen Trouvain & William J. Barry (eds.) Proceedings of the 16th International Congress of Phonetic Sciences, 1905–1908. Saarbrücken: Saarland University. Vennemann, Theo. 1972. Rule inversion. Lingua 29. 209–242. Wells, J. C. 1982. Accents of English. 3 vols. Cambridge: Cambridge University Press. Wells, J. C. 1990. Syllabification and allophony. In Susan Ramsaran (ed.) Studies in the pronunciation of English: A commemorative volume in honour of A. C. Gimson, 76–86. London: Routledge. Wheeler, Max W. 2005. The phonology of Catalan. Oxford: Oxford University Press. Wright, Susan & Paul Kerswill. 1989. Electropalatography in the analysis of connected speech processes. Clinical Linguistics and Phonetics 3. 49–57. Zsiga, Elizabeth C. 1995. An acoustic and electropalatographic study of lexical and postlexical palatalization in American English. In Connell & Arvaniti (1995), 282–302.

86 Morpheme Structure Constraints

Geert Booij

1 Introduction

Morpheme structure constraints are constraints on the segmental make-up of the morphemes of a language. A textbook example of such a constraint is that bnik is an impossible morpheme of English, whereas blik is a possible English morpheme that happens not to exist. Hence, bnik is a systematic gap in the morpheme inventory of English, whereas blik is an accidental gap in this inventory. This can be taken to imply that there is a morpheme structure constraint that prevents English morphemes from beginning with a /b/ followed by a nasal consonant. Halle (1959: 38) proposed to account for such distributional generalizations by means of morpheme structure rules, which define the class of possible morphemes of a language. Morpheme structure rules were conceived of as rules that fill in predictable specifications of the sound segments of a morpheme. For instance, in the case of English morphemes that begin with the consonant cluster bC, such as brick, it is predictable that the C must be a liquid, i.e. a non-nasal sonorant consonant. That is, the feature specifications [−nasal] and [+sonorant] of the second consonant of brick are predictable. They can therefore be omitted in the lexical phonological specification of the relevant morphemes. Morpheme structure rules fill in the blank cells of the lexical phonological matrix, and thus turn this underspecified matrix into a systematic phonological matrix, with all feature values of its segments specified. This is the underlying phonological form of a morpheme to which the phonological rules of a language apply. In sum, morpheme structure rules function as redundancy rules that specify predictable information, and at the same time they define the set of possible morphemes of a language. Stanley (1967) proposed to replace Halle's notion “morpheme structure rule” by the notion “morpheme structure condition” (MSC). All morpheme structure conditions function as redundancy statements with respect to fully specified lexical phonological matrices, which form the input for the phonological rules (P-rules). The notion “morpheme structure condition” as discussed above forms part of the theoretical machinery of classical generative phonology, but has been subject to debate. In this chapter this debate will be summarized. Before doing that, I will provide some examples of phonotactic properties of morphemes in §2. The problems raised by the concept of “morpheme structure condition” will then be discussed in §3–§5. These problems are the following:

(i) The redundancy problem: is there any need for a specific set of morpheme structure conditions, or can they be made to follow from other types of phonological rules or constraints? (§3 and §4)
(ii) The duplication problem: how can we avoid the same distributional generalization (for example the homorganicity of a nasal consonant and a following obstruent in consonant clusters) being expressed by both an MSC and a phonological rule (P-rule), and thus making the grammar unnecessarily complex? (§4)
(iii) The status of MSCs: are they absolute constraints, or statistical tendencies only? (§5)

The chapter will conclude with some observations on the expressive functions of morphemes with specific phonotactic properties (§6) and a summary of our findings on the status of MSCs (§7).

2 Morpheme structure conditions

The unequal distribution of phonemes across words and morphemes was an important topic of research in structuralist phonology, because distributional facts were interpreted as signaling the presence or absence of grammatical boundaries, as in the work of Trubetzkoy (Wiese 2001). For instance, in German the phoneme /j/ only occurs at the beginning of lexical morphemes (as in jagen ‘to hunt’ and its derivative verjagen ‘to chase away’). Hence, the presence of the /j/ is a positive signal of a left-edge morpheme boundary (van Wijk 1939: 125). Such facts show that the phonological and grammatical dimensions of linguistic structure are not completely autonomous, but are related in a systematic fashion (Jakobson 1949). The relation between the distribution of phonemes and grammatical units such as morphemes and words is therefore an aspect of the interface between phonology and morphology. Jakobson (1949) drew attention to the fact that different grammatical units may have different phonotactic properties. For instance, he observed that of the 23 Czech consonants, only eight are found in inflectional suffixes. Jakobson also mentioned that only the following consonants appear in the inflectional suffixes of English: /z d n ŋ/. Dutch exhibits a number of such asymmetries between lexical morphemes on the one hand and derivational and inflectional suffixes on the other (Booij 1995). For Dutch, the following generalizations hold: (1)

a. Suffixes may consist of consonants only (/s/, /t/, or a combination thereof).
b. Suffixes may begin with the vowel /ə/.
c. Suffixes may have /ə/ as their only vowel.

Lexical morphemes of Dutch, on the other hand, do not have the phonotactic possibilities listed in (1) for suffixes, and require the presence of at least one full vowel (that is, a vowel that is not /ə/; see chapter 26: schwa), and cannot be schwa-initial. Dutch prefixes cannot begin with a schwa either, but can have the schwa as their only vowel, as is the case for the Dutch prefix be- /bə/. Thus, we can sometimes tell from the phonological make-up of a morpheme whether it is a lexical morpheme or an affix. A famous type of morpheme structure constraint is the restricted distribution of consonants in Semitic roots (see chapter 108: semitic templates). Most Semitic roots are triliteral, that is, they contain three consonants, the consonantal skeleton. These skeletons are intercalated with vowels, and these vowel patterns are the exponents of grammatical information. Greenberg (1950) observed that the first two consonants of a Semitic CCC skeleton cannot be identical, whereas the last two can. Furthermore, homorganic consonants, i.e. consonants with the same place of articulation, are excluded, unless they are identical, even if they are the last two consonants. This is exemplified by the following distributional patterns in Arabic: (2)

*m-m-d
m-d-d     ‘to stretch’
f-r-r     ‘to flee’
*b-m-C
*C-b-m
*g-k-C
ʃ-k-k     ‘to split’
*ʃ-k-g

Similar facts are reported for Modern Hebrew in Bar-Lev (1978: 321): “well-formed roots contain only consonants from different places of articulation.” For instance, the following patterns can be observed in existing Modern Hebrew roots (cf. also Berent et al. 2002): (3)

labial – velar – dental     bagad    ‘to betray’
velar – labial – dental     gibor    ‘hero’
dental – velar – labial     dégem    ‘model’
labial – dental – velar     mélex    ‘king’
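The generalizations in (2) and (3) are easy to verify mechanically. The following minimal Python sketch is an illustration only, with deliberately coarse place classes rather than a serious feature geometry:

PLACE = {
    "b": "labial", "m": "labial", "f": "labial",
    "d": "dental", "t": "dental", "s": "dental", "r": "dental", "ʃ": "dental",
    "g": "velar", "k": "velar", "x": "velar",
}

def well_formed_root(c1, c2, c3):
    # The first two consonants may not be identical; adjacent homorganic
    # consonants are excluded unless they are identical.
    if c1 == c2:
        return False
    for a, b in ((c1, c2), (c2, c3)):
        if a != b and PLACE[a] == PLACE[b]:
            return False
    return True

assert not well_formed_root("m", "m", "d")   # *m-m-d
assert well_formed_root("m", "d", "d")       # m-d-d 'to stretch'
assert well_formed_root("ʃ", "k", "k")       # ʃ-k-k 'to split'
assert not well_formed_root("ʃ", "k", "g")   # *ʃ-k-g: k and g are homorganic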

Crucially, these constraints apply to morphemes only; hence they are tautomorphemic constraints. As McCarthy (1986: 209) observes, there is no Arabic root tatak, with two identical consonants, but there are inflected forms of verbs like ta-takallam ‘you converse’ in which the first t belongs to a prefix, and hence does not belong to the same morpheme as the second t. Thus the prohibition on identical consonants is not violated. McCarthy (1986) proposes to analyze this constraint as following from the Obligatory Contour Principle (OCP), which states that identical elements on the melodic tier of a morpheme are not permitted (see chapter 14: autosegments). If the OCP applies to the lexical representations of Arabic roots, and assuming that all autosegmental spreading in Arabic is rightwards, it is predicted that, of the following three structures, only the third one is well-formed (McCarthy 1986: 209): (4)

a.  C   V   C   V   C        b.  C   V   C   V   C        c.  C   V   C   V   C
    |       |       |             \     /        |            |        \     /
    |       |       |              \   /         |            |         \   /
    s       m       m                s           m            s           m


In (4a), the OCP is violated on the melodic tier, whereas in (4b) leftward spreading has taken place. Hence, if the V is /a/, the root *sasam is excluded, and only the root samam is well-formed. In order to exclude sequences of homorganic but not identical consonants as well, the OCP must be interpreted here as OCP-Place. That is, assuming that there is a separate tier for Place specifications of sounds, OCP-Place forbids adjacent identical specifications on the Place tier, and this excludes adjacent homorganic consonants (including identical ones, which by definition have the same specification on the Place tier).

Phonotactic properties of morphemes may also reveal that they belong to a particular stratum of the lexicon of that language, and may differentiate between native and borrowed morphemes (see chapter 95: loanword phonology). Itô and Mester (1995: 819) mention examples of constraints from several languages that are specific for native morphemes of those languages. Japanese is interesting in that its morphemes can be divided into four subclasses: Yamato (native), Sino-Japanese, Foreign and Mimetic. Each subclass is characterized by a set of constraints, some of which are valid for more than one subclass. For instance, Sino-Japanese roots consist of one syllable only, and Lyman's Law (morphemes contain at most one voiced obstruent) holds for Yamato morphemes only (Itô and Mester 1995). Dutch words that begin with pn- in the spelling, such as pneumatisch ‘pneumatic’ and pneumonie ‘pneumonia’, betray their non-native origin: pn- is a well-formed word-initial consonant cluster in Greek, but not in Dutch. This suggests that morphemes with pn- do not belong to the set of possible native morphemes of Dutch, and the constraint *[pn- partially characterizes the set of native Dutch morphemes. In English this constraint applies to all morphemes, and hence the combination pn- is realized as /n/.

The range of phonotactic patterns found in morphemes may be smaller than those in words. English morphemes, for instance, never end in a cluster of voiced obstruents (there are no morphemes like *lovd or *dubd), whereas such clusters do occur in complex words like past tense forms of verbs (as in loved /vd/, and dubbed /bd/). Dutch morphemes are subject to the constraint that voiced obstruent clusters only occur in complex words: morpheme-internally we only find clusters like /pt/ and /st/, but in complex words we find clusters like /bd/ and /zd/, as in the past tense forms eb-de /ɛbdə/ ‘receded’ and raas-de /raːzdə/ ‘raged’. The only exceptions to this Dutch MSC are a few loanwords like labda /lɑbdaː/ ‘lambda’ and budget /ˈbœdʒɛt/ ‘budget’. Hence, the occurrence of voiced obstruent clusters morpheme-internally makes the relevant Dutch morphemes recognizable as loans (Zonneveld 1983).

As observed by Shibatani (1973), MSCs may have a different status from phonological rules or constraints, in that loanwords are not necessarily adapted to the MSCs of a borrowing language, whereas the application of the phonological constraints cannot be suppressed. Hence, the Dutch loan labda keeps its voiced obstruent cluster, and is not pronounced as la[pt]a. Dressler (1985: 219–245) also provides examples from various languages of distributional patterns that are characteristic of morphemes.

In sum, there are distributional constraints that are characteristic of morphemes, but not of words in general. The question is whether and how they have to be accounted for by a specific type of constraint, the MSC.

3 The redundancy problem

As we saw above, in classical generative phonology constraints on the segmental composition of (lexical) morphemes are interpreted as lexical redundancy rules or morpheme structure constraints (MSCs) (Halle 1959, 1964; Stanley 1967; Chomsky and Halle 1968). For instance, in many languages, nasal consonants in morpheme-internal clusters share their place of articulation with a following consonant. This generalization can be expressed by omitting the place of articulation of the nasals in the lexical representation of the relevant morphemes. A lexical redundancy rule will then fill in the proper value for the feature [place], and thus derive fully specified underlying phonological representations to which the phonological rules of a language apply. Thus, in the phonological component of the grammar the set of rules that express static phonotactic generalizations is ordered to apply before the set of phonological rules that account for alternations. The two sets of rules (MSCs and phonological rules) together are considered to express all the phonotactic regularities of a language (Postal 1968: 214; see also chapter 7: feature specification and underspecification). The role and importance of MSCs have been questioned for a number of reasons. In the first place, as pointed out by Hooper (1972), the role of the syllable as a domain of phonotactic generalizations cannot be ignored. The notion “syllable” does not play any formal role in the type of generative phonology codified in Chomsky and Halle (1968), but since then a wealth of evidence for the crucial role of the syllable (and larger prosodic units such as the foot and the prosodic word) in phonological analysis has been amassed (Nespor and Vogel 1986). Constraints on syllable structure are by definition constraints on how phonemes can combine into larger units. Hence, a lot of constraints on phoneme sequences are in fact syllable structure constraints (Hooper 1972). For example, the constraint that an English word cannot begin with a consonant cluster of the type nasal + obstruent follows from the universal principle of syllable structure that the sonority of consonants must decrease towards the edges of the syllable (see chapter 49: sonority). Thus, sequences like *mpat and *ntak are impossible English morphemes. This means that morphemes must have a phonological composition that will lead to well-formed prosodic structures. A second argument for the syllable as a phonotactic unit is that we cannot determine whether a particular segmental string is ill-formed without taking syllabification into account. For instance, the consonant sequence /bkm/ is always phonotactically ill-formed in English, because there is no possible division across two syllables that leads to a sequence of well-formed syllables. On the other hand, the consonant cluster /kn/, which does not appear word-initially in English words, can appear word-internally, as in acne, because this word can be syllabified as ak.ne, with two well-formed syllables (dots indicate syllable boundaries). That is, the following generalization holds: (5)

A (grammatical) word is phonotactically well-formed iff it can be parsed exhaustively into one or more well-formed prosodic constituents.
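Generalization (5) amounts to an exhaustive-parsing check, which can be made concrete with a minimal sketch. The following is illustrative only: a naive regular expression stands in for the real inventory of well-formed syllables, and higher prosodic structure is ignored:

import re

SYLLABLE = re.compile(r"^[bkmn]?[ae][bkmn]?$")   # toy (C)V(C) syllable inventory

def parses(word):
    # True iff `word` can be split exhaustively into licit syllables.
    if word == "":
        return True
    return any(SYLLABLE.match(word[:i]) and parses(word[i:])
               for i in range(1, len(word) + 1))

print(parses("akne"))    # True: ak.ne, cf. acne
print(parses("abkme"))   # False: the /k/ of /bkm/ cannot be syllabified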

The class of well-formed syllables and higher prosodic constituents (foot, prosodic word) can be defined by the prosodification algorithm of each language, which is partially universal and partially language-specific. This algorithm groups the sounds of words into syllables, feet, and prosodic words (Rubach and Booij 1990). Alternatively, the grammar contains a set of ranked prosodic constraints that determine the optimal prosodification of a string of sounds, as in Optimality Theory (Kager 1999). We might therefore claim that MSCs are superfluous because phonotactic restrictions on morphemes can be seen as the effects of phonological constraints on the output forms of words. For instance, English does not have a morpheme abkmer, since this morpheme cannot be prosodified exhaustively: the /k/ cannot be made part of the first or the second syllable. Similarly, the reason that a Dutch lexical morpheme requires the presence of at least one full vowel is that, otherwise, such a morpheme cannot yield a well-formed prosodic word. Dutch suffixes, on the other hand, can be vowelless or contain the vowel /ə/ only, as mentioned in (1), because they always combine with a lexical morpheme. That is, the phonotactic shape of Dutch suffixes has to do with their being dependent on a host morpheme to which they attach. We might therefore consider these suffix-specific phonotactic properties as something that need not be expressed separately, since it follows from the mapping of morphological structure onto prosodic structure. Hooper (1972) offers a second argument against MSCs: formulating phonotactic constraints with the morpheme as domain may also lead to spurious generalizations. For instance, Dutch lexical morphemes of Romance origin may end in obstruent clusters that are unpronounceable in isolation: (6)

castr-eer     ‘castrate’
celebr-eer    ‘celebrate’
emigr-eer     ‘emigrate’
penetr-eer    ‘penetrate’

One might conclude that Dutch morphemes can end in clusters of the type /Cr/, but this generalization does not reveal what is really at stake: those morphemes are only acceptable because they are bound roots, obligatorily followed by a vowel-initial suffix. Hence, these consonant clusters will form proper syllable onsets, as in pe.ne.treer. Similar observations can be found in Kenstowicz and Kisseberth (1977: 145) for Tunica, and they therefore concluded that in such cases it is the word rather than the morpheme that is the domain of phonotactic constraints. Nevertheless, the occurrence of such root-final consonant clusters is revealing in the sense that they betray the Romance origin of those roots: Germanic roots of Dutch never have this form because they can be used as words without further suffixation. A third example of the role of prosody in the phonotactics of morphemes is that in many languages lexical morphemes are subject to prosodic minimality conditions. For instance, Dutch lexical morphemes are subject to the constraint that they consist of at least one heavy syllable (with either a long vowel or a short vowel followed by a consonant). That is, a lexical morpheme cannot consist of a light syllable only; bimoraicity is required. It is only in exclamations like hè /hɛ/ that the use of such light syllables with a short vowel is possible. Prosodic conditions on morphemes create a problem for the classical MSCs: the syllable structure of a morpheme is not part of its lexical representations, but a derived property. Therefore, MSCs cannot refer to derived prosodic properties such as bimoraicity (McCarthy 1998). The only way to circumvent this problem is to phrase the constraint in terms of segment sequences: a lexical morpheme must contain either a long vowel, or a short vowel followed by at least one consonant. However, we then miss the generalization that it is a prosodic syllable weight condition that is involved. Once more, this suggests that the segmental composition of morphemes is governed by phonological output conditions. A similar problem occurs when we want to express the following generalization for Dutch: “In mono-morphemic forms we do not find sequences of schwa-headed syllables” (van Oostendorp 1995: 141). Again, this MSC refers to the derived property of syllable structure (cf. Downing 2006 for a cross-linguistic survey of prosodic minimality conditions). In sum, we have to find a way in which prosodic constraints can account for at least part of the phonotactic constraints on morphemes.

3.1 Non-syllabic sequential constraints

Not all constraints on segmental sequencing can be reduced to syllable structure or prosodic minimality requirements. There are sequential constraints on consonant clusters that hold independently of the tautosyllabic or heterosyllabic status of these clusters. For instance, Yip (1991) proposes the following generalization for English (see also chapter 12: coronals):

(7) Consonant Cluster Condition
    In consonant clusters, consonants may have at most one articulator feature other than Coronal.

Thus, we find English clusters like /pt/ and /kt/ (apt, act), but not (tauto- or heterosyllabic) clusters like /kp/, /pk/, /km/, /mk/, /xm/, and /gm/ (loanwords like drachma and stigma are exceptions to this generalization). Note that the ill-formedness of such clusters does not follow from syllable structure constraints, since they could be heterosyllabic. Yet they do not occur. If we come across such clusters in words (as in zipcode and backpack), we can conclude that these words must be compounds, consisting of more than one lexical morpheme. An example of a sequential constraint that holds both for tautosyllabic and heterosyllabic sound sequences, observed for English by Davis (1991), and for Dutch by Booij (1995: 46), is that in the sequence sCVC the two Cs should not be identical, unless they are coronal. Here are some Dutch examples with labial and coronal consonants (such sequences of velar consonants do not occur for independent reasons): (8)

CVC                           sCVC
poep   /pup/   ‘shit’         *spoep   /spup/
mam    /mɑm/   ‘mother’       *smam    /smɑm/
toet   /tut/   ‘face’         stoet    /stut/   ‘procession’

This constraint is also valid for heterosyllabic sequences: they are not acceptable when followed by a vowel, as shown by forms like *spupo and *smama.
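Both (7) and the sCVC restriction can be restated as simple checks over segment strings. The following minimal sketch is illustrative only, over a toy segment inventory:

ARTICULATOR = {
    "p": "Labial", "b": "Labial", "m": "Labial",
    "t": "Coronal", "d": "Coronal", "n": "Coronal", "s": "Coronal", "l": "Coronal",
    "k": "Dorsal", "g": "Dorsal", "x": "Dorsal",
}

def obeys_ccc(cluster):
    # Yip's condition (7): at most one articulator other than Coronal.
    return len({ARTICULATOR[c] for c in cluster} - {"Coronal"}) <= 1

def obeys_scvc(c1, c2):
    # sCVC: the two Cs must not be identical, unless they are coronal.
    return c1 != c2 or ARTICULATOR[c1] == "Coronal"

assert obeys_ccc("pt") and obeys_ccc("kt")           # apt, act
assert not obeys_ccc("km") and not obeys_ccc("pk")   # *km; zipcode only as a compound
assert not obeys_scvc("p", "p")                      # *spoep
assert obeys_scvc("t", "t")                          # stoet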


The point that not all phonotactic constraints can be reduced to syllable structure constraints is particularly clear for word-edge constraints, which are discussed in the next subsection.

3.2 Word edges

The difference between syllable structure constraints and sequential constraints is stressed by Kristoffersen (2000: 46–48), in relation to the distribution of consonants at word edges. In Norwegian, the cluster tl- is a proper syllable onset. It occurs word-internally in words like Be.tlem ‘Bethlehem’ and A.tle (proper name). Yet Norwegian words never begin with this cluster. Kristoffersen also observed that, although Norwegian words never begin with pn-, a cluster that does not violate the Sonority Hierarchy Constraint on syllable structure, Norwegians have no difficulty in pronouncing loanwords like pneumatisk ‘pneumatic’. These observations imply that /tl-/ and /pn-/ are proper syllable onsets in Norwegian, and that the non-occurrence of initial /tl-/ and /pn-/ is not due to a syllable structure constraint, but to a constraint that holds for the left edge of Norwegian root morphemes or prosodic words. A similar example from Dutch is that lexical morphemes do not begin with pj-, tj-, or kj-; however, the diminutive suffix allomorphs -pje, -tje, -kje begin with these clusters, and hence these clusters do appear in word-internal syllable onsets, as in riem-pje ‘belt-dim’ with the prosodic structure ((rim)σ(pjə)σ)ω (ω = prosodic word, σ = syllable). Therefore, the non-occurrence of these clusters cannot be attributed to a syllable structure constraint. The word-initial sequences /pj- tj- kj-/ do occur in borrowed proper names for male persons, such as Pjotr, Tjeerd, Kjeld, and they do not cause pronunciation problems for speakers of Dutch. The edges of words may have special phonotactic properties, since they may either impose more restrictions than what syllable well-formedness requires, or allow for extra consonants compared to what is possible in syllables in general. The Norwegian examples above (no tl- or pn- at the beginning of a word) are a case in point. Other examples of more restricted phonotactics at word edges can be found in Booij (1983): in Huichol, for example, words cannot end in a consonant but syllables can (source: Bell 1976). In Polish, extra consonants may be added in word-initial position that violate the universal Sonority Sequencing constraint (Rubach and Booij 1990: 434; see also chapter 109: polish syllable structure): (9)

rwać     ‘tear’
rdza     ‘rust’
lgnąć    ‘stick’
mdły     ‘tasteless’
mnich    ‘monk’

In these words, a sonorant consonant is followed by a consonant of the same or higher degree of sonority, in violation of the Sonority Sequencing requirement that the sonority of consonants must increase towards the nucleus. The account that Rubach and Booij (1990) propose is that Polish prosodic words have an extra optional word-initial slot for an extrasyllabic consonant preceding the regular syllables, which is exempt from the requirements of the Sonority Sequencing condition. This analysis implies that allowing for these marked consonant clusters is not to be seen as a property of lexical morphemes, but of the prosodic words that correspond with such morphemes. The special phonotactics of word edges is dealt with in Optimality Theory in the form of alignment constraints (McCarthy and Prince 1993). The basic idea of this approach, which makes crucial use of ranked output constraints in computing the phonetic form of words, is that there are alignment constraints that require the alignment of prosodic and grammatical boundaries. According to McCarthy and Prince (1993), the language Axininca Campa has word-initial onsetless syllables, whereas word-internally a vowel hiatus must always be filled by an epenthetic consonant. The relevant alignment constraint blocks the insertion of an epenthetic consonant in word-initial position. If epenthesis took place, there would be no alignment of the left edge of the prosodic word with the left edge of the (vowel-initial) morpheme. That is, the alignment constraint is ranked higher than the constraint that penalizes empty onsets. Note, however, that this analysis does not directly express that the left edges of Axininca Campa morphemes can begin with a vowel, even though syllables in this language normally begin with a consonant. The alignment mechanism allows for a difference in make-up between the edges of morphemes and syllables, but does not express it.
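Returning to the Polish clusters in (9), the extrasyllabic-slot analysis of Rubach and Booij can be given a compact procedural rendering. The following minimal sketch is illustrative only, with a crude sonority scale; one word-initial consonant is treated as extrasyllabic and therefore exempt:

SONORITY = {**{c: 0 for c in "ptkbdgfsvzx"},   # obstruents
            "m": 1, "n": 1,                    # nasals
            "l": 2, "r": 2,                    # liquids
            "w": 3, "j": 3}                    # glides

def licit_onset(onset):
    # Sonority must rise strictly towards the nucleus.
    ranks = [SONORITY[c] for c in onset]
    return all(a < b for a, b in zip(ranks, ranks[1:]))

def licit_word_initially(cluster):
    # One word-initial consonant may occupy the extrasyllabic slot.
    return licit_onset(cluster) or licit_onset(cluster[1:])

assert licit_onset("pr")                  # rising sonority
assert not licit_onset("rv")              # falling sonority: *rv as a plain onset
assert licit_word_initially("rv")         # rwać: r is extrasyllabic
assert licit_word_initially("mn")         # mnich: m is extrasyllabic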

3.3 Phonotactic differences between simplex and complex words

As briefly mentioned at the end of §2, the range of phonotactic patterns in morphemes may be smaller than in complex words. Harris (1994) presents a number of observations on the phonotactic differences between simplex words and complex words in English. For instance, one will not find a heterosyllabic sequence /pt/, as in laptop, within a morpheme (except for loans like helicopter), even though a heterosyllabic cluster /pt/ would not violate the syllable structure constraints of English: a syllable can end in a /p/, and begin with a /t/. The same applies to the cluster /pw/: a proper name like Sopwith, which is historically a compound, is exceptional in this respect, and thus betrays its historical origin as a compound. Such opaque compounds tend to be adapted to the phonotactic patterns. The proper name Greenwich with the sequence /nw/ is now pronounced without the /w/, thus adapting to the phonotactic constraints for monomorphemic words (Harris 1994: 51). The observation that certain consonant clusters only occur at morpheme boundaries is often used in linguistic analyses for assigning multi-morphemic status to words (see chapter 46: positional effects in consonant clusters). For instance, many words in the Amerindian language Athapaskan are considered to be compounds, even though the constituents do not occur as words by themselves, because they contain consonant clusters that are characteristic of morpheme boundaries (Rice 2009: 546). Phonotactic differences between root morphemes and complex words have also been observed for vowel harmony (cf. §4 for a more detailed discussion of such facts for Turkish). The necessity of a separate morpheme structure condition on vowel combinations in roots is explicitly defended in the analysis of Hungarian vowel harmony in Vago (1976); see also chapter 123: hungarian vowel harmony. Harvey and Baker (2005: 1459) observed that in the Australian language Warlpiri, a language with vowel harmony with respect to the feature [round], the sequence [−round][+round] is not permitted for two consecutive vowels (with intermediate consonants) within roots, whereas the disharmonic sequence [+round][−round] is. They account for this difference not by assuming an agreement constraint, but by proposing separate constraints for each type of disharmonic sequence. In addition, there is a constraint of root identity that requires the feature specifications for [round] to be preserved in the output. Thus they do not need to assume two rules of vowel harmony, a morpheme structure constraint and a phonological constraint that applies to complex words, and the duplication problem is avoided. Note, however, that this analysis requires reference to the root, a type of morpheme, as the domain of an identity constraint. That is, reference to morphemes in phonological constraints is still required.

Different phonotactics may also play a role in recognizing the lexical category of a word. In Dutch, there is a marked difference in phonological make-up between simplex nouns and simplex verbs. Verbs tend to consist of at most two syllables; if there is a second syllable, it will end in a schwa followed by a liquid. Nouns, on the other hand, allow for a larger variety of phonological structures, such as those consisting of three or more syllables, or ending in a full vowel. It appears that speakers of Dutch are able to categorize words as nouns or verbs on the basis of such phonotactic knowledge (Don and Erkelens 2006).

In sum, the distributional properties of segments within morphemes relate to the phonological rules or constraints of the relevant language, but not all morpheme-internal phonotactics can be reduced to these more general phonological regularities. In the words of Stanley (1967: 397): “The constraints holding within single morphemes are more restrictive than the constraints which characterize larger units.”

4 The duplication problem

Stanley (1967) already noted the problem that assuming both MSCs and P-rules seems to lead to unnecessary complications of the grammar. For instance, Turkish has two general P-rules of vowel harmony that also predict the distribution of vowels within morphemes: all vowels agree in backness, and high vowels agree in roundness (see chapter 118: turkish vowel harmony). As Zimmer (1969: 310) points out:

The restrictions on vowel co-occurrence within almost all bases of Turkic origin are nearly the same as those just described for suffix vowels; thus for the “harmonic” part of the lexicon, there are two MSC’s which replicate, to a great extent, the vowel-harmony rules that determine the selection of vowels in suffixes. There is, however, a large number of loanwords to which these vowel harmony MSC’s do not apply – e.g. /günah/ ‘sin’, /kalem/ ‘pen’, /sosis/ ‘sausage’, /viraʒ/ ‘curve’.

In addition, there is an MSC that does not double as a P-rule, the Labial Consonant MSC (Zimmer 1969: 312):

(10)  After /a/, a [+high] vowel agrees in labiality with a preceding [+labial] consonant.
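Stated procedurally, MSC (10) is easy to check mechanically. The following Python sketch is purely illustrative: the ASCII segment classes (with 'I' standing in for dotless i and 'O' for front rounded ö) and the string-based representation are simplifying assumptions of mine, not part of Zimmer's analysis.

    # Check MSC (10): after /a/, a high vowel preceded by a labial consonant
    # must be round. Segment classes are simplified assumptions.
    HIGH_VOWELS = {"i", "y", "u", "I"}       # 'I' stands in for dotless i
    ROUND_VOWELS = {"y", "u", "o", "O"}      # 'O' stands in for o-umlaut
    LABIAL_CONS = {"p", "b", "m", "f", "v"}
    VOWELS = {"a", "e", "i", "o", "u", "y", "I", "O"}

    def obeys_labial_msc(segments):
        """True if every /a C[+labial] V[+high]/ window has a round vowel."""
        for i, seg in enumerate(segments):
            if seg != "a":
                continue
            saw_labial = False
            for nxt in segments[i + 1:]:     # consonants after /a/, then a vowel
                if nxt in VOWELS:
                    if nxt in HIGH_VOWELS and saw_labial and nxt not in ROUND_VOWELS:
                        return False
                    break
                saw_labial = saw_labial or nxt in LABIAL_CONS
        return True

    print(obeys_labial_msc(list("karpuz")))  # True: round /u/ after labial /p/
    print(obeys_labial_msc(list("karpIz")))  # False: unrounded high vowel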


An example of a morpheme that obeys MSC (10) is karpuz ‘watermelon’, in which the second vowel is round, even though the first vowel is non-round. This is an interesting MSC for the debate on the redundancy of MSCs, since it has no P-rule counterpart.

This duplication problem, already noted by Stanley (1967), is discussed by Anderson (1974: ch. 16). Anderson observes that many Turkish morphemes, such as kitap ‘book’, are disharmonic, but do not block the application of the vowel harmony rules once they have been suffixed. Anderson therefore concludes that we need both MSCs and P-rules for vowel harmony in Turkish, since they may be subject to different idiosyncrasies. In Anderson’s view, the relation between MSCs and P-rules dealing with vowel harmony is a functional one, which need not and cannot be expressed formally by unifying them into one rule. Shibatani (1973) proposes that such constraints should be considered both MSCs and Surface Phonetic Constraints (SPCs): a constraint can be marked as an MSC, an SPC, or both. Clayton (1976) argues that constraints that hold only for the underlying forms of morphemes are unmotivated, and do not reflect the speaker’s knowledge of his/her language; therefore, Clayton claims, Surface Phonetic Constraints suffice.

The duplication problem is also considered by Kiparsky (1982: 167–170), in the framework of Lexical Phonology (see also Kaisse and Shaw 1985: 25; chapter 94: lexical phonology and the lexical syndrome). In this framework, phonological rules apply cyclically, either in a structure-adding or in a structure-changing fashion. Rules apply in a structure-changing fashion only in derived environments, i.e. in environments created by the previous application of a morphological or a phonological rule (see Booij 2000 for a survey of this theory). Kiparsky (1982) proposed that there are no Morpheme Structure Rules. The lexical representations of morphemes are underspecified; that is, predictable properties are omitted. On the first cycle, phonological rules specify these features; i.e. they fill in the blanks. If a word is complex, the same rule can apply in a structure-changing fashion on the next cycle, since the complex word is a derived environment. For instance, the Dutch rule that requires obstruents to have the same specification for the feature [voice] as an adjacent obstruent can apply as a blank-filling rule to the underspecified feature matrix of /x/ in a word like achter /ɑxtər/ ‘back’, and it can apply in a structure-changing fashion in a complex word like as-bak ‘ash-tray’ (underlying /ɑs-bɑk/; phonetic form [ɑzbɑk]). In English, the place of articulation of the nasal consonant in damp can be left unspecified and filled in by the rule of Nasal Place Assimilation, whereas the same rule will change the underlying coronal nasal /n/ into [m] in the derived word compress (underlyingly con-press).

Such an analysis can also deal with exceptions to MSCs. For instance, the Dutch word imker /ɪmkər/ ‘bee-keeper’, which is synchronically a simplex word, violates the constraint on the homorganicity of nasal–obstruent clusters. In Kiparsky’s proposal, this is no problem: the nasal consonant is fully specified as labial, and the rule that predicts the feature [velar] for a nasal followed by /k/ is blocked from applying, because feature-changing applications of this rule are allowed in derived environments only. In the case of Turkish vowel harmony, the same solution would apply.
Disharmonic roots are fully specified, and therefore the P-rules of vowel harmony are blocked from applying to these morphemes, whereas they will apply in derived environments, to the vowels of the suffixes.

In short, in Kiparsky’s proposal the duplication problem is solved by abolishing the class of morpheme structure rules and having P-rules apply in two different fashions. However, not all types of generalizations over the phonological shape of morphemes mentioned above can be expressed this way. This applies in particular to prosodic conditions on the shape of morphemes.

4.2 MSCs in Optimality Theory: Lexicon optimization and output–output faithfulness

Optimality Theory (OT) does not allow for constraints on the inputs of phonological evaluation. Output constraints are the only mechanism for expressing phonotactic patterns. This idea of OT is referred to as the Richness of the Base hypothesis. For instance, there is no input constraint that forbids *bnik as a morpheme of English. The output constraints will penalize such a form, and evaluate it in such a way that the optimal output form is not faithful to it, but different, e.g. blik. Since forms such as bnik will never surface in English, it does not make sense to store an underlying form bnik for blik. This is the effect of lexicon optimization. Thus, the phonological output constraints of a language will be reflected by the input forms. This point of view is foreshadowed in Sommerstein (1974: 73), who argued that judgments about whether a sound sequence is a possible morpheme must be made on the basis of surface representations.

This idea is discussed in more detail in McCarthy (1998, 2002, 2005), and can be illustrated as follows. Suppose there is a language with the constraint that obstruents are voiceless at the end of a syllable, and with the suffix /-ən/ as the plural ending for nouns, as in [hut] – [hutən] ‘hat(s)’. Furthermore, this language has no alternations of the type [hut] – [hudən]. That is, morphemes that end in an obstruent will always end in a voiceless obstruent. Given the word [hut], the Richness of the Base hypothesis implies that we might assume the underlying form /hud/ for the singular form; the correct phonetic form [hut] will be computed anyway. However, in an optimal lexicon the underlying form to be chosen will be /hut/, because of lexicon optimization. This means that of the possible tableaux that select the right form, the most harmonic one will be selected, i.e. the one with the minimal number of constraint violations. The underlying form /hud/ would imply a violation of input–output (IO) faithfulness, unlike the underlying form /hut/. IO faithfulness requires the underlying form to be selected as the surface form, unless it is overruled by higher-ranked constraints. The optimal underlying form can thus be selected by comparing tableaux and choosing the most harmonic one. Lexicon optimization therefore makes restrictions on input forms superfluous.

If there are no MSCs, the question arises of how to account for constraints that hold for morphemes only. One example of a distributional difference between morphemes and words concerns the distribution of nasals in Dutch. Within morphemes, nasal consonants are always homorganic with a following obstruent (with the exception of imker; cf. §4.1). Hence we find damp /dɑmp/ ‘damp’, tand /tɑnt/ ‘tooth’, and dank /dɑŋk/ ‘thanks’, but no morphemes ending in */-mt -mk -ŋp -ŋt/. On the other hand, complex words such as the 3rd singular present forms of verbs always end in /-t/, preceded by all three types of nasals: klim-t ‘climbs’, zon-t ‘sunbathes’, zing-t /zɪŋt/ ‘sings’. If we assume a markedness constraint NC (nasals are homorganic with a following consonant), this constraint must be blocked from changing a verb form like klimt into klint. McCarthy (1998) argued that this can be achieved by making use of output–output (OO) correspondence constraints. If we rank the OO faithfulness constraints on the relation between a base word and its derivatives higher than the markedness constraint NC, a verb form like klimt cannot be changed to klint, because this would violate the requirement of correspondence between the stem of this inflected form and the verbal stem klim.

Nasal assimilation should not be blocked in all derived environments, however. In a prefixed word like compress, the prefix-final /n/ of con- does assimilate to the next /p/. This can be accounted for if we assume that the NC constraint is ranked higher than faithfulness to the underlying form of the prefix, /kɒn/. The more general observation is that affixes tend to adapt to roots rather than the other way around. Hence, in OT analyses it is often assumed that faithfulness constraints for affixes rank lower than those for roots (Alderete 2003). This implies that constraints have to be indexed for particular morphological categories such as root and affix. Therefore, we have to allow for reference to morphological domains in a system of phonological output constraints.
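The ranking logic behind this analysis can be made concrete with a toy strict-domination evaluator. The sketch below is my own illustration under simplified assumptions (violation counting over orthographic strings; only the /mt/ cluster is checked), not McCarthy's implementation; it shows that ranking OO faithfulness above NC selects klimt, while the reverse ranking would select klint.

    # Toy strict-domination evaluation: constraints are functions returning
    # violation counts, listed from highest- to lowest-ranked.
    def evaluate(candidates, ranked_constraints):
        """Return the candidate with the most harmonic violation profile."""
        def profile(cand):
            return tuple(c(cand) for c in ranked_constraints)
        return min(candidates, key=profile)

    BASE = "klim"   # the base word for OO correspondence

    def oo_faith(cand):            # segment-by-segment mismatch with the base stem
        return sum(a != b for a, b in zip(cand, BASE))

    def nc_homorganic(cand):       # assumption: 'mt' is the only offending
        return 1 if "mt" in cand else 0    # cluster relevant to this example

    print(evaluate(["klimt", "klint"], [oo_faith, nc_homorganic]))  # -> 'klimt'
    print(evaluate(["klimt", "klint"], [nc_homorganic, oo_faith]))  # -> 'klint'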

4.3 Domains and strata

Our conclusion so far is that, even if we do not allow for constraints on the underlying forms of morphemes, it should be possible to index a phonological output constraint for a particular morphological domain such as the lexical morpheme. This makes it possible to specify distributional constraints that hold for lexical morphemes only. For instance, the Dutch constraint that lexical morphemes and prefixes cannot begin with /Cj/, whereas suffixes can, is expressible by indexing this constraint for the relevant morphological domains.

As observed in §3, particular phonotactic properties may hold only for certain strata of the lexicon. This is discussed in detail for Japanese by Itô and Mester (1995, 1999, 2001), who argue that “phonological generalizations can be covert by being lexically partial: they hold within a subdomain of the lexical space, but are violated in peripheral areas occupied e.g. by loanwords or onomatopeia” (Itô and Mester 2001: 274). For instance, the Japanese palatalization constraint that changes /t/ into [tʃ] before /i/ (and thus also excludes tautomorphemic /ti/ sequences) does not affect loanwords like English tea and party. Therefore, Itô and Mester defend the idea of stratum-specific (rankings of) faithfulness constraints.

A Dutch example of a stratum-specific constraint was mentioned in §2: in native Dutch morphemes, morpheme-internal obstruent clusters are always voiceless, but this constraint does not hold for non-native morphemes such as labda /lɑbdaː/ ‘lambda’ or the brand name Mazda /mɑzdaː/. Such non-native morphemes preserve their foreign pronunciation. The word labda, for instance, will not be changed to [lɑptaː], and Dutch speakers will recognize it as a loan due to this phonological property. Another example is the word-initial cluster sk-, which does not occur in native Dutch words, but only in loans from English, e.g. scan and Skype. A similar distinction between native and non-native morphemes can be observed in languages with vowel harmony: non-native lexical morphemes may be disharmonic, and this is not changed by application of the vowel harmony constraints. For instance, the Hungarian noun sofőr ‘chauffeur’ is disharmonic (the first vowel is back, the second one front), and remains so, even though it selects its suffix vowels in accordance with the frontness/backness vowel harmony constraint.

In sum, phonological constraints may have to be indexed for particular morphological categories or for lexical strata.

5 Absolute constraints or tendencies?

A final important point of debate concerning MSCs is whether they are absolute constraints or just statistical tendencies. Zimmer (1969) investigated the psychological reality of the Labial Consonant MSC of Turkish (10) mentioned above. Recall that this constraint holds for morphemes only, and is not supported by the two P-rules of vowel harmony. Zimmer made up lists of pairs of nonsense words, and asked subjects, Turkish students in California, to determine which word of such a pair sounds more like a word that might actually occur in Turkish. In the case of word pairs where the P-rules of vowel harmony played a role, the results were as expected, with a strong preference for the word in accordance with the vowel harmony constraints. For the pairs that involved the Labial Consonant MSC, on the other hand, there was hardly any difference in number between expected responses (the words in accordance with the MSC) and unexpected responses. Zimmer (1969: 320) therefore concluded that an MSC that is not supported by a P-rule might not be internalized by native speakers of Turkish.

5.1 OCP-Place

The psychological reality of OCP-Place, discussed in §2, which excludes identical adjacent place specifications, has been investigated for speakers of Jordanian Arabic (Frisch and Zawaydeh 2001; Frisch et al. 2004). It appears that “Jordanian Arabic speakers do recognize systematic gaps that are violations of OCP-Place as different from accidental gaps involving unrelated consonant pairs” (Frisch and Zawaydeh 2001: 99), even though there are violations of OCP-Place. Frisch et al. (2004) argue that OCP-Place is not an absolute, universal constraint. They consider the constraint to reflect the generalizations that Arabic speakers make on the basis of their lexicon. OCP-Place is claimed to be a gradient constraint, since there are quite a number of words that violate it, but to different degrees: “Forms that violate the constraint to a lesser degree are more frequent than forms that violate the constraint to a greater degree” (Frisch et al. 2004: 182). Frisch et al. also point out that the co-occurrence of homorganic consonants that are non-adjacent (occurring in the first and third positions) is less restricted than the co-occurrence of adjacent homorganic consonants. In other words, the OCP-Place constraint is gradient, but psychologically real: “the native speaker knows an abstract but gradient OCP Place constraint (‘Roots with repeated homorganic consonants are unusual’) based on generalizations over the statistical patterns found in the lexicon” (Frisch et al. 2004: 216).

Frisch et al. also looked at the effect of OCP-Place on the borrowing of Italian verbs in Maltese, a variety of Arabic with many loans from Italian. The number of Italian verbs whose consonant patterns conform to OCP-Place is significantly higher than that of the Italian verbs that violate it (though the latter verbs may also be borrowed, and adapted to Maltese). This again supports the psychological reality of such a constraint, without it being categorical. These findings suggest that OCP-Place is a gradient constraint that aims at the avoidance of similarity: the more similar adjacent consonants are, the more they are avoided. Speakers are thus able to make phonotactic generalizations about lexical morphemes, but the corresponding constraints need not be categorical.

Statistical tendencies in the composition of various morphological categories such as the root and the stem have been observed by Wiese (2001): 94% of all German roots begin with a consonant, and 96% of all German roots end in a consonant. In OT, this can be expressed by alignment constraints that require the left and right edges of a root to coincide with the feature [+cons]. For those roots that violate the constraint, IO faithfulness will preserve the vowel at the edges. Note in particular that the tendency to have consonants at the end of roots does not follow from a syllable constraint, since the universally most unmarked syllable type is the open syllable. Thus, this type of distribution may function as a boundary signal.
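Edge statistics of the kind Wiese reports are straightforward to estimate from a root list. The sketch below uses a hypothetical five-root sample and a crude orthographic vowel set purely for illustration; with a real lexicon the same computation yields percentages like those cited above.

    # Estimate what proportion of roots begin and end in a consonant.
    VOWELS = set("aeiouyAEIOUY")

    def edge_stats(roots):
        starts_c = sum(root[0] not in VOWELS for root in roots)
        ends_c = sum(root[-1] not in VOWELS for root in roots)
        n = len(roots)
        return starts_c / n, ends_c / n

    roots = ["hand", "berg", "tag", "auge", "tisch"]   # hypothetical sample
    c_initial, c_final = edge_stats(roots)
    print(f"{c_initial:.0%} C-initial, {c_final:.0%} C-final")  # 80%, 80%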

5.2 Constraints on underlying forms

Dutch exhibits intriguing constraints on sequences of vowels followed by fricatives. The basic generalization is that a vowel is short before /f s/, whereas it is long before /v z/. Let us call this the VZ constraint. Due to the effect of Final Devoicing, the constraint that obstruents are voiceless at the end of a syllable, this constraint can only be observed directly if the fricative is not morpheme-final. The following morphemes illustrate this constraint:

(11)  short vowel                long vowel                 excluded
      effen  /ɛfən/  ‘even’      even  /eːvən/  ‘even’      */ɛvən/, */eːfən/
      dissel /dɪsəl/ ‘pole’      vezel /veːzəl/ ‘fiber’     */ɛzəl/, */eːsəl/
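As a rough illustration, the VZ generalization can be stated as a checker over broad transcriptions. The representation below (a ':' marking long vowels, ASCII stand-ins such as 'E' and '@' for ɛ and ə) is an assumption made for this sketch, and the checker deliberately ignores the restriction to intervocalic position discussed below.

    # Rough VZ checker: short vowel + /f s/, long vowel + /v z/.
    SHORT_PAIRS = {"f", "s"}    # expected after a short vowel
    LONG_PAIRS = {"v", "z"}     # expected after a long vowel
    VOWEL_STARTS = "aeiouE@"

    def obeys_vz(transcription):
        """transcription: list like ['E', 'f', '@', 'n'] (effen)."""
        for prev, seg in zip(transcription, transcription[1:]):
            if seg in SHORT_PAIRS and prev.endswith(":"):
                return False    # long vowel + voiceless fricative
            if (seg in LONG_PAIRS and not prev.endswith(":")
                    and prev[0] in VOWEL_STARTS):
                return False    # short vowel + voiced fricative
        return True

    print(obeys_vz(["E", "f", "@", "n"]))    # effen: True
    print(obeys_vz(["e:", "v", "@", "n"]))   # even: True
    print(obeys_vz(["e:", "f", "@", "n"]))   # *eefen: False

Note that loanwords like mazzel, discussed next, would fail this checker; that is the expected behavior for a statistical rather than absolute constraint.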

This constraint is violated by a few loanwords like mazzel /mɑzəl/ ‘good luck’ and puzzel /pʏzəl/ ‘puzzle’, and by the native morpheme oefen- /uːfən/ ‘to exercise’. This shows that this constraint is not an absolute condition on pronounceability, but a statistical generalization about morphemes. This VZ constraint seems to apply to intervocalic sequences only, since we do find long vowels followed by /f s/ at the end of morphemes, as in the singular forms of the following nouns:

(12)  graaf [ɣraːf] ‘earl (sg)’     grav-en [ɣraːvən] (pl)
      kaas  [kaːs]  ‘cheese (sg)’   kaz-en  [kaːzən]  (pl)

However, we can interpret this constraint as also applying to morpheme-final sequences if we assume that it holds for the underlying forms of morphemes. Morphemes like graaf and kaas end in a voiced fricative underlyingly, as shown by their plural forms, and hence the underlying forms of these morphemes are /ɣraːv/ and /kaːz/, respectively. There are a few exceptions, such as the non-native word graaf ‘graph’, with the plural form graf-en [ɣraːfən]. In the case of /s/ vs. /z/, the number of exceptions is much higher, since there are a number of verbs like eis-en [ɛisən] ‘require (inf)’ and ruis-en [rœysən] ‘rustle (inf)’, in which the diphthong, which counts as a long vowel, is followed by [s]. Speakers of Dutch may thus recognize a plural form such as grafen as non-native, whereas the phonetic form of its singular graaf [ɣraːf] does not betray this stratal property.

If we allow for phonotactic constraints on underlying forms that cannot be observed in the corresponding surface forms, this enables us to make generalizations about the kinds of alternations we may find in a language (Booij 1999). This topic is also broached by Ernestus and Baayen (2003), who raise the question to what extent the occurrence of alternations between morpheme-final voiceless and morpheme-final voiced obstruents in Dutch is predictable. There appear to be clear regularities. For instance, if a Dutch morpheme ends in a long vowel plus a labial stop, that stop is always an underlying /p/; that is, we do not find the alternation [p – b] for such morphemes. In the case of the fricatives discussed above, we saw that the length of the preceding vowel is a strong predictor of whether the final obstruent is underlyingly voiced or voiceless, although a stronger one for /f v/ than for /s z/.

The question is whether language users possess this kind of knowledge. Ernestus and Baayen (2003) tested this by asking subjects to make past tense forms for nonsense words. If the underlying form of the verbal root morpheme ends in a voiceless obstruent, the past tense suffix -te /tə/ will be chosen, and -de /də/ otherwise. It appeared that language users do make use of the phonotactic tendencies involved: there is a strong correlation between the proportion of -te/-de choices for nonsense morphemes and the proportion for existing morphemes with a similar phonotactic make-up. Ernestus and Baayen (2003) therefore concluded that the speaker chooses an underlying representation for a nonsense morpheme that makes it resemble similar morphemes in the lexicon. As was the case for the Arabic roots discussed in §5.1, such phonotactic generalizations concerning morphemes may be statistical rather than absolute in nature. Moreover, they may pertain to underlying forms, in which properties are present that may not be accessible in the surface form.

This kind of knowledge about the types of alternation that occur in a language may also be formalized without restricting such MSCs to underlying forms. Consider singular/plural pairs of nouns in Dutch with a stem-final obstruent, such as hoed [hut] – hoed-en [hudən] ‘hat (sg, pl)’. The voice specification of the stem-final obstruent of the morpheme hoed can only be determined on the basis of the plural form. The plural form is the most informative form of the paradigm (Albright 2005, 2008), and we may assume that it is stored in lexical memory. The relation between the two forms can be specified by a schema of the following type:

(13)  [x]sg ↔ [x-ən]pl

(The symbol ↔ indicates the correlation between the two forms; x is a variable for a string of segments.) The plural form is the only reliable form for the computation of the underlying form, that is, the form on the basis of which new derived words and inflected forms can be computed. For instance, if we were to coin the adjective hoed-ig [hudəx] ‘hat-like’, the stem has to end in /d/, since the phonetic form [hutəx] is wrong. That is, an underlying form is not necessarily a lexically stored representation, but may be computed when necessary for a morphological operation. In the case of Dutch verbs, we need to compute the underlying form of the verbal stem to choose the proper form of the past tense suffix (-te or -de). Recall now the generalization for Dutch verbal stems that after a long vowel (VV) there is never a p/b alternation: if a singular form ends in [VVp], its plural form will never end in [VVbən]. This generalization also holds for nouns, and can be expressed by the following subschema of (13) (y is a variable for segmental strings):

(14)  [y VVp]sg ↔ [y VVp-ən]pl

In the case of the Dutch s/z alternations discussed above, we might assume subschemas like the following for nouns:

(15)  a.  [y VVs]sg ↔ [y VVz-ən]pl   (as in kaas – kazen ‘cheese (sg, pl)’)
      b.  [y Vs]sg  ↔ [y Vs-ən]pl    (as in kas – kassen ‘greenhouse (sg, pl)’)

If such schemas do not apply to all words, that is, if they are statistical generalizations only, they can be given a weight that indicates their probability. The generalizations expressed in (15) apply almost without exception to nouns, and are confirmed by irregular pairs of singular and plural nouns with vowel length alternation. Vowel Lengthening is no longer a regular rule of Dutch, but an idiosyncratic alternation, a relic of Prokosch’s Law, which applied in Early Germanic; it is illustrated here for the noun glas:

(16)  glas [ɣlɑs] ‘glass (sg)’    glaz-en [ɣlaːzən] ‘glass (pl)’

The correspondence between the length of the vowel and the [voice] specification given in (15) is maintained in these irregular pairs by the combination of vowel length alternation and choice of obstruent: the forms *[ɣlɑzən] and *[ɣlaːsən] are both ill-formed.

In sum, whether there is an alternation between a voiced and a voiceless stem-final obstruent in a Dutch lexical morpheme can only be determined with 100 percent certainty on the basis of inflected forms such as plurals. Yet the segmental composition of the lexical morpheme may give a clue, in some cases with almost 100 percent reliability. This type of knowledge may be modeled by constraints on the underlying forms of lexical morphemes, or by alternation schemas of the type proposed in (14) and (15).
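Schemas like (14) and (15) lend themselves to a simple pattern-and-replacement implementation. The sketch below is hypothetical: the ASCII transcriptions (':' for vowel length, '@' for schwa) and the weights attached to each schema are invented for illustration, not estimated from Dutch data.

    # Weighted alternation schemas: given a singular, predict the plural stem.
    import re

    # (pattern on the singular, replacement yielding the plural, weight)
    SCHEMAS = [
        (r"(?P<y>.*[aeiou]:)s$", r"\g<y>z-@n", 0.95),   # (15a) VV + s -> VVz-en
        (r"(?P<y>.*[aeiou])s$",  r"\g<y>s-@n", 0.98),   # (15b) V + s  -> Vs-en
        (r"(?P<y>.*[aeiou]:)p$", r"\g<y>p-@n", 1.00),   # (14)  VV + p -> VVp-en
    ]

    def predict_plural(singular):
        """Return (plural, weight) for the first matching schema, if any."""
        for pattern, repl, weight in SCHEMAS:
            if re.match(pattern, singular):
                return re.sub(pattern, repl, singular), weight
        return None

    print(predict_plural("ka:s"))   # ('ka:z-@n', 0.95)  cf. kaas - kazen
    print(predict_plural("kas"))    # ('kas-@n', 0.98)   cf. kas - kassen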

6 The expressive value of phonotactics

A final phenomenon to be discussed is that particular sound sequences may have a specific semantic or pragmatic value. Sound symbolism is the usual term for such phenomena (see Hinton et al. 1994 for a number of detailed studies). In particular, there are phonaesthemes: recurring sounds or sound sequences with a particular value. For instance, Marchand (1969: 397) argued that “/i/ is suggestive of the subjective, emotionally small and is therefore frequent with diminutive and pet suffixes,” and Bauer (1996) also found a cross-linguistic tendency for the use of this vowel in diminutive suffixes (see also Nichols 1971 on consonant sound symbolism in diminutives). According to Marchand (1969), the word-initial sequence fl- in words like flick, flip, flap, and flash is expressive of brisk, quick movement, and Marchand provides many examples of such phonaesthemes. Such sound combinations are not to be considered morphemes by themselves; yet they have a particular value. Hence, one may claim that the phonotactic properties of morphemes can have an expressive role.

Japanese has a class of mimetic morphemes, which are sound-imitating or manner-symbolic roots. These morphemes have to be minimally bimoraic, and usually they appear in reduplicated or some other bipodic form (Mester and Itô 1989: 268):

(17)  poko-poko  ‘up and down movement’
      noro-noro  ‘slow movement’
      paku-paku  ‘munching’
      pata-pata  ‘palpitating’

In her study of the expressive value of lexical patterns, Klamer (2002) observed that the violation of general phonotactic constraints in specific classes of lexical items may have an expressive value. That is, a marked semantic value correlates with a marked phonotactic structure. An example from Dutch is the class of monosyllabic words of the type /lVl/, that is, words with the same consonant /l/ in both onset and coda. Such words violate the phonotactic constraint, or tendency, of Dutch that the liquid consonants /l r/ in a syllable should be different. Words with this kind of phonotactics may have marked interpretations, as the following examples from Klamer (2002: 273) illustrate:

(18)  lal (v)  ‘jabber, babble, slur one’s words’
      lel (n)  ‘earlobe, clout, whopper’
      lil (v)  ‘quiver’
      lol (n)  ‘fun, lark, trick’
      lul (n)  ‘prick, jerk’
      lul (v)  ‘talk nonsense’

Morphemes in which the vowel of /lVl/ is long do not occur at all. In sum, the expressive value of phonotactic patterns within morphemes may be considered from a different angle: the violation of a constraint may have expressive value.

7 Conclusions

There is no doubt that there are distributional generalizations concerning the phonological make-up of morphemes that need to be expressed somehow in a proper phonological theory. The main theoretical issues are to what extent they can be made to follow from phonological generalizations that also hold for larger units than morphemes, and whether they are absolute constraints, or gradient constraints that express statistical tendencies. MSCs may also reveal different layers of the lexicon. Thus this chapter provides a range of data, observations, and considerations that can be used as a testing ground for the adequacy of theoretical phonological models.

ACKNOWLEDGMENTS I would like to thank Paulo Chagas de Souza, Moira Yip, two anonymous reviewers, and the editors for their constructive comments and advice on an earlier draft of this chapter. In writing the final version, I also profited from taking part in a phonology seminar on phonotactic learning taught by Adam Albright at MIT and Harvard in the Spring semester of 2010.

REFERENCES

Albright, Adam. 2005. The morphological basis of paradigm leveling. In Downing et al. (2005), 17–43.
Albright, Adam. 2008. Inflectional paradigms have bases too: Evidence from Yiddish. In Asaf Bachrach & Andrew Nevins (eds.) Inflectional identity, 271–312. Oxford: Oxford University Press.
Alderete, John. 2003. Structural disparities in Navajo word domains: A case for LexCat-Faithfulness. The Linguistic Review 20. 111–157.
Anderson, Stephen R. 1974. The organization of phonology. New York: Academic Press.
Bar-Lev, Zev. 1978. The Hebrew morphemes. Lingua 45. 319–331.
Bauer, Laurie. 1996. No phonetic iconicity in evaluative morphology. Studia Linguistica 50. 189–206.
Bell, Alan. 1976. The distributional syllable. In Alphonse Juilland (ed.) Linguistic studies offered to Joseph Greenberg, vol. 2, 249–262. Saratoga, CA: Anma Libri.
Berent, Iris, Gary F. Marcus, Joseph Shimron & Adamantios I. Gafos. 2002. The scope of linguistic generalizations: Evidence from Hebrew word formation. Cognition 83. 113–139.
Booij, Geert. 1983. Principles and parameters in prosodic phonology. Linguistics 21. 249–280.
Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press.
Booij, Geert. 1999. Morpheme structure constraints and the phonotactics of Dutch. In Harry van der Hulst & Nancy Ritter (eds.) The syllable: Views and facts, 53–68. Berlin & New York: Mouton de Gruyter.
Booij, Geert. 2000. The phonology–morphology interface. In Lisa Cheng & Rint Sybesma (eds.) The first Glot International state-of-the-art book: The latest in linguistics, 287–306. Berlin & New York: Mouton de Gruyter.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Clayton, Mary. 1976. The redundancy of underlying morpheme structure conditions. Language 52. 295–313.
Davis, Stuart. 1991. Coronals and the phonotactics of nonadjacent consonants in English. In Paradis & Prunet (1991), 49–60.
Don, Jan & Marian Erkelens. 2006. Vorm en categorie. Taal en Tongval 19. 40–53.
Downing, Laura J. 2006. Canonical forms in prosodic morphology. Oxford: Oxford University Press.
Downing, Laura J., T. A. Hall & Renate Raffelsiefen (eds.) 2005. Paradigms in phonological theory. Oxford: Oxford University Press.


Dressler, Wolfgang U. 1985. Morphonology: The dynamics of derivation. Ann Arbor, MI: Karoma.
Ernestus, Mirjam & R. Harald Baayen. 2003. Predicting the unpredictable: Interpreting neutralized segments in Dutch. Language 79. 5–38.
Frisch, Stefan A., Janet B. Pierrehumbert & Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22. 179–228.
Frisch, Stefan A. & Bushra A. Zawaydeh. 2001. The psychological reality of OCP Place in Arabic. Language 77. 91–106.
Greenberg, Joseph H. 1950. The patterning of root morphemes in Semitic. Word 6. 162–181.
Halle, Morris. 1959. The sound pattern of Russian: A linguistic and acoustical investigation. The Hague: Mouton.
Halle, Morris. 1964. On the basis of phonology. In Jerry A. Fodor & Jerrold J. Katz (eds.) The structure of language: Readings in the philosophy of language, 324–333. Englewood Cliffs, NJ: Prentice-Hall.
Harris, John. 1994. English sound structure. Oxford: Blackwell.
Harvey, Mark & Brett Baker. 2005. Vowel harmony, directionality and morpheme structure constraints in Warlpiri. Lingua 115. 1457–1474.
Hinton, Leanne, Johanna Nichols & John J. Ohala (eds.) 1994. Sound symbolism. Cambridge: Cambridge University Press.
Hooper, Joan B. 1972. The syllable in phonological theory. Language 48. 525–540.
Itô, Junko & Armin Mester. 1995. Japanese phonology. In John A. Goldsmith (ed.) The handbook of phonological theory, 817–838. Cambridge, MA & Oxford: Blackwell.
Itô, Junko & Armin Mester. 1999. The phonological lexicon. In Natsuko Tsujimura (ed.) The handbook of Japanese linguistics, 62–100. Malden, MA & Oxford: Blackwell.
Itô, Junko & Armin Mester. 2001. Covert generalizations in Optimality Theory: The role of stratal faithfulness constraints. Studies in Phonetics, Phonology, and Morphology 7. 273–299.
Jakobson, Roman. 1949. L’aspect phonologique et l’aspect grammatical du langage dans leurs interrelations. Reprinted 1963 in Roman Jakobson, Essais de linguistique générale, 161–175. Paris: Minuit.
Kager, René. 1999. Optimality Theory. Cambridge: Cambridge University Press.
Kaisse, Ellen M. & Patricia A. Shaw. 1985. On the theory of Lexical Phonology. Phonology Yearbook 2. 1–30.
Kenstowicz, Michael & Charles W. Kisseberth. 1977. Topics in phonological theory. New York: Academic Press.
Kiparsky, Paul. 1982. From Cyclic Phonology to Lexical Phonology. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part I, 131–175. Dordrecht: Foris.
Klamer, Marian. 2002. Semantically motivated lexical patterns: A study of Dutch and Kambera expressives. Language 78. 258–286.
Kristoffersen, Gjert. 2000. The phonology of Norwegian. Oxford: Oxford University Press.
Marchand, Hans. 1969. The categories and types of present-day English word formation. Munich: Beck.
McCarthy, John J. 1986. OCP effects: Gemination and antigemination. Linguistic Inquiry 17. 207–263.
McCarthy, John J. 1998. Morpheme structure constraints and paradigm occultation. Papers from the Annual Regional Meeting, Chicago Linguistic Society 32(2). 123–150.
McCarthy, John J. 2002. A thematic guide to Optimality Theory. Oxford: Oxford University Press.
McCarthy, John J. 2005. Optimal paradigms. In Downing et al. (2005), 170–210.
McCarthy, John J. & Alan Prince. 1993. Generalized alignment. Yearbook of Morphology 1993. 79–153.
Mester, Armin & Junko Itô. 1989. Feature predictability and underspecification: Palatal prosody in Japanese mimetics. Language 65. 258–293.


Nespor, Marina & Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Nichols, Johanna. 1971. Diminutive consonant symbolism in western North America. Language 47. 826–848.
Oostendorp, Marc van. 1995. Vowel quality and phonological projection. Ph.D. dissertation, University of Tilburg.
Paradis, Carole & Jean-François Prunet (eds.) 1991. The special status of coronals: Internal and external evidence. San Diego: Academic Press.
Postal, Paul. 1968. Aspects of phonological theory. New York: Harper & Row.
Rice, Keren. 2009. Athapaskan: Slave. In Rochelle Lieber & Pavol Stekauer (eds.) The Oxford handbook of compounding, 542–573. Oxford: Oxford University Press.
Rubach, Jerzy & Geert Booij. 1990. Edge of constituent effects in Polish. Natural Language and Linguistic Theory 8. 427–463.
Shibatani, Masayoshi. 1973. The role of surface phonetic constraints in generative phonology. Language 49. 87–106.
Sommerstein, Alan H. 1974. On phonotactically motivated rules. Journal of Linguistics 10. 71–94.
Stanley, Richard S. 1967. Redundancy rules in phonology. Language 43. 393–436.
Vago, Robert M. 1976. Theoretical implications of Hungarian vowel harmony. Linguistic Inquiry 7. 243–263.
Wiese, Richard. 2001. The structure of the Germanic vocabulary: Edge marking of categories and functional considerations. Linguistics 39. 95–115.
Wijk, Nicolaas van. 1939. Fonologie: Een hoofdstuk uit de structurele taalwetenschap. The Hague: Martinus Nijhoff.
Yip, Moira. 1991. Coronals, consonant clusters, and the coda condition. In Paradis & Prunet (1991), 61–78.
Zimmer, Karl. 1969. Psychological correlates of some Turkish morpheme structure conditions. Language 45. 309–321.
Zonneveld, Wim. 1983. Lexical and phonological properties of Dutch voicing assimilation. In Marcel van den Broecke, Vincent van Heuven & Wim Zonneveld (eds.) Sound structures: Studies for Antonie Cohen, 297–312. Dordrecht: Foris.

87 Neighborhood Effects

Adam Buchwald

1 Introduction

The organization of lexical knowledge at the phonological level has long been thought to incorporate some encoding of similarity neighborhoods: in the network structure of the lexicon, words that share phonological components are more closely connected to one another than words that are different. The notion of a “neighbor” as a similar word has provided tremendous insight into a variety of psycholinguistic phenomena related to spoken word recognition and spoken word production. The present chapter explores this notion of neighbor, focusing on the characterization of neighbors of a target word as the other words that are activated when that target is active. As we will see, this notion allows us to predict inhibitory or facilitatory effects depending on the task. These effects are well documented across tasks and participant populations. Following a brief description of the notion of the mental lexicon and competition in lexical access, we review research in spoken word recognition and production, and describe how the neighborhood construct (and similarity more generally) has been applied in phonologically based psycholinguistic research. We then examine how the notion of neighbor has been applied to domains such as language acquisition, language impairment, and other modalities of communication, including written language processing and audiovisual speech perception.

1.1 Neighbors compete for lexical selection

The term “mental lexicon” is typically used to refer to a network of words representing an individual’s lexical knowledge (Oldfield 1966; Forster 1978). Accounts of lexical knowledge posit multiple types of lexical organization, notably including meaning-based organization and form-based organization. The notion of similarity among lexical items varies at these levels. At the level of meaning-based organization, neighbors share semantic features (e.g. dog and cat are semantic neighbors, sharing several semantic properties, such as “four-legged,” “domesticated,” and “animal,” among others). With respect to form-based organization, neighbors are defined as words that share phonological and phonetic detail, such as cat [kæt] and cap [kæp] (e.g. Luce and Pisoni 1998; see also Greenberg and Jenkins 1964 and Landauer and Streeter 1973). The phonological neighbors of a word (e.g. cat) are the other words that share phonological structure (e.g. cap, hat, kit) and become activated when the word is activated in spoken word recognition and in word production. In short, these are the other words that compete for lexical access. As all psycholinguistic accounts of lexical processing posit separate levels of meaning-based and form-based processing, the neighbors of a word at each of these levels are the other words competing for lexical selection (Dell et al. 1997; Vitevitch and Luce 1998, 1999; Levelt et al. 1999; Luce et al. 2000; Rapp and Goldrick 2000).

The first part of the chapter discusses how the fundamental description of neighbors as competitors in lexical selection affects spoken word recognition and spoken word production. We begin with a discussion of some of the seminal results in word recognition that have helped to shape our understanding of neighborhood effects, and explore how neighborhood effects relate to other lexical and sub-lexical properties. This is followed by a review of the spoken word production literature, including both lexical-level processing and phonetic differences that arise due to neighborhood structure.

2 Neighborhood effects in spoken word recognition

While there are many differences among accounts of spoken word recognition, there is widespread agreement that when a word is heard, recognition involves a selection process in which the listener accesses a lexical item among several competing alternatives (Morton 1969, 1979; Marslen-Wilson and Welsh 1978; Elman and McClelland 1986; Norris 1994; Luce and Pisoni 1998; Luce et al. 2000; Norris et al. 2000; see Jusczyk and Luce 2002 for a review). Luce and Pisoni (1998) formalize the Neighborhood Activation Model (NAM), in which it follows from the nature of lexical competition that, ceteris paribus, words with many active neighbors (i.e. words in dense lexical neighborhoods) are harder to access than words with few active neighbors (i.e. words in sparse lexical neighborhoods), as there is more competition during lexical selection (see also Luce et al. 2000 for a computational implementation of NAM). Because lexical selection is a competitive process, factors that strengthen a word’s activation (e.g. high frequency) also help in recognizing that word. Therefore, high-frequency words from low-density neighborhoods are easier to access than low-frequency words from high-density neighborhoods (see also chapter 90: frequency effects).

Luce and Pisoni (1998) examined the performance of participants on a variety of word recognition tasks with “easy” stimuli (high-frequency words from low-density neighborhoods) and “hard” stimuli (low-frequency words from high-density neighborhoods). The tasks they used included perceptual identification (written response to aural presentation), lexical decision, and word repetition. For each task, they reported that participants responded faster and/or more accurately to words from sparse phonological neighborhoods than to words from dense phonological neighborhoods. The results indicated that the best predictor of performance was frequency-weighted neighborhood density, a measure comparing the frequency of a word to the total frequency of that word and its neighbors (see also Newman et al. 1997). These results are consistent with the claim that an increase in the number of neighbors leads to more competition in tasks that involve spoken word recognition, and that more competition makes recognition slower and less accurate.

To determine whether these neighborhood properties affect online word recognition errors, Vitevitch (2002b) analyzed a corpus of spoken word recognition errors (“slips of the ear”) and determined that words occurring in dense phonological neighborhoods were more prone to recognition errors than words in sparse neighborhoods. Taken together, these results suggest that the density of a word’s phonological neighborhood is directly related to the strength of competition in word recognition: words from high-density neighborhoods have stronger, more active competitors, and recognition of these words is slower and more likely to engender spoken word recognition errors than recognition of words from low-density neighborhoods (cf. Vitevitch and Rodríguez 2005).

Other studies have shown that neighborhood effects vary with the structure of a neighborhood, even when the number of neighbors remains constant. Vitevitch (2002a) reported that subjects were slower to perform shadowing and lexical decision tasks for words with high onset density (i.e. a large proportion of neighbors sharing the initial phoneme) than for words with lower onset density. Vitevitch (2007) reported on an auditory lexical decision task, a repetition task, and an AX (same–different) discrimination task using groups of words matched for neighborhood density but differing in neighborhood structure. In particular, Vitevitch used CVC words with different neighborhood spread – that is, the number of segment positions in the word that can be changed to form a new word. For example, cat has a spread of 3, as each segment can be changed to form a new word (e.g. hat, kit, cap), whereas mob has a spread of 2 (e.g. lob, mop, but there are no [mVb] words where V is not [ɑ]). Some stimulus words had neighbors that could be formed by substitutions in each segmental position, and others only had neighbors that could be formed by substituting one or two positions. Vitevitch (2007) found that participants were slower at responding to words with a spread of 3 than to words with a spread of 2, even when overall neighborhood size and frequency were controlled. The findings from these studies indicate that both the size and the structure of a lexical neighborhood affect spoken word recognition.

Magnuson et al. (2007) reported results from an eye-tracking study that provides more insight into the time course of lexical competition based on neighborhood properties. Magnuson et al. asked participants to perform an auditory word–picture matching recognition task: participants heard instructions to click on a picture of an object in an array of four pictures. Rather than relying on accuracy or reaction time paradigms, which infer the nature of cognitive processes from a single response, Magnuson et al. measured the participants’ gaze toward different pictures in the array over time; the eye-tracking paradigm thus allowed them to examine competition over the course of processing. Their findings indicated an early facilitatory effect of neighborhood density (revealed by participants looking toward the target), followed by a later inhibitory effect of the kind more typically seen in word recognition studies (discussed above). Thus, by examining processing throughout the course of lexical access, Magnuson et al. were able to uncover a more nuanced characterization of competition effects in lexical access.
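The frequency-weighted measure described above can be made concrete as the frequency of the target divided by the summed frequency of the target and its neighbors, so that higher values predict easier recognition. The following Python sketch, with invented frequencies and a toy substitution-only neighbor function, illustrates that intuition rather than the exact NAM decision rule.

    # Frequency-weighted neighborhood probability, sketched.
    def freq_weighted_density(word, freqs, neighbor_fn):
        """freq(word) / (freq(word) + summed neighbor frequencies)."""
        nbrs = neighbor_fn(word, set(freqs))
        return freqs[word] / (freqs[word] + sum(freqs[n] for n in nbrs))

    def nbr(word, lexicon):
        """Toy neighbor function: same-length words differing in one segment."""
        return {w for w in lexicon
                if len(w) == len(word) and w != word
                and sum(a != b for a, b in zip(w, word)) == 1}

    freqs = {"kat": 120, "hat": 300, "kit": 40, "kap": 10}   # invented counts
    print(round(freq_weighted_density("kat", freqs, nbr), 3))  # 120/470 = 0.255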
Thus far, we have discussed neighborhood density without providing a definition of a neighbor. The definition most commonly used in the word recognition literature is a word that differs from the target by a single segment deletion, addition, or substitution; in other words, words that differ from each other by an edit distance of 1. Thus, the neighbors of cat include hat (first-segment substitution), kit (second-segment substitution), cap (third-segment substitution), scat and cats (insertions), and at (deletion), among others. This definition of neighbor has been used in large part because many of the seminal studies have focused on either CVC words or monosyllabic words more generally (Luce 1986; Luce and Pisoni 1998; Benkí 2003). For these words, there is a fair amount of variation in neighborhood density. However, when one looks at the entire lexicon rather than at a subset, more than half of the words in the lexicon are “hermits” by this definition; that is, they do not have any neighbors (Vitevitch 2008).

Buchwald et al. (2008) and Felty et al. (2008) attempt to address this issue by inferring the most accurate definition of a neighbor from spoken word recognition errors. Felty et al. (2008) reported on a large database of spoken word recognition errors obtained by having participants identify words mixed with noise. Their 1428 stimulus words were designed to be representative of the English lexicon, covering a range of syllable lengths, stress patterns, frequencies, and familiarity, and were randomly selected from a larger lexical database. The word recognition errors reported by the participants were taken to be a direct reflection of the words that were highly competitive with the target. They reported that, particularly for longer words, the responses typically differed from the target by an edit distance greater than 1. Thus, while it remains likely that defining the lexical neighborhood by a phoneme edit distance of 1 can reasonably approximate neighborhood density effects in CVC words, it may not be the most appropriate definition of a lexical competitor.
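The edit-distance-1 definition is straightforward to implement. The sketch below is my own illustration (orthographic strings stand in for phoneme strings): it generates all one-edit forms of a word and intersects them with a lexicon.

    # Enumerate edit-distance-1 neighbors of a word against a lexicon.
    def one_edit_forms(word, inventory):
        """All strings one substitution, deletion, or insertion from word."""
        forms = set()
        for i in range(len(word)):
            forms.add(word[:i] + word[i + 1:])                          # deletion
            forms.update(word[:i] + p + word[i + 1:] for p in inventory)  # substitution
        for i in range(len(word) + 1):
            forms.update(word[:i] + p + word[i:] for p in inventory)      # insertion
        forms.discard(word)
        return forms

    def neighbors(word, lexicon, inventory):
        return one_edit_forms(word, inventory) & lexicon

    lexicon = {"kat", "hat", "kit", "kap", "at", "skat", "kats"}
    print(sorted(neighbors("kat", lexicon, set("abehikpst"))))
    # ['at', 'hat', 'kap', 'kats', 'kit', 'skat']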

2.1 Phonotactic probability and lexical neighborhoods

The nature of the competition effects we have been discussing focuses on the lexical selection process; that is, competition among lexical forms at a level of processing with word-level representations. However, it should be noted that the notion of lexical neighbor rests on some measure of sub-lexical phonetic similarity; for two words to be neighbors, they must share sub-lexical structure (e.g. cat and hat share [æt]). If a word has a lot of neighbors, then by definition there are many words that share its sub-lexical structure. Thus, it is worth asking whether having a lot of shared sub-lexical structure is directly related to the competition effect; that is, what is the role of phonotactics? “Phonotactics” is the term used to refer to the sequential arrangements of segments in the words of a language (see also chapter 33: syllable-internal structure). Consider the word hang [hæI], which is a perfectly well-formed word of English (i.e. it is phonotactically legal), with [h] in word-onset position and [I] in the word-final coda position. However, a non-word such as ngah *[Iæh] is not phonotactically legal, because it does not conform to the phonotactics of English, since English has phonotactic constraints both against [h] in coda and against [I] in onset. The fact that languages have different phonemic inventories and that some languages restrict the segments in certain positions is a direct reflection of categorical phonotactic constraints (see chapter 86: morpheme structure constraints). In addition to these within-language constraints, it has been shown that participants can learn new phonotactic rules that do not exist in their language over the course of an experiment (Dell et al. 2000).


Of course, with respect to lexical selection, all lexical items in a language are phonotactically legal. Thus, many investigations of phonotactics have been more concerned with probabilistic phonotactics; that is, the likelihood that segments appear in specific syllabic positions, and the likelihood with which segments may co-occur (Jusczyk et al. 1994). Jusczyk et al. (1994) showed that 9-month-old infants prefer listening to phonotactic patterns that occur often in their native language over patterns that are attested but occur less frequently. The concept of phonotactic probability is closely tied to that of neighborhood density: phonotactic sequences that have high probability will be shared by many words, and words containing those sequences will therefore be likely to belong to high-density neighborhoods. In a variety of tasks, including wordlikeness judgments and non-word repetition, English speakers have been shown to process non-words with high phonotactic probability faster, and to rate these non-words as more “English-like,” compared to their low phonotactic probability counterparts (Vitevitch et al. 1997; Vitevitch et al. 1999; Frisch et al. 2000).

This leads to an apparent contradiction: acoustic stimuli are processed faster and more accurately when they have high phonotactic probability, but more slowly and less accurately when they are words from dense neighborhoods. Vitevitch and Luce (1998, 1999) explored this apparent contradiction by contrasting words of high and low phonotactic probability (and neighborhood density) with non-words of high and low phonotactic probability. Vitevitch and Luce (1998) had participants perform a repetition task on these four stimulus types. Their results indicated that the competition effects of words from high-density neighborhoods slowed processing – even though those words had high phonotactic probability. In contrast, high phonotactic probability facilitated the processing of non-words. Thus, while words with many competitors were processed less efficiently, the processing of non-words was facilitated when they had substantial lexical support for their segments and segmental sequences, indicating that the mechanism for non-word processing is sensitive to phonotactic probability. These results were interpreted as evidence for separate lexical and sub-lexical representations and processing systems. High phonotactic probability facilitates sub-lexical processing in production tasks, whereas low neighborhood density facilitates lexical processing in perception tasks.

Bailey and Hahn (2001) raised the issue that the data of Vitevitch and Luce do not specifically distinguish lexical effects (i.e. neighborhood density) from sub-lexical effects (i.e. phonotactic probability). To address this issue, Bailey and Hahn presented participants with a wordlikeness judgment task in which they directly contrasted phonotactic probability and neighborhood density for non-words. They reported that the best predictors of wordlikeness judgments incorporated lexical neighborhood influences, phonotactic probability, and the relationship between the two. One important innovation of Bailey and Hahn’s work was to incorporate a measure of similarity among segments (e.g. one that treats /k/ and /g/ as more similar than /k/ and /b/), thus using a more linguistically sophisticated notion of phonetic similarity among words. Pylkkänen et al.
(2002; see also Pylkkänen and Marantz 2003) provided MEG (magneto-encephalography) support for the claim that lexical neighborhood effects and phonotactic probability effects are neurally distinct.

Storkel et al. (2006) further separated the effects of phonotactic probability and neighborhood density in a word-learning task. Participants had to learn novel words that varied orthogonally in phonotactic probability and neighborhood density. The results indicated that the participants were better at learning words from high-density neighborhoods than from low-density neighborhoods, but worse at learning high-probability words than low-probability words. Storkel et al. argued that these findings reveal a facilitatory effect of neighborhood density in encoding and integrating novel lexical representations with previously stored lexical representations.

2.2 Summary

Neighborhood effects in word recognition may be described as effects of competition: words with a lot of phonetic neighbors that are strong competitors have a lot of competition for lexical access. Thus, having a large number of neighbors typically increases response latency in word recognition tasks. The precise properties of the neighbors and their relationship to the target can modulate these effects to some extent; in other words, some competitors are stronger than others. More recent work has begun to examine both the changes in lexical competition over the time course of word recognition and how different degrees of sub-phonemic similarity affect the strength of competitors in word recognition. The relationship between lexical competition resulting from neighborhood size and other phonological phenomena driven by similarity effects (e.g. gradient OCP constraint based on consonant similarity: Frisch et al. 2004; Coetzee and Pater 2008) remains relatively unexplored.

3 Neighborhood effects in spoken word production

As noted above, the neighbors of a target word are the other words that become activated when the target word is active. In the process of spoken word recognition, these other words compete for lexical selection and can make lexical access slower and more error prone. In spoken word production, however, the opposite pattern is seen; words from dense neighborhoods are produced faster and are less error prone than words from sparse neighborhoods. In a variety of tasks examining speech production errors, Vitevitch and colleagues have reported that words from dense neighborhoods are produced faster and more accurately than words from sparse neighborhoods in English (Vitevitch 2002c; Vitevitch and Sommers 2003; Vitevitch et al. 2004).

Vitevitch and Sommers (2003) performed a tip-of-the-tongue (TOT) elicitation task in which the lexical retrieval process is thought to stall before production of a word’s form, even when the speaker may be able to access a variety of information about a word (meaning, gender, etc.). Vitevitch and Sommers reported that adults were significantly more likely to achieve a TOT state for words from sparse neighborhoods compared to words from dense neighborhoods. Similarly, Vitevitch (2002b) reported that speech errors were more likely for words from sparse neighborhoods compared to words from dense neighborhoods for two additional speech error-inducing tasks. These results are consistent with data from picture naming tasks as well (Vitevitch 2002c; Vitevitch et al. 2004), indicating that having a large number of similar words seems to facilitate the process of lexical retrieval in production.


Here we have another apparent contradiction: in spoken word recognition tasks, words from high-density phonological neighborhoods are recognized more slowly and less accurately, whereas in spoken word production tasks these words appear to be facilitated by their neighbors. The facilitatory effect of neighborhood density in speech production has been argued to follow in a straightforward fashion from interactive theories of speech production in which words are activated at a lexical level for lexical selection, and there is feedback from a “lower” sub-lexical level back up to this lexical level (Dell 1986, 1988; Dell et al. 1997; Rapp and Goldrick 2000; Dell and Gordon 2003; cf. Levelt et al. 1999, and see Vitevitch et al. 2004 for discussion). When a word is activated at the lexical level, it sends activation downstream to its phonemic constituents. Through the interactive process of feedback, the units representing the active phonemes send activation back up to the items at the lexical level that contain them. Thus when the unit(s) representing the word cat are activated on the lexical level, the units representing [k], [æ], and [t] on the sub-lexical level receive activation from this lexical unit. In a system with feedback, these sub-lexical units then send activation not only back up to cat, but also to the other words they are connected to (e.g. cap, hat, etc.), i.e. the neighbors of cat. This in turn provides more lexical support for the units representing those sounds, and thus makes the target word more likely to be produced than a semantic competitor which is not receiving additional activation from feedback. Thus, it is the interaction of lexical information and sub-lexical information that creates the facilitatory effect of high-density neighborhoods.
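A minimal sketch of this feedback dynamic is given below, assuming a toy four-word lexicon, uniform connection weights, and a fixed number of cycles; these are all illustrative simplifications, not the parameters of Dell (1986) or any of the models cited above. The point it makes concrete is that a target with neighbors accumulates more activation than a target without them, because the neighbors reinforce the target's own phonemes.

WORDS = {
    "cat": {"k", "ae", "t"}, "cap": {"k", "ae", "p"}, "hat": {"h", "ae", "t"},
    "dog": {"d", "ao", "g"},   # no neighbors in this toy lexicon
}
PHONEMES = {p for phones in WORDS.values() for p in phones}

def target_activation(target, cycles=3, rate=0.5):
    word_act = {w: (1.0 if w == target else 0.0) for w in WORDS}
    phon_act = {p: 0.0 for p in PHONEMES}
    for _ in range(cycles):
        # feedforward: every word passes activation down to its phonemes
        for w, phones in WORDS.items():
            for p in phones:
                phon_act[p] += rate * word_act[w]
        # feedback: every phoneme passes activation back up to all words containing it
        for w, phones in WORDS.items():
            word_act[w] += rate * sum(phon_act[p] for p in phones)
    return word_act[target]

print(target_activation("cat"))   # higher: neighbors cap and hat echo activation back to cat's phonemes
print(target_activation("dog"))   # lower: no neighbors reinforce dog's phonemes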

3.1 Phonetic effects of neighborhood density in speech production

In addition to examining speed and accuracy of production, researchers have looked at the effect of neighborhood density on a variety of phonetic and acoustic properties of speech production. One phenomenon that has been well documented is the expansion of the vowel space in the production of words from high-density neighborhoods compared to low-density neighborhoods (Munson and Solomon 2004; Wright 2004; Munson 2007). Each of these studies shows that vowels in words from dense neighborhoods are produced closer to the periphery of the vowel space, whereas vowels in words from sparse neighborhoods are produced closer to the center of the vowel space; in other words, the distinctiveness of the vowels in words from high-density neighborhoods is enhanced relative to vowels in words from low-density neighborhoods. Vowel space expansion such as is documented in words from high-density neighborhoods is typical of what speakers do when they are producing “clear” speech (Bradlow 2002), and is also associated with more intelligible speakers (Bradlow et al. 1996). Thus, when speakers are producing words from dense phonological neighborhoods, they adopt the strategies used in clear speech, referred to as hyperarticulation in Lindblom’s Hyperspeech and Hypospeech (H&H) theory (Lindblom 1990). Additionally, Scarborough (2004) found that vowels in low-frequency words from high-density neighborhoods exhibit more co-articulation (e.g. V-to-V co-articulation) than vowels in high-frequency words from low-density neighborhoods, which she argued (contra Lindblom 1990) is helpful to the process of lexical access for the listener.
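One simple way to quantify such expansion is the mean Euclidean distance of each vowel's (F1, F2) mean from the overall vowel centroid: larger dispersion indicates a more peripheral, expanded space. The sketch below uses invented formant values purely for illustration, and this metric is only one of several ways the studies above operationalize expansion.

from math import dist

def dispersion(vowel_means):
    """Mean Euclidean distance (Hz) of vowel (F1, F2) means from their centroid."""
    n = len(vowel_means)
    f1c = sum(f1 for f1, _ in vowel_means) / n
    f2c = sum(f2 for _, f2 in vowel_means) / n
    return sum(dist(v, (f1c, f2c)) for v in vowel_means) / n

# Hypothetical (F1, F2) means for /i a u/ in dense- vs. sparse-neighborhood words
dense  = [(300, 2300), (750, 1200), (320, 800)]   # more peripheral vowels
sparse = [(380, 2100), (680, 1300), (400, 950)]   # more centralized vowels

print(dispersion(dense) > dispersion(sparse))     # True: the dense set is more expanded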


In recent work, Baese-Berk and Goldrick (2009) address a specific type of neighborhood effect: the presence or absence of a specific minimal pair lexical item. They examined the productions of words beginning with a voiceless consonant that have a voiced consonant-initial cognate (e.g. cod ~ god) and compared them to voiceless-initial words without a voiced cognate (e.g. cop ~ *gop). Their data revealed that participants produce more extreme voice onset times (VOT) when producing words with the minimal pair neighbor than when producing words without a minimal pair neighbor. As VOT is a key indicator of the voicing contrast (Lisker and Abramson 1964), the enhanced VOT in the presence of a minimal pair neighbor can be viewed as another type of hyper-articulation due to a lexical item being from a lexical neighborhood with a particular neighborhood structure. As can be inferred from this limited review of phonetic consequences of neighborhood density and structure, there have been relatively few attempts to understand how lexical phonological properties such as neighborhood density can affect the acoustic details of speech production. Nevertheless, this remains a fruitful area of research for the future and will likely lead to further insights regarding the relationship between lexical representations in the lexicon and the processing systems that allow those representations to be articulated in speech production.

4 Lexical neighborhoods: Language acquisition and language impairment

4.1 Language acquisition

If the neighbors of a word are other phonetically similar words, then it stands to reason that, as we learn more words, the structure of our lexical neighborhoods will change. This issue of how lexical neighborhoods develop during language acquisition and how lexical neighborhoods affect children’s language ability has been addressed in the literature in several ways. Many of the attempts to study this issue focus on analyses of children’s phonological lexicons. In a straightforward analysis of age-appropriate lexicons, Charles-Luce and Luce (1990, 1995) reported that words in the vocabulary of younger children (5-year-olds) have fewer neighbors than the words in the vocabulary of older children (7-year-olds), which in turn have fewer neighbors than those words in the vocabulary of adults. In other words, as children learn more words, their lexical neighborhoods become denser. Charles-Luce and Luce argued that these findings indicate less need for detailed phonetic representations of words in younger children’s lexicons, as there are fewer confusable words (cf. Dollaghan 1994). One influential account of lexical acquisition holds that children’s initial phonological representations are more holistic and only become more differentiated by encoding phonological and phonetic structure after their lexicons have grown (Walley 1993; chapter 72: consonant harmony in child language). Consistent with this account, Metsala (1997; see also Garlock et al. 2001) reported that during a word recognition task using the gating paradigm, children required less phonetic material to recognize words from sparse neighborhoods as they got older and their vocabularies increased. She argued that this reflects a more
differentiated representation as children get older such that words can be recognized from their constituent parts rather than requiring the whole word for recognition. Storkel (2004b) reported that children learn words from dense neighborhoods earlier than they learn words from sparse neighborhoods. Storkel (2002) argued that in the developing child’s lexicon, words from dense neighborhoods have more detailed representations than words from sparse neighborhoods. Coady and Aslin (2003) reported that the early developing lexicon contains more words from high-density neighborhoods than the later lexicon, suggesting that infrequent sound patterns are learned later. Thus, they claimed that this is not consistent with an account of children’s lexical representations in which they start impoverished and become more detailed later (as in Walley 1993). In a direct comparison of neighborhood density effects in typically developing children, Munson et al. (2005) found that children at the age of 4;3 did not exhibit effects of neighborhood density on response time in a repetition task, but older children (7;2) did exhibit effects. However, children in both age groups showed an effect of phonotactic probability on onset-to-onset latency in non-word repetition. As with understanding spoken word processing in adults, the covariance of phonotactic probability and neighborhood density frequently makes effects of these properties quite difficult to disentangle (see Storkel 2004a, 2009 for discussion).
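Part of the reason the two measures covary is that both are computed over the same word list: a word built from frequent sound sequences tends, for that very reason, to have many neighbors. The sketch below computes position-specific biphone probabilities from a toy lexicon; the transcriptions are illustrative, and real metrics (e.g. the positional probabilities of Vitevitch and Luce 1999) are computed over large phonemic corpora, often weighted by word frequency.

from collections import Counter

lexicon = [("k", "ae", "t"), ("k", "ae", "p"), ("h", "ae", "t"),
           ("k", "ah", "t"), ("d", "ao", "g")]

counts, totals = Counter(), Counter()
for word in lexicon:
    for i in range(len(word) - 1):
        counts[(i, word[i], word[i + 1])] += 1   # this biphone at position i
        totals[i] += 1                           # any biphone at position i

probs = {key: n / totals[key[0]] for key, n in counts.items()}

def phonotactic_score(word):
    """Sum of positional biphone probabilities (one common scoring scheme)."""
    return sum(probs.get((i, word[i], word[i + 1]), 0.0) for i in range(len(word) - 1))

print(phonotactic_score(("k", "ae", "t")))   # 0.8: frequent biphones, and also many neighbors
print(phonotactic_score(("d", "ao", "g")))   # 0.4: each of its biphones occurs in only one word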

4.2 Lexical neighborhoods and language impairment

Effects of lexical neighborhood in language impairment have been studied in two broad populations: adults with acquired language impairment and children with developmental language impairment. While neighborhood effects have been reported in the language processing skills of each of these populations, there are differences that follow from differences between the populations. With respect to adults with acquired language impairment (i.e. aphasia), Gordon (2002) reported that aphasic speakers produced words from high-density phonological neighborhoods more accurately in both spontaneous speech and in controlled picture-naming tasks, with a strong effect of neighborhood frequency in production as well. Goldrick et al. (2010) reported on analyses comparing the phonological errors of aphasic speakers with the intended target words. These analyses inferred neighborhood structure by using the word production errors as an index of the other active lexical competitors. They reported that responses and targets were more likely to share position-specific segmental information than is predicted by chance – in other words, neighbors that share the same segments in the same position within a word were more likely to be produced in error than neighbors of the same edit distance in which the segments did not share segmental position. They also reported an independent effect of sharing the first segment: responses sharing the first segment with the target were more likely to be selected than other possible words of the same edit distance from the target. These effects seen in aphasic production errors presumably follow from the nature of the impairment. The individuals in these studies were adult speakers of the language with fully formed lexicons, and their later impairment had affected retrieval and production of words in production tasks. Thus, if the structure of the lexicon encodes similarity neighborhoods, it is unsurprising that these
similarity neighborhoods would affect production errors. This contrasts with individuals with developmental language deficits (including phonological impairment as well as hearing impairment, which may affect speech input), in which the impairment affects language processing alongside lexical development. Newman and German (2002) examined the performance of 7- to 12-year-old typically developing children and children with word-finding difficulties on a series of repetition tasks. Their analyses revealed a number of interesting findings. First, children from both groups were more accurate at repeating words from sparse neighborhoods compared to words from dense neighborhoods, a result that likely reflects the facilitation that words from sparse neighborhoods receive from having fewer competitors for word recognition. Second, children in both groups were more accurate at repeating words from neighborhoods with high average neighborhood frequency. This second result seemingly contrasts with the first: here we have a facilitatory effect of having frequent neighbors, whereas the first finding was an inhibitory effect of competition. To address this apparent contradiction, Newman and German (2002) examined the number of neighbors that had a higher frequency than the target. Words with fewer neighbors of higher frequency were repeated more accurately than words with many neighbors of higher frequency. Thus, both groups of children were more accurate in repeating words from sparse neighborhoods with high-frequency neighbors. This suggests that the word recognition process for both groups was affected by neighborhood size and structure, indicating that even children with word-finding difficulties appear to be sensitive to acoustic-phonetic similarity among words and organize their lexicons into lexical neighborhoods that are used in lexical processing tasks. Another group of children who have been examined with respect to neighborhood effects are children with cochlear implants (CIs). The children that have been studied are typically born profoundly deaf and receive their CI – a neural prosthesis – early in life to provide auditory input from which spoken language abilities may be developed (Svirsky et al. 2000). Kirk et al. (1995) examined accuracy on word recognition tasks for children with CIs, and reported that they were more accurate at identifying “easy” words (high-frequency words from sparse neighborhoods) compared to “hard” words (low-frequency words from dense neighborhoods). Other work with adults with CIs and other hearing-impaired adults has reported similar effects, though sometimes diminished relative to healthy controls (CI: Collison et al. 2004; hearing-impaired: Dirks et al. 2001).

4.3 Summary

Research with typically developing children and individuals with language impairment demonstrates that the effects of lexical neighborhood are not limited to neurologically intact adults. There is evidence that these populations are also sensitive to acoustic-phonetic similarity and organize their lexical knowledge accordingly. Given the articulatory differences between words from high-density neighborhoods and words from low-density neighborhoods discussed in §3, it remains unexplored whether children acquiring language are able to exploit these differences in some way that facilitates the organization of lexical knowledge into acoustic-phonetically defined similarity neighborhoods.

5 Lexical neighborhood effects in orthographic processing and audiovisual perception

Thus far, we have discussed lexical neighborhood effects in the perception, acoustics, and articulation of spoken language. This work has largely focused on the storage and processing of acoustic-phonetic lexical knowledge. In this section, we briefly discuss findings indicating that lexical knowledge is stored in terms of lexical similarity neighborhoods in other modalities (e.g. orthography and visual speech) as well as in multimodal processing (e.g. audiovisual speech perception). These findings suggest that the effects of lexical organization on phonological processing are part of a broader class of effects from the organization of lexical knowledge.

5.1 Orthographic similarity neighborhoods

The literature on the effect of orthographic similarity neighborhoods on written language processing is extremely large, and much of it is outside the scope of the present chapter. We focus here on parallels between findings in orthography and the findings discussed in this chapter on phonological similarity neighborhoods. In a seminal paper on visual word recognition, Coltheart et al. (1977) defined a word’s orthographic neighbors as those words that can be formed with one letter changed but letter positions remaining unchanged. This measure has become known as Coltheart’s N (or just N, as used here), and has been used quite frequently in studies of visual word recognition (i.e. reading) as well as written language production (i.e. spelling). Coltheart et al. reported that in a lexical decision task for visually presented words and non-words, participants were faster to respond to low-N non-words than high-N non-words. Coltheart et al. did not obtain the analogous finding for words. Later studies have been equivocal on the effect of N and neighborhood frequency for words in visual lexical decision tasks. It is common for studies to report a facilitatory effect of N (e.g. Andrews 1989, 1992; Sears et al. 1995) but an inhibitory effect of neighborhood frequency, or even of a single high-frequency neighbor (e.g. Grainger et al. 1989; Grainger 1990; Grainger and Segui 1990; Carreiras et al. 1997). Recent attempts to disentangle this issue have shown that some neighbors appear to be stronger competitors than others (Davis and Taft 2005), and that rigid coding of letter position and word length may not be the best predictor of the make-up of orthographic similarity neighborhoods (see Grainger 2008 for a review, and Yarkoni et al. 2008 for a proposal). In research on written language production there have been relatively few studies, but the results have consistently shown a facilitatory effect of high neighborhood density. For example, Roux and Bonin (2009) reported that healthy adults demonstrate faster and more accurate spelling of words from dense neighborhoods compared to words from sparse neighborhoods. Sage and Ellis (2006) reported on the spelling performance of BH, a brain-damaged individual with acquired dysgraphia. BH’s impairment affected the working memory mechanism in written language production, and she showed less impairment producing words from dense neighborhoods compared to words from sparse neighborhoods. The authors reported that there was a beneficial therapeutic effect for neighbors
of trained words; if a word was trained, spelling accuracy on the neighbors of that word improved. Taken together, these studies suggest that orthographic similarity neighborhoods affect the cognitive processing involved in producing written language.
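Coltheart’s N as defined above is straightforward to compute, as the minimal sketch below illustrates; the word list here is a made-up toy, not one of the stimulus sets from these studies.

def coltheart_n(word, wordlist):
    """Count same-length words differing from `word` in exactly one letter position."""
    return sum(len(w) == len(word) and
               sum(a != b for a, b in zip(word, w)) == 1
               for w in wordlist)

words = ["cave", "cove", "care", "gave", "wave", "view"]
print(coltheart_n("cave", words))   # 4: cove, care, gave, wave (view matches at no position)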

5.2 Similarity neighborhoods in visual and audiovisual speech processing

While researchers examining speech perception and spoken word recognition typically focus on the auditory modality, there is longstanding evidence that visual speech perception actively contributes to word recognition, even in normal-hearing adults under clear listening conditions (Sumby and Pollack 1954; McGurk and MacDonald 1976). Over the past two decades, characterizations of speech perception and the sensory processing of speech signals have become more focused on the multimodal nature of speech (Massaro 1987, 1998; Summerfield 1987; Massaro and Cohen 1995; Massaro and Stork 1998; Calvert et al. 2004; Kim et al. 2004; Bernstein 2005; Rosenblum 2005). One line of work on this topic has explored the processing of linguistic information conveyed in visual speech, including the ability to identify words from visual-only speech signals (e.g. Auer and Bernstein 1997; Lachs et al. 2000; Auer 2002; Mattys et al. 2002). Expanding on the notion of the perceptual equivalence class from Miller and Nicely (1955) (see also Shipman and Zue 1982 and Huttenlocher and Zue 1984), Auer and Bernstein (1997) developed the construct of the lexical equivalence class (see also Lachs et al. 2000 and Mattys et al. 2002), which is an equivalence class for words that are indistinguishable in the visual speech stream (e.g. pin and bin, which differ only in voicing, a feature that is not detectable in visual speech). Mattys et al. (2002) and Auer (2002) showed that words with a large lexical equivalence class – the visual speech similarity neighborhood – were less recognizable than words with a small lexical equivalence class for both hearing and deaf observers. This is consistent with the notion of other words in the similarity neighborhood as competitors for word recognition: words with few competitors are easier to recognize. Further, Tye-Murray et al. (2007) demonstrated that both auditory speech-based similarity neighborhoods and visual speech-based similarity neighborhoods are predictive of word recognition in audiovisual speech perception.
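The construct can be made concrete as follows: map each phoneme to a viseme class (a group of visually indistinguishable phonemes) and collect the words whose transcriptions collapse to the same viseme string. The grouping below is a crude illustrative assumption, not the empirically derived mapping of Auer and Bernstein (1997).

from collections import defaultdict

# Crude illustrative viseme classes: phonemes merged when visually similar
VISEME = {"p": "P", "b": "P", "m": "P",            # bilabials look alike
          "f": "F", "v": "F",                      # labiodentals look alike
          "t": "T", "d": "T", "n": "T", "s": "T"}  # some alveolars look alike

def equivalence_classes(lexicon):
    classes = defaultdict(list)
    for word, phones in lexicon.items():
        key = tuple(VISEME.get(p, p) for p in phones)   # collapse to viseme string
        classes[key].append(word)
    return list(classes.values())

lexicon = {"pin": ("p", "ih", "n"), "bin": ("b", "ih", "n"),
           "min": ("m", "ih", "n"), "fin": ("f", "ih", "n")}
print(equivalence_classes(lexicon))   # [['pin', 'bin', 'min'], ['fin']]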

5.3 Summary

This section has explored the effects of similarity neighborhoods in other modalities of processing, including orthographic processing, visual speech processing, and audiovisual speech processing. In each of these domains, we have seen effects of similarity neighborhoods on task performance, indicating that the organization of lexical knowledge into neighborhoods of similar words is a general cognitive mechanism that is not specific to acoustic stimuli.

6 Conclusion and future directions

The research on form-based lexical similarity neighborhoods has been influential in revealing the nature of lexical processing as competition among lexical nodes.

This conception helps explain a variety of findings in spoken word recognition, as well as word recognition in other modalities. In language production, two distinct types of effect of lexical neighbors have been reported. First, there is a facilitatory effect of neighborhood size in tasks of lexical access in word production. This is likely due to feedback from a segmental level of processing to a lexical level – when words similar to the target are activated, they activate their constituent segments. These segments in turn send activation back up to the lexical nodes they are connected to, thus providing a facilitatory effect of density. Second, there are reported effects of neighborhood structure on the acoustic details of speech production. Many of the reported effects lead to hyperarticulation of words in dense neighborhoods (e.g. expanded vowel space; more extreme VOT), which presumably helps to keep these words distinct from their neighbors. Many of these effects of neighborhood are also seen in acquisition, in cases of language impairment, and in other modalities of form-based lexical processing.

One issue that arises in each domain we discussed is the relationship between lexical effects of neighborhood structure and sub-lexical effects of phonotactic probability. The widely held view on these related phenomena is that they exert their effects on different levels of processing, with a large neighborhood associated with slower and less accurate word recognition at a lexical processing level but high probability associated with faster and more accurate recognition at a sub-lexical processing level. While drawing the distinction between the two levels is parsimonious and fits with both behavioral and neural evidence, it is worth noting that phonotactic probability is an emergent property of the structure of the lexicon; that is, phonotactic probability is defined over the lexicon. Thus, these two properties that appear to generate opposite effects in word recognition come from different levels of generalization over the same representations, and each can be thought of as an effect of similarity. The relationship between these similarity-based effects and other similarity effects that have been explored in phonology (e.g. Frisch et al. 2004; Coetzee and Pater 2008) remains somewhat unexplored in the literature.

Another direction of future research, which builds on some recent work, involves generating a more detailed definition of similarity and of what it means to be the competitor of a target word. While many of the previous investigations into lexical similarity neighborhood effects have used relatively coarse-grained metrics for discussing similarity, a variety of more recent efforts have been made to generate a more phonetically driven notion of similarity (Bailey and Hahn 2001; Hahn and Bailey 2005; Albright 2006; Felty et al. 2008). One promising direction involves using graph theory to better understand the network structure of the mental lexicon (Vitevitch 2008; Gruenenfelder and Pisoni 2009). Future attempts to refine these metrics will hopefully provide a more precise, gradient measure of similarity which can better predict effects and perhaps be useful in designing treatment protocols that help people with acquired form-based lexical impairment (as in Sage and Ellis 2006).

Finally, it is critical to consider the possible role of lexical neighborhood density and neighborhood effects in phonological alternations.
The notion that lexical frequency may play a role in phonological alternations has been advanced (e.g. Pierrehumbert 2001) with the argument that frequent words may be more likely to undergo sound change, but whether other lexical factors such as
neighborhood density affect the likelihood of a word participating in alternations has received less attention. Hall (2005) reported that the incidence of Canadian Raising (i.e. where /aɪ/ → [ʌɪ] before voiceless obstruents, as in write → [rʌɪt]; cf. ride → [raɪd]) among English speakers was actually affected by the neighbors sharing the CV biphone, even though Canadian Raising is thought to be conditioned by whether the following consonant is voiced. However, whether lexical neighborhoods can be shown to predict the application of a phonological process (as lexical frequency has been shown to) remains largely unexplored.1

1 With respect to a formal mechanism that could account for lexical neighborhood effects on phonological alternations, one prominent approach within Optimality Theory (Prince & Smolensky 1993) focuses on output–output faithfulness (Burzio 1994; Benua 1997) among morphemes and their related allomorphs. While this formal mechanism is typically applied to the different allomorphs of a single morpheme rather than to words from different morphological paradigms, it provides a blueprint of a formal mechanism that could be adapted to account for effects among similar words that are not from the same morphological paradigm. See also chapter 63: markedness and faithfulness constraints and chapter 83: paradigms.

REFERENCES

Albright, Adam. 2006. Gradient phonotactic effects: Lexical? grammatical? both? neither? Paper presented at the 80th Annual Meeting of the Linguistic Society of America, Albuquerque.
Andrews, Sally. 1989. Frequency and neighborhood effects on lexical access: Activation or search? Journal of Experimental Psychology: Learning, Memory and Cognition 15. 802–814.
Andrews, Sally. 1992. Frequency and neighborhood effects on lexical access: Lexical similarity or orthographic redundancy? Journal of Experimental Psychology: Learning, Memory and Cognition 18. 234–254.
Auer, Edward T., Jr. 2002. The influence of the lexicon on speech read word recognition: Contrasting segmental and lexical distinctiveness. Psychonomic Bulletin and Review 9. 341–347.
Auer, Edward T., Jr. & Lynne E. Bernstein. 1997. Speechreading and the structure of the lexicon: Computationally modeling the effects of reduced phonetic distinctiveness on lexical uniqueness. Journal of the Acoustical Society of America 102. 3704–3710.
Baese-Berk, Melissa & Matthew Goldrick. 2009. Mechanisms of interaction in speech production. Language and Cognitive Processes 24. 527–554.
Bailey, Todd M. & Ulrike Hahn. 2001. Determinants of wordlikeness: Phonotactics or lexical neighborhoods? Journal of Memory and Language 44. 568–591.
Benkí, Jose. 2003. Quantitative evaluation of lexical status, word frequency and neighborhood density as context effects in spoken word recognition. Journal of the Acoustical Society of America 113. 1689–1705.
Benua, Laura. 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst.
Bernstein, Lynne E. 2005. Phonetic processing by the speech perceiving brain. In Pisoni & Remez (2005), 79–98.
Bradlow, Ann R. 2002. Confluent talker- and listener-related forces in clear speech production. In Carlos Gussenhoven & Natasha Warner (eds.) Laboratory phonology 7, 241–273. Berlin & New York: Mouton de Gruyter.
Bradlow, Ann R., Gina Torretta & David B. Pisoni. 1996. Intelligibility of normal speech. I: Global and fine-grained acoustic-phonetic talker characteristics. Speech Communication 20. 255–272.
Buchwald, Adam, Robert A. Felty & David B. Pisoni. 2008. Neighbors as competitors: Phonological analysis of spoken word recognition errors. Journal of the Acoustical Society of America 123. 3328.
Burzio, Luigi. 1994. Principles of English stress. Cambridge: Cambridge University Press.
Calvert, Gemma A., Charles Spence & Barry E. Stein (eds.) 2004. The handbook of multisensory processes. Cambridge, MA: MIT Press.
Carreiras, Manuel, Manuel Perea & Jonathan Grainger. 1997. Effects of orthographic neighborhood in visual word recognition: Cross-task comparisons. Journal of Experimental Psychology: Learning, Memory and Cognition 23. 857–871.
Charles-Luce, Jan & Paul A. Luce. 1990. Similarity neighborhoods of words in young children’s lexicons. Journal of Child Language 17. 205–215.
Charles-Luce, Jan & Paul A. Luce. 1995. An examination of similarity neighborhoods in young children’s receptive vocabulary. Journal of Child Language 22. 727–735.
Coady, Jeffry A. & Richard N. Aslin. 2003. Phonological neighborhoods in the developing lexicon. Journal of Child Language 30. 441–469.
Coetzee, Andries W. & Joe Pater. 2008. Weighted constraints and gradient restrictions on place co-occurrence in Muna and Arabic. Natural Language and Linguistic Theory 26. 289–337.
Collison, Elizabeth A., Benjamin Munson & Arlene E. Carney. 2004. Relations among linguistic and cognitive skills and spoken word recognition in adults with cochlear implants. Journal of Speech, Language, and Hearing Research 47. 496–508.
Coltheart, Max, Eileen Davelaar, Jon Torfi Jonasson & Derek Besner. 1977. Access to the internal lexicon. In Stan Dornic (ed.) Attention and performance, vol. 6, 535–555. Hillsdale, NJ: Lawrence Erlbaum.
Davis, Colin J. & Marcus Taft. 2005. More words in the neighborhood: Interference in lexical decision due to deletion neighbors. Psychonomic Bulletin and Review 12. 904–910.
Dell, Gary S. 1986. A spreading activation theory of retrieval in sentence production. Psychological Review 93. 283–321.
Dell, Gary S. 1988. The retrieval of phonological forms in production: Tests of predictions from a connectionist model. Journal of Memory and Language 27. 124–142.
Dell, Gary S. & Jean K. Gordon. 2003. Neighbors in the lexicon: Friends or foes. In Niels O. Schiller & Antje S. Meyer (eds.) Phonetics and phonology in language comprehension and production: Differences and similarities, 9–38. Berlin & New York: Mouton de Gruyter.
Dell, Gary S., Kristopher D. Reed, David R. Adams & Antje S. Meyer. 2000. Speech errors, phonotactic constraints & implicit learning: A study of the role of experience in language production. Journal of Experimental Psychology: Learning, Memory and Cognition 26. 1355–1367.
Dell, Gary S., Myrna F. Schwartz, Nadine Martin, Eleanor M. Saffran & Deborah A. Gagnon. 1997. Lexical access in aphasic and nonaphasic speakers. Psychological Review 104. 801–838.
Dirks, Donald D., Sumiko Takayanagi, Moshfegh Anahita, P. Douglas Noffsinger & Stephen A. Fausti. 2001. Examination of the neighborhood activation theory in normal and hearing-impaired listeners. Ear and Hearing 22. 1–13.
Dollaghan, Charles A. 1994. Children’s phonological neighborhoods: Half empty or half full? Journal of Child Language 21. 257–271.
Elman, Jeffrey L. & James L. McClelland. 1986. The TRACE model of speech perception. Cognitive Psychology 18. 1–86.
Felty, Robert A., Adam Buchwald & David B. Pisoni. 2008. Lexical analysis of spoken word recognition errors. Journal of the Acoustical Society of America 123. 3327.
Forster, Kenneth I. 1978. Accessing the mental lexicon. In Edward Walker (ed.) Explorations in the biology of language, 139–174. Montgomery, VT: Bradford.
Frisch, Stefan A., Nathan R. Large & David B. Pisoni. 2000. Perception of wordlikeness: Effects of segment probability and length on the processing of nonwords. Journal of Memory and Language 42. 481–496.
Frisch, Stefan A., Janet B. Pierrehumbert & Michael B. Broe. 2004. Similarity avoidance and the OCP. Natural Language and Linguistic Theory 22. 179–228.
Garlock, Victoria M., Amanda C. Walley & Jamie L. Metsala. 2001. Age-of-acquisition, word frequency, and neighborhood density effects on spoken word recognition by children and adults. Journal of Memory and Language 45. 468–492.
Goldrick, Matthew, Jocelyn R. Folk & Brenda Rapp. 2010. Mrs. Malaprop’s neighborhood: Using word errors to reveal neighborhood structure. Journal of Memory and Language 62. 113–134.
Gordon, Jean K. 2002. Phonological neighborhood effects in aphasic speech errors: Spontaneous and structured contexts. Brain and Language 82. 113–145.
Grainger, Jonathan. 1990. Word frequency and neighborhood frequency effects in lexical decision and naming. Journal of Memory and Language 29. 228–244.
Grainger, Jonathan. 2008. Cracking the orthographic code: An introduction. Language and Cognitive Processes 23. 1–35.
Grainger, Jonathan & Juan Segui. 1990. Neighborhood frequency effects in visual word recognition: A comparison of lexical decision and masked identification latencies. Perception and Psychophysics 47. 191–198.
Grainger, Jonathan, J. Kevin O’Regan, Arthur M. Jacobs & Juan Segui. 1989. On the role of competing word units in visual word recognition: The neighborhood frequency effect. Perception and Psychophysics 45. 189–195.
Greenberg, Joseph H. & James J. Jenkins. 1964. Studies in the psychological correlates of the sound system of American English. Word 20. 157–177.
Gruenenfelder, Thomas M. & David B. Pisoni. 2009. The lexical restructuring hypothesis and graph theoretic analyses of networks based on random lexicons. Journal of Speech, Language, and Hearing Research 52. 596–609.
Hahn, Ulrike & Todd M. Bailey. 2005. What makes words sound similar? Cognition 97. 227–267.
Hall, Kathleen Currie. 2005. Defining phonological rules over lexical neighbourhoods: Evidence from Canadian raising. Proceedings of the West Coast Conference on Formal Linguistics 24. 191–199.
Huttenlocher, D. P. & Victor W. Zue. 1984. A model of lexical access from partial phonetic information. Paper presented at IEEE International Conference on Acoustics, Speech and Signal Processing, San Diego.
Jusczyk, Peter W. & Paul A. Luce. 2002. Speech perception and spoken word recognition: Past and present. Ear and Hearing 23. 2–40.
Jusczyk, Peter W., Paul A. Luce & Jan Charles-Luce. 1994. Infants’ sensitivity to phonotactic patterns in the native language. Journal of Memory and Language 33. 630–645.
Kim, Jeesun, Chris Davis & Phill Krins. 2004. Amodal processing of visual speech as revealed by priming. Cognition 93. B39–B47.
Kirk, Karen Iler, David B. Pisoni & Mary Joe Osberger. 1995. Lexical effects on spoken word recognition by pediatric cochlear implant users. Ear and Hearing 16. 470–481.
Lachs, Lorin, Jonathan W. Weiss & David B. Pisoni. 2000. Use of partial stimulus information by cochlear implant users and listeners with normal hearing in identifying spoken words: Some preliminary analyses. The Volta Review 102. 303–320.
Landauer, Thomas K. & Lynn A. Streeter. 1973. Structural differences between common and rare words: Failure of equivalence assumptions for theories of word recognition. Journal of Verbal Learning and Verbal Behavior 12. 119–131.
Levelt, Willem J. M., Ardi Roelofs & Antje S. Meyer. 1999. A theory of lexical access in speech production. Behavioral and Brain Sciences 22. 1–38.
Lindblom, Björn. 1990. Explaining phonetic variation: A sketch of the H&H theory. In W. J. Hardcastle & A. Marchal (eds.) Speech production and speech modeling, 403–439. Dordrecht: Kluwer.
Lisker, Leigh & Arthur S. Abramson. 1964. A cross-language study of voicing in initial stops: Acoustical measurements. Word 20. 384–422.
Luce, Paul A. 1986. Neighborhoods of words in the mental lexicon. Ph.D. dissertation, Indiana University.
Luce, Paul A. & David B. Pisoni. 1998. Recognizing spoken words: The neighborhood activation model. Ear and Hearing 19. 1–36.
Luce, Paul A., Stephen D. Goldinger, Edward T. Auer, Jr. & Michael S. Vitevitch. 2000. Phonetic priming, neighborhood activation and PARSYN. Perception and Psychophysics 62. 615–625.
Magnuson, James S., James A. Dixon, Michael K. Tanenhaus & Richard N. Aslin. 2007. The dynamics of lexical competition during spoken word recognition. Cognitive Science 31. 133–156.
Marslen-Wilson, William & A. Welsh. 1978. Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology 10. 29–63.
Massaro, Dominic W. 1987. Speech perception by ear and eye. In Barbara Dodd & Ruth Campbell (eds.) Hearing by eye: The psychology of lip-reading, 53–84. Mahwah, NJ: Lawrence Erlbaum.
Massaro, Dominic W. 1998. Perceiving talking faces: From speech perception to a behavioral principle. Cambridge, MA: MIT Press.
Massaro, Dominic W. & Michael M. Cohen. 1995. Perceiving talking faces. Current Directions in Psychological Science 4. 104–109.
Massaro, Dominic W. & David G. Stork. 1998. Speech recognition and sensory integration: A 240-year-old theorem helps explain how people and machines can integrate auditory and visual information to understand speech. American Scientist 86. 236–244.
Mattys, Sven L., Lynne E. Bernstein & Edward T. Auer, Jr. 2002. Stimulus-based lexical distinctiveness as a general word-recognition mechanism. Perception and Psychophysics 64. 667–679.
McGurk, Harry & J. MacDonald. 1976. Hearing lips and seeing voices. Nature 264. 746–748.
Metsala, Jamie L. 1997. An examination of word frequency and neighborhood density in the development of spoken-word recognition. Memory and Cognition 25. 47–56.
Miller, George A. & Patricia Nicely. 1955. An analysis of perceptual confusions among some English consonants. Journal of the Acoustical Society of America 27. 338–352.
Morton, John. 1969. Interaction of information in word recognition. Psychological Review 76. 165–178.
Morton, John. 1979. Word recognition. In John Morton & J. C. Marshall (eds.) Structures and processes, 109–156. Cambridge, MA: MIT Press.
Munson, Benjamin. 2007. Lexical access, lexical representation & vowel articulation. In Jennifer Cole & José Ignacio Hualde (eds.) Laboratory phonology 9, 201–228. Berlin & New York: Mouton de Gruyter.
Munson, Benjamin & Nancy P. Solomon. 2004. The effect of phonological neighborhood density on vowel articulation. Journal of Speech, Language, and Hearing Research 47. 1048–1058.
Munson, Benjamin, Cyndie L. Swenson & Shayla C. Manthei. 2005. Lexical and phonological organization in children: Evidence from repetition tasks. Journal of Speech, Language, and Hearing Research 48. 108–124.
Newman, Rochelle S. & Diane J. German. 2002. Effects of lexical factors on word naming among normal-learning children and children with word-finding disorders. Language and Speech 43. 285–317.
Newman, Rochelle S., James R. Sawusch & Paul A. Luce. 1997. Lexical neighborhood effects in phonetic processing. Journal of Experimental Psychology: Human Perception and Performance 23. 873–889.
Norris, Dennis. 1994. Shortlist: A connectionist model of continuous speech recognition. Cognition 52. 189–234.
Norris, Dennis, James M. McQueen & Anne Cutler. 2000. Merging information in speech recognition: Feedback is never necessary. Behavioral and Brain Sciences 23. 299–370.
Oldfield, R. C. 1966. Things, words and the brain. Quarterly Journal of Experimental Psychology 18. 340–353.
Pierrehumbert, Janet B. 2001. Exemplar dynamics: Word frequency, lenition and contrast. In Joan Bybee & Paul Hopper (eds.) Frequency and the emergence of linguistic structure, 137–157. Amsterdam & Philadelphia: John Benjamins.
Pisoni, David B. & Robert E. Remez (eds.) 2005. The handbook of speech perception. Malden, MA & Oxford: Blackwell.
Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell.
Pylkkänen, Liina & Alec Marantz. 2003. Tracking the time course of word recognition with MEG. Trends in Cognitive Sciences 7. 187–189.
Pylkkänen, Liina, Andrew Stringfellow & Alec Marantz. 2002. Neuromagnetic evidence for the timing of lexical activation: An MEG component sensitive to phonotactic probability but not to neighborhood density. Brain and Language 81. 666–678.
Rapp, Brenda & Matthew Goldrick. 2000. Discreteness and interactivity in spoken word production. Psychological Review 107. 460–499.
Rosenblum, Lawrence D. 2005. Primacy of multimodal speech perception. In Pisoni & Remez (2005), 51–78.
Roux, Sébastien & Patrick Bonin. 2009. Neighborhood effects in spelling in adults. Psychonomic Bulletin and Review 16. 369–373.
Sage, Karen & Andrew W. Ellis. 2006. Using orthographic neighbours to treat a case of graphemic buffer disorder. Aphasiology 20. 851–870.
Scarborough, Rebecca A. 2004. Coarticulation and the structure of the lexicon. Ph.D. dissertation, University of California, Los Angeles.
Sears, Chris R., Yasushi Hino & Stephen J. Lupker. 1995. Neighborhood frequency and neighborhood size effects in visual word recognition. Journal of Experimental Psychology: Human Perception and Performance 21. 876–900.
Shipman, David W. & Victor W. Zue. 1982. Properties of large lexicons: Implications for advanced isolated word recognition systems. Paper presented at IEEE 1982 International Conference on Acoustics, Speech and Signal Processing.
Storkel, Holly L. 2002. Restructuring of similarity neighborhoods in the developing mental lexicon. Journal of Child Language 29. 251–274.
Storkel, Holly L. 2004a. Methods for minimizing the confounding effects of word length in the analysis of phonotactic probability and neighborhood density. Journal of Speech, Language, and Hearing Research 47. 1454–1468.
Storkel, Holly L. 2004b. Do children acquire dense neighborhoods? An investigation of similarity neighborhoods in lexical acquisition. Journal of Applied Psycholinguistics 25. 201–221.
Storkel, Holly L. 2009. Developmental differences in the effects of phonological, lexical and semantic variables on word learning by infants. Journal of Child Language 36. 291–321.
Storkel, Holly L., Jonna Armbrüster & Tiffany P. Hogan. 2006. Differentiating phonotactic probability and neighborhood density in adult word learning. Journal of Speech, Language, and Hearing Research 49. 1175–1192.
Sumby, William H. & Irwin Pollack. 1954. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26. 212–215.
Summerfield, A. Quentin. 1987. Some preliminaries to a comprehensive account of audiovisual speech perception. In Barbara Dodd & Ruth Campbell (eds.) Hearing by eye: The psychology of lip-reading, 3–52. Mahwah, NJ: Lawrence Erlbaum.
Svirsky, Mario A., Amy M. Robbins, Karen Iler Kirk, David B. Pisoni & Richard T. Miyamoto. 2000. Language development in profoundly deaf children with cochlear implants. Psychological Science 11. 153–158.
Tye-Murray, Nancy, Mitchell S. Sommers & Brent Spehar. 2007. Auditory and visual lexical neighborhoods in audiovisual speech perception. Trends in Amplification 11. 233–241.
Vitevitch, Michael S. 2002a. Influence of onset density on spoken word recognition. Journal of Experimental Psychology: Human Perception and Performance 28. 270–278.
Vitevitch, Michael S. 2002b. Naturalistic and experimental analyses of word frequency and neighborhood density effects in slips of the ear. Language and Speech 45. 407–434.
Vitevitch, Michael S. 2002c. The influence of phonological similarity neighborhoods on speech production. Journal of Experimental Psychology: Learning, Memory and Cognition 28. 735–747.
Vitevitch, Michael S. 2007. The spread of the phonological neighborhood influences spoken word recognition. Memory and Cognition 35. 166–175.
Vitevitch, Michael S. 2008. What can graph theory tell us about word learning and lexical retrieval? Journal of Speech, Language, and Hearing Research 51. 408–422.
Vitevitch, Michael S. & Paul A. Luce. 1998. When words compete: Levels of processing in perception of spoken words. Psychological Science 9. 325–329.
Vitevitch, Michael S. & Paul A. Luce. 1999. Probabilistic phonotactics and neighborhood activation in spoken word recognition. Journal of Memory and Language 40. 374–408.
Vitevitch, Michael S. & Eva Rodríguez. 2005. Neighborhood density effects in spoken word recognition in Spanish. Journal of Multilingual Communication Disorders 3. 64–73.
Vitevitch, Michael S. & Mitchell S. Sommers. 2003. The facilitative influence of phonological similarity and neighborhood frequency in speech production in younger and older adults. Memory and Cognition 31. 491–504.
Vitevitch, Michael S., Paul A. Luce, Jan Charles-Luce & David Kemmerer. 1997. Phonotactics and syllable stress: Implications for the processing of spoken nonsense words. Language and Speech 40. 47–62.
Vitevitch, Michael S., Paul A. Luce, David B. Pisoni & Edward T. Auer, Jr. 1999. Phonotactics, neighborhood activation and lexical access for spoken words. Brain and Language 68. 306–311.
Vitevitch, Michael S., Jonna Armbrüster & Shinyung Chu. 2004. Sub-lexical and lexical representations in speech production: Effects of phonotactic probability and onset density. Journal of Experimental Psychology: Learning, Memory and Cognition 30. 514–529.
Walley, Amanda C. 1993. The role of vocabulary development in children’s spoken word recognition and segmentation ability. Developmental Review 13. 286–350.
Wright, Richard A. 2004. Factors of lexical competition in vowel articulation. In John Local, Richard Ogden & Rosalind Temple (eds.) Phonetic interpretation: Papers in laboratory phonology VI, 26–50. Cambridge: Cambridge University Press.
Yarkoni, Tal, David Balota & Melvin Yap. 2008. Moving beyond Coltheart’s N: A new measure of orthographic similarity. Psychonomic Bulletin and Review 15. 971–979.

88 Derived Environment Effects

Luigi Burzio

1 Introduction

Derived environments often exhibit peculiar phonological properties. Notable effects can be identified relative to various senses of “derived” in the expression “derived environment” (DE). In all cases, “environment” refers to some phonological context. Such environment or context can be “derived” in a phonological sense, by virtue of some phonological process having applied to obtain it or, in a morphological sense, by virtue of it being the result of the combination of morphemes or other morphological operation. The following sections will first review the different cases from a descriptive point of view, and then turn to their respective theoretical accounts. §2 and §3 review the basic facts, arguing that there are three subcases overall. §4 turns to contemporary analyses of the two better-known subcases, while §5 reviews pre-Optimality Theory accounts of the same. §6 presents an account of the third subcase.

2 Phonologically derived environments

Environments that are derived by some phonological process can differ from underived environments relative to further phonological processes. An English example of this effect is provided by the following contrasting pair.1

(1)       verb        -able adjective
    a.    ˈremedy     reˈmeːdiable
    b.    ˈlevy       ˈleviable

1 The symbol ː is used here and elsewhere in orthographic forms to refer to a long vowel. Stress is also marked in such forms. In (1) the vowel is diphthongized in accordance with the Early Modern English Great Vowel Shift, here the diphthong [ij].

In (1) we assume that in both cases the calculation of the adjective proceeds from the surface form of the verb concatenated with the suffix -able. Such calculation would find a stage in which the position of the stress is changed in (1a) (ˈremedy → reˈmedi . . .), but not in (1b). This phonologically derived status of (1a) correlates with the occurrence of vowel lengthening in the adjective in (1a), compared with its absence in (1b). It is clear that it is the restressing that licenses vowel lengthening, and not the opposite, because the restressing is independently predictable. Specifically, the stress of both adjectives in (1) simply conforms with the English norm, by which the rightmost stress can be at most antepenultimate, but a syllable like ble (= [bl̩], with syllabic l) can evade syllable count (Burzio 1994), hence re.ˈme.di.a, ˈle.vi.a. The lengthening of (1a) reflects a process of English phonology which is relatively regular aside from its non-occurrence in cases like (1b), affecting vowels in the context / __ CiV (cf. Bost[ow]nian, Can[ej]dian, etc.), except for the vowel i, which is immune, e.g. Palestinian (see Chomsky and Halle 1968: 47; Halle and Mohanan 1985). Non-restressing cases like (1b) confirming the generalization are ˈbury / ˈburial, not *b[ij]rial; Maˈlawi / Maˈlawian, not *Mal[ej]wian; Kenˈtucky / Kenˈtuckian, not *Kent[uw]ckian, although exceptions – which I will put aside – exist both ways (Wolf 2008: 302), e.g. restressing but short Triniˈdadian, Iˈtalian, Chauˈcerian, Heˈgelian, and non-restressing but long Alab[ej]mian, Bah[ej]mian.

In addition, both adjectives in (1) constitute environments that are derived morphologically, by way of affixation. The latter is evidently not relevant to the phenomenon at hand, else no difference between (1a) and (1b) would be expected. Finally, note as well that characterization of the difference between (1a) and (1b) would not be possible unless it was indeed the surface form of the verb that enters into the calculation of the adjective. Direct calculations from bare-bones underlying representations (chapter 1: underlying representations) containing no stress information would predict no difference, since there would then be no restressing in either case, just regular assignment of stress.

In sum, assuming the surface forms of the verbs are relevant in (1), the adjective in (1a) undergoes phonological restressing, unlike the one in (1b); the latter thus represents a case of phonologically non-derived environment blocking (henceforth NDEB) relative to the process of CiV lengthening.

The literature documents several other cases of this general type across a significant spectrum of languages. Łubowicz (1999, 2002) reports the cases in (2a) to (2d) below, to which we may add the Finnish case in (2e). The original sources for these data are Rubach (1984) for Polish; Kenstowicz and Rubach (1987) and Rubach (1993, 1995) for Slovak; Bolognesi (1998) for Campidanian Sardinian; Prince (1975) for Tiberian Hebrew; and Kiparsky (1973a, 1993) for Finnish.

(2)                                                          Processes              General process / DE-only process
    a. Polish
       i.   kro[k] / kro[tʃ]-ek 'step / little step'         k → tʃ                 Velar palatalization
       ii.  dron[g] / dron[ʒ]-ek 'pole / little pole'        g → dʒ → ʒ             Spirantization of voiced palatal affricates
       iii. bri[dʒ] / bri[dʒ]-ek 'bridge / little bridge'    dʒ → *ʒ (underived)
    b. Slovak
       i.   lop[a]t-a / lop[aː]t 'shovel-nom sg / gen pl'    a → aː                 Vowel lengthening
       ii.  kazet-a / kaz[ie]t 'box-nom sg / gen pl'         e → eː → ie            Diphthongization of long [eː]
       iii. dc[eː]ra 'daughter'                              eː → *ie (underived)
    c. Campidanian Sardinian
       i.   [f]amilia / sa [v]amilia 'family / the family'   f → v                  Post-vocalic voicing of obstruents
       ii.  pisci / belu [β]isci 'fish / nice fish'          p → b → β              Post-vocalic spirantization
       iii. [b]ia / sa [b]ia 'road / the road'               b → *β (underived)
    d. Tiberian Hebrew
       i.   ktabˈtem / kaːˈtab 'we-masc write / he writes'   a → aː                 Pre-tonic, open syllable lengthening
       ii.  ʃimˈkaː / ʃeːˈm-ot 'your names / names'          i → iː → eː            High long V lowering
       iii. qiːˈtoːr 'smoke'                                 iː → *eː (underived)
    e. Finnish assibilation
       i.   joke-na / joki 'river-ess / nom'                 e → i                  e-raising, word-finally
       ii.  vete-nä / vesi 'water-ess / nom'                 te → ti → si           t-assibilation before i
       iii. äiti-nä / äiti 'mother-ess / nom'                ti → *si (underived)

In each of the cases in (2), row (i) documents the existence of a process that occurs independently of whether or not the environment is phonologically derived, analogously to the restressing in (1a) above. Then, row (ii) documents the existence of a second process that occurs when the first process has applied, analogously to the CiV lengthening of (1a), while row (iii) further documents the fact that the second process does not apply unless the first one has (NDEB), analogously to the failed CiV lengthening of (1b) above. Hence, in Polish, voiced palatal affricates spirantize to fricatives only when they are derived from velar stops before front vowels (chapter 121: slavic palatalization); in Slovak, long /eː/ diphthongizes to [ie] only if derived via a lengthening process (induced by specific affixes); in the Sardinian case, voiced stops spirantize only when they are derived via post-vocalic voicing; in Tiberian Hebrew, high long vowels lower only when they are derived via a lengthening process; and in Finnish, t assibilates to s before i, but only when the latter is derived from e. In sum, certain phonological changes appear to occur only in conjunction with other specific phonological changes, and not by themselves (NDEB).
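The logic of a case like (2e) can be stated procedurally, as in the minimal sketch below: raising creates an i, and assibilation is permitted to see only an i so created. The orthographic forms and the bookkeeping device of marking derived positions are illustrative simplifications, not a claim about the actual grammar.

def finnish_nominative(stem):
    """Word-final e-raising feeds assibilation; assibilation sees only the derived i."""
    derived = set()
    if stem.endswith("e"):
        stem = stem[:-1] + "i"          # e -> i word-finally
        derived.add(len(stem) - 1)      # record that this i is derived
    out = list(stem)
    for j, ch in enumerate(out):
        if ch == "t" and j + 1 in derived and out[j + 1] == "i":
            out[j] = "s"                # t -> s, but only before a derived i
    return "".join(out)

for stem in ["joke", "vete", "äiti"]:
    print(stem, "->", finnish_nominative(stem))
# joke -> joki; vete -> vesi; äiti -> äiti (underlying ti is untouched)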

3 Morphologically derived environments

Phonological processes may be conditioned not only by other phonological processes, but by morphological processes or structure as well. Two subcases need to be distinguished, and are reviewed in turn below. In one subcase, the context of application of the phonological process spans across morphemes, as in criti[s]-ism, where the velar of criti[k] softens before the i, across a morpheme boundary. In the other subcase, the phonological process occurs in environments that are morphologically derived, but without reference to the specifics of the morphological structure. For instance, the nouns or adjectives altern[ə]te, moder[ə]te, design[ə]te are presumably derived from the corresponding verbs in -aːte, but without any overt morphology. Hence the vowel shortening in these cases must depend purely on derived status, without reference to any particular morphological material or boundaries.

3.1 Boundary contexts

Kiparsky (1973a, 1993) notes that, in addition to the case in (2) above, Finnish assibilation also displays the pattern in (3).

(3)  a.  halut-a  'want-inf'     halus-i  'want-past'
     b.  tilat-a  'order-inf'    tilas-i  'order-past'
     c.  äiti     'mother'

Here, assibilation turns t to s before i, but only when the latter belongs to a different morpheme, as in each of (3a) and (3b). In particular, the failed assibilation in ti of tilas-i (3b) shows that being in a morphologically derived form is not sufficient, and that it is necessary for the assibilation environment itself to be created morphologically. The non-assibilating form in (3c) establishes that assibilation does not just single out final syllables, confirming the relevance of the derived environment.
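The same condition can be stated procedurally, as in the minimal sketch below, where "+" marks a morpheme boundary and assibilation applies only across it; the segmentations follow (3), and the notation is an illustrative simplification rather than an analysis.

import re

def assibilate(form):
    """t -> s immediately before a suffixal i; '+' marks the morpheme boundary."""
    return re.sub(r"t(?=\+i)", "s", form)

for form in ["halut+i", "tilat+i", "äiti"]:
    print(form, "->", assibilate(form).replace("+", ""))
# halut+i -> halusi; tilat+i -> tilasi (stem-internal ti survives); äiti -> äiti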

The already noted English velar softening (chapter 71: palatalization), illustrated in (4), appears to be similarly restricted to morphologically derived environments.

(4)  a.  Derived     /k/ → [s]   critic / critic-ism; electric / electric-ity; opaque / opac-ity
     b.  Underived   [k]         kinetic, kidney, kitchen, Viking, . . .

I note here the item kinematic-ity, which constitutes a proper counterpart to Finnish tilas-i in (3b), velar softening affecting only the [k] adjacent to the morpheme boundary, and not the initial one. A complicating factor in this classification is that velar softening has exceptions, like monarch-ist, anarch-ist, and several others, raising the possibility that items like (4b) may in fact also just be lexical exceptions, rather than being indicative of NDEB. Hence in this case, as in others below, classification will be dependent on the choice of analyses. I will, nonetheless, assume that items like those in (4b) are not lexical exceptions, and therefore that the classification of velar softening as operating only across morpheme boundaries like Finnish assibilation is correct. In addition, and with the same caveats, I will
assume that each of the phenomena in (5) below also instantiates that same type of NDEB, while the references cited in each case may be consulted for specific analyses.

(5) Further cases of NDEB, for DEs including morpheme boundaries
    a. Korean palatalization (Kiparsky 1993; Iverson and Wheeler 1988)
       /hæ tot-i/ → [hæ doj-i] '(sun)rise-nom'         underived: /mati/ 'knot'
    b. Polish velar palatalization (Łubowicz 2002; Rubach 1984)
       /krok-i-ć/ → [krotʃ-ɨ-ć] 'to step'              underived: /k'iɕel/ 'jelly'
    c. Polish dental palatalization (Kenstowicz 1994; Rubach 1984)
       /serwis-e/ → [serwiɕ-e] 'auto service-loc'      underived: /serwis/ (nom)
    d. Pre-coronal laminalization in Chumash (Kiparsky 1993; Poser 1982, 1993)
       /s-tepuʔ/ → [ʃ-tepuʔ] 'he gambles'              underived: /stumukun/ 'mistletoe'
    e. Sanskrit (ruki rule) retroflexion after r, u, k, i (Kiparsky 1973a, 1993)
       /agni-su/ → [agni-ʂu] 'fire-dat pl'             underived: /kisalaja/ 'sprout'
    f. Indonesian nasal substitution (Pater 1999)
       /məŋ-pilih/ → [məm-ilih] 'to choose'            underived: /əmpat/ 'four'
    g. Finnish cluster assimilation (Kiparsky 1973a)
       /pur-nut/ → [purrut] 'bitten'                   underived: /horna/ 'hell'
    h. Mohawk epenthesis (Kiparsky 1973a)
       /k-wi'stos/ → [kewi'stos] 'I am cold'           underived: /rúːkweh/ 'man'

In each of the cases in (5), the morphologically derived environment affected by the change is compared with an otherwise identical but non-derived environment in which the change fails to occur (the right-hand forms). In all cases, the morphologically derived environment includes a morpheme boundary.

3.2 Non-boundary contexts

Turning now to cases where morphologically derived status appears to make a difference without implicating material contributed by the morphology, English vowel shortening, e.g. as in diˈvin-ity vs. underived ˈiːvory, will serve as the prototype, although its exact analysis, given below, will be critical to this role. Burzio (1993, 1994, 2000a) argued that, while tradition had focused on individual shortening processes, like the “trisyllabic” shortening of diˈvin-ity or of (6a) below, the actual generalization is in fact found over the non-shortening contexts or “exceptions,” the shortening itself being otherwise fully general. Given the variety of vowel length changes illustrated by the left-hand cases in (6), including not only shortening but also lengthening, as in (6h), separate characterization of each case would result in a colossal conspiracy. By contrast, such a distribution can receive a unitary analysis in terms of vowel length being allophonic in this sector of the lexicon, rather than contrastive, as in the rest of the language. On this view, long vowels would be disallowed in general and produced only under specific contextual demands. The factor responsible for long vowels appears to be stress, and specifically the stress that would be inherited from the respective morphological bases given in (6) in parentheses. A review of each case in (6) will drive this point home.

(6) Vowel length in the English Latinate-derived lexicon

                                     stress preserved                                   stress preserved
a. ˈnatur-al (ˈnaːture)               yes
b. oˈblig-aˌtory (obˈliːge)           yes
c. ˌdefaˈm-aːtion (deˈfaːme)          no
d. arˈticulaˌt-ory (arˈticuˌlaːte)    no
e. ˈalternateN/A (ˈalterˌnaːteV)      no
f. ˈaspir-ant (asˈpiːre)              no     deˈsiːr-ous (deˈsiːre)           yes
g. ˈgenerat-ive (ˈgeneˌraːte)         no     ˈlegisˌlaːt-ive (ˈlegisˌlaːte)   yes
h. Eˌlizaˈbeːth-an (Eˈlizabeth)       yes    Herˈculean (ˈHercules)           no

The cases in (6a) and (6b) preserve the stress of their bases, and the short vowel in each case will follow from the fact that such stress patterns are independently known not to require a long vowel. The pattern of ˈnatural is the same as that of Aˈmerica, with stress on an antepenultimate light syllable (short vowel), while that of o(ˈbliga)ˌtory is relevantly like that of (ˌAri)ˈzona, the parentheses here marking a binary foot in non-word-final position, evidently also not needing a long vowel/heavy syllable to bear stress.

Like the cases in (6a) and (6b), those in (6c)–(6e) also do not need long vowels, but for different reasons. Here, a stress on the vowel in boldface cannot be preserved from the base. Given the stress of the affixes -ˈaːtion, -ˌory (American English), which apparently must take priority, stress preservation is excluded in (6c) and (6d) by the ban on stress clashes. Similarly, given the independent fact that nouns tend to avoid the final stress of verbs (cf. perˈvertV / ˈpervertN; perˈmitV / ˈpermitN), the stress of the base verb is presumably excluded in (6e). Hence, on the “allophonic” hypothesis, vowels are short in each of (6c)–(6e), because nothing motivates the long allophones (chapter 11: the phoneme).

Matters are different in the derived forms in (6f)–(6h), where all boldface vowels are in penultimate syllables. Like other languages, English is well known to stress penultimates only if heavy, and antepenultimates otherwise (ˌA.ri.ˈzoː.na, aˈgen.da vs. Aˈme.ri.ca, ˈas.te.risk), if we put aside verbs, which require a separate discussion (Chomsky and Halle 1968; chapter 102: category-specific effects). The derived forms in (6f)–(6h) all conform with this generalization. The left/right variation in each case, however, reveals that the general mandate for vowels to be short competes evenly with preservation of stress, some outcomes favoring the former, and others the latter – a case of lexically controlled phonological variation (Burzio 2006). Lengthening as in Eˌlizaˈbeːthan in (6h) is on this analysis just like the non-shortening of deˈsiːrous in (6f) and other cases, since it allows preservation of the stress of Eˈlizabeth, albeit as a secondary. The alternative *ˌEliˈzabethan with a short e would lose that stress altogether, given again the ban on stress clashes, as in fact happens in Herˈculean of (6h) (though the equally expected ˌHercuˈleːan is also attested).

The point at issue is that long vowels occur in the Latinate lexicon only to preserve stress, though variably, as in (6f)–(6h), and are banned otherwise, ignoring occasional cases like oˈbeːsity, which are exceptions to the pattern in (6a), but are an effect of different granularity from the variation in (6f)–(6h), which is very robust (giving roughly a 50–50 split; Burzio 1993, 2000a). The conclusion holds that in this morphologically derived domain vowels are required to be short, except, and variably, under stress demands. It is therefore this general requirement, statable as *Vː, that “blocks” in environments that are not so derived, e.g. ˈdiːnosaur, ˈiːvory, etc.
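The logic of the allophonic analysis can be summarized procedurally. The sketch below is my own schematic rendering, not Burzio's formalism; the two boolean parameters are assumptions standing in for the full metrical computation.

def vowel_length(stress_preservable, stress_needs_heavy_syllable):
    """Long vowels arise only in the service of preserving base stress."""
    if stress_preservable and stress_needs_heavy_syllable:
        return "long"   # e.g. de'si:r-ous, E,liza'be:th-an
    return "short"      # the general case, enforced by *V:

# (6a) 'natur-al: stress preserved, but antepenultimate stress needs no weight
print(vowel_length(True, False))   # short
# (6c) ,defa'm-a:tion: clash with -'a:tion blocks preservation altogether
print(vowel_length(False, False))  # short
# (6f) de'si:r-ous: keeping the base stress on a penult requires weight
print(vowel_length(True, True))    # long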


The cases listed below would appear to be of the same a-contextual type as argued for the English cases in (6), while, again, the specific analyses may ultimately be critical to the correct classification.

(7) Further cases of NDEB, for DEs not including morpheme boundaries
a. Italian participles (Burzio 1998)
   as.cen.dere ‘ascend’   as.ce.so ‘ascended’   (less marked syllable)
b. Turkish disyllabicity condition (Inkelas and Orgun 1995: 770)
   ham ‘unripe’   *fa-n ‘(note) fa-2sg poss’   fa-dan ‘fa (note)-abl’   (avoidance of marked prosodic structure)
c. Japanese two-mora requirement (Itô 1990; Kiparsky 1993)
   su ‘vinegar’   choko ‘chocolate (truncation)’ *cho   (two-mora requirement satisfied)
d. Catalan stressed vowel lowering (Mascaró 1976; Kiparsky 1993: 293)
   ˈsentrə ‘center’   ˈs. . .

. . . the ranking “CC-Ident3[cont] >> IO-Ident[cont] >> CC-Ident2[cont],” where CC-Ident[cont] refers to the assimilatory pressure in continuancy, and the superscripts give a measure of the pressure in terms of entailments (attraction), translated into OT ranking. Note that no commitment is made here as to the relative strength of individual entailments (only that summation of like entailments will always yield greater strength), and hence no prediction is made that agreement on some other feature (e.g. voicing) must have the same triggering effect as agreement in place in (9), or that any assimilation will necessarily bring about further assimilation by a domino effect.

The case in (9) is by no means unique. It is in fact virtually duplicated by English nasal place assimilation, where the failed assimilation of in-famous calls for the assumption that identity in place automatically induces identity in continuancy, just as in (9). This assumption is needed to correctly exclude *im-famous (same place but different continuancy; cf. im-possible), while the alternative *i[ɱ]-famous will be excluded by the ban on nasal fricatives in English. Furthermore, Wayment (2009) argues, on a broad empirical basis, that all assimilatory phenomena are parasitic in the manner of (9), and thus all involve attraction, as just described. However, sequential proximity also appears to contribute to overall similarity and hence to attraction, so that, when proximity is maximal as in (9), the prerequisite similarity in features may be slight and not immediately detectable, while long-distance phenomena, like long-distance consonant assimilation (Hansson 2001; Rose and Walker 2004; chapter 77: long-distance assimilation of consonants) and vowel harmony (van der Hulst and van de Weijer 1995; chapter 91: vowel harmony: opaque and transparent vowels; chapter 110: metaphony in romance; chapter 118: turkish vowel harmony; chapter 123: hungarian vowel harmony), will predictably exhibit more robust feature-based similarity as a triggering condition, as the references just cited confirm.

Besides assimilations, Burzio argues that attraction underlies further morphophonological phenomena: in particular, that it can reconstruct the “dispersion” account of segmental inventories (Liljencrants and Lindblom 1972; Flemming 1995; chapter 2: contrast), as maximal distance among members of the inventory corresponds to minimal attraction/entailment violation. In addition, under the reduction of contextual neutralization effects to dispersion principles advocated by Steriade (2009) and Flemming (2008), attraction would also speak to those effects in turn. In particular, segmental neutralizations (like coda devoicing) would occur in those environments that attenuate critical perceptual cues, as argued by Steriade (1994, 1999, 2009), and thus compromise distance from the nearest attractor in the inventory. The latter attractor will then exert its influence, neutralizing the contrast. Attraction thus represents a formal alternative to Steriade’s (2009) “perceptual map” and, as argued in Burzio (2000b), is a variant interpretation of Wilson’s (2000, 2001) “targeted constraints,” which are also a formal alternative to Steriade’s perceptual map. Attraction has also been argued to underlie morphological syncretisms (also a type of neutralization; see Burzio 2005, 2007; Burzio and Tantalou 2007).

From this point of view, the NDEB of (1) would then be naturally interpreted as another attraction effect, parallel, in fact, to the one in (9). While the relevant relation in (9) is between two segments in the same sequence, the one in (1) is between the adjective’s stem and the corresponding verb. Just as in (9), identity in one respect, here stress, results in identity in another, here vowel length, as in (1b). The REH and attraction have in fact been argued to subsume the general OT notion of “faithfulness” (chapter 63: markedness and faithfulness constraints), and at the same time also to derive the set of relations to which faithfulness has been shown to apply (input–output; base–derivative; base–reduplicant; paradigms; similarity of consonants). Similarity can define all such relations, with sequential adjacency/proximity also contributing to similarity, as noted (Wayment 2009). However, morphology also plays a role in further defining faithfulness (§6). In OT notation, the attraction accounts of (9) and (1) may then be rendered as in the parallel (11a) and (11b), respectively.

(11) a. CC-Ident+[cont]    >>  IO-Ident[cont]  >>  CC-Ident[cont]
     b. Ident+(V-length)   >>  *V̆CiV           >>  Ident(V-length)

The schema in (11a) is the same as the one given above, except that the superscripts “3” and “2” have been replaced with just “+” and no superscript, respectively. In (11b), Ident(V-length) applies between input (verb) and output (adjective) in (1). As in (11a), the “+” version corresponds to the stronger attraction, here due to identity in stress. The second constraint in (11b) promotes a long vowel before “CiV” and prevails in (1a) by competing only with the lower-ranked Ident (weaker attraction), but is ineffective in (1b), where the higher-ranked Ident is involved. The attraction account of (1) as in (11b) seems applicable in transparent ways to the other cases of this type cited in (2) and (3) above.
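As a concrete rendering of (11b), the sketch below implements ranked constraint evaluation as lexicographic comparison of violation vectors; the candidate encoding and the assumption that the base vowel is short are mine, for illustration only.

# Ranking of (11b): Ident+(V-length) >> *V̆CiV >> Ident(V-length).
# Candidates record whether base stress is kept and whether the pre-CiV
# vowel has been lengthened relative to the (short-vowel) base.

def evaluate(candidates, ranking):
    """Winner = candidate with the lexicographically least violation vector."""
    return min(candidates, key=lambda c: [con(c) for con in ranking])

ident_plus = lambda c: int(c["stress_kept"] and c["lengthened"])     # Ident+(V-length)
no_short_CiV = lambda c: int(not c["lengthened"])                    # *V̆CiV
ident = lambda c: int(not c["stress_kept"] and c["lengthened"])      # Ident(V-length)

ranking = [ident_plus, no_short_CiV, ident]

# (1a) remedi-able: restressed, so only weak Ident opposes lengthening
print(evaluate([{"stress_kept": False, "lengthened": True},
                {"stress_kept": False, "lengthened": False}], ranking))
# -> the lengthened candidate wins (re'me:di-able)

# (1b) levi-able: stress kept, so strong Ident+ blocks lengthening
print(evaluate([{"stress_kept": True, "lengthened": True},
                {"stress_kept": True, "lengthened": False}], ranking))
# -> the short candidate wins ('levi-able)

Lexicographic comparison here simply stands in for strict domination of constraints.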

4.1.2 The “local conjunction” approach

An account of NDEB based on the formal device of local conjunction (LC) of constraints in OT (chapter 62: constraint conjunction) is proposed in Łubowicz (1999, 2002). When deployed in a case like (1), such an account would feature the hierarchy in (12), which can be compared with the one in (11b).

(12) [*V̆CiV & Ident(stress)] >> Ident(V-length) >> *V̆CiV

The conjunction in (12) combines one faithfulness constraint, Ident(stress), and one markedness constraint, *V̆CiV. Simultaneous violation of both conjuncts would thus invoke the higher-ranked conjunction, forcing the repair as in the stress-unfaithful (1a), while the markedness constraint alone would be low-ranked, hence failing to induce the repair, as in (1b). As with the account in the previous section, stress constraints must exclude un-restressed *ˈremediable (Burzio 1994). It is easy to see that this approach also covers the cases in (2a)–(2d), as Łubowicz shows, and possibly the one in (2e) (but see below), given appropriate choices of markedness and faithfulness constraints that make up the conjunction.

The LC solution seems ingenious and has interesting typological properties, predicting that not only conjunctions of markedness and faithfulness constraints, but conjunctions of two markedness constraints, as well as conjunctions of two faithfulness constraints, should prove equally useful. These predictions seem to be supported to the extent that conjunctions of markedness constraints have frequently been proposed in the OT literature (see McCarthy 2002: 18f.; Łubowicz 2002: note 4; and, for general reviews, Fukazawa 1999), while conjunctions of faithfulness constraints have also been proposed, in particular to account for counterfeeding chain shifts (Kirchner 1996; chapter 73: chain shifts).

The LC approach cuts the empirical domain differently from the attraction-based approach. While it may relate NDEB to the other “conjunctive” effects just noted, it does not relate them to other phenomena in the domain of attraction, like the parasitic character of assimilations shown in (9). The reason a schema along the lines of (12) will be silent on (9) is that in the latter case the crucial difference, i.e. same place [. . . b#f . . .] (9a) vs. different place [. . . b#s . . .] (9b), is not one created by some process, but rather one already present in the input. Hence there is no difference in terms of IO-faithfulness that could be recruited for a LC like the one in (12).2 To make the LC approach applicable to cases like (9), we might import from the attraction framework the notion that faithfulness may also hold across segments within the same input sequence, but further arguable attraction effects (like dispersion or syncretism) would remain recalcitrant.

2 The same predicament would face McCarthy’s (2003) “Comparative Markedness” (CM) approach, which distinguishes markedness violations that are present in the input (old markedness) from those that are not (new markedness). As Łubowicz (2003) shows, this approach has similar effects to those of the LC approach. Conceivably, a higher-ranked “new” *V̆CiV could in particular apply to (1a) given the changed stress, but not to (1b). In (9), however, nothing is independently “new” in (a) any more than in (b), yielding no account in these terms. Further difficulties with CM are noted in Wolf (2008: §4.2.3).

Other cases, like those in (13), from Burzio (2002b), may, like the one in (9), also involve differences without an actual process, and may thus also pose a challenge to the LC approach.

(13)    base             derivative: more similar   derivative: less similar   interacting dimensions
a.      comˈpare         comˈparable                ˈcomparable                stress; semantics
b.      diˈvide          diˈvidable                 diˈvisible                 vowel length; segmentism
c.      apˈply, deˈny    deˈniable                  ˈapplicable                vowel length; segmentism
d.      ˈlarynx          ˈlarynxes                  laˈrynges                  stress; segmentism

The case in (13a) features, in the “less similar” column, an idiosyncratic semantic change, as the derived adjective ˈcomparable means “roughly equal” rather than “able to be compared,” while the cases in (13b)–(13d) exhibit idiosyncratic segmental changes. In each case, such changes cluster with other changes, in stress, vowel length, or both. These further changes are then roughly predictable, though space prevents a full discussion here. For instance, the form laˈrynges displays regular stress (heavy penultimate) rather than the stress of its base. By comparison, the same changes “block” in the absence of the first type of change, as shown in the “more similar” column. Such cases of NDEB are covered by attraction, which responds to difference/distance regardless of its source. By contrast, the LC approach will require an extension of the formal notion of faithfulness to the domain of suppletion and semantic idiosyncrasy.

The two approaches reviewed so far may also differ in the degree of locality that they impose on possible interactions. The LC approach enforces the locality required by the notion of “local conjunction.” For instance, LC would be applicable to (1a), due to the fact that both changes, in stress and vowel length, occur in the same vowel. The theory of LC is not fully explicit, however, on what exactly counts as a local domain (Smolensky 1997), and we may note that in fact not all cases of phonological NDEB involve the same segment. For instance, the two relevant changes in the Finnish case in (2e) vete → veti → vesi occur in different segments. See, however, Łubowicz’s (2002: note 29) appeal to an alternative analysis that would bring this case in line. An apparently similar challenge is also posed by Sanskrit retroflexion of s after the disjunction {r, u, k, i} (Kiparsky 1973a; Kenstowicz 1994: 202; chapter 119: reduplication in sanskrit). As for the predictions of the REH on the exact range of interaction of changes, they are also not very clear at the present stage, but perhaps some inference can be drawn from the fact that the REH aims to characterize inventories as sets of attractors, combined with the independent fact that inventories are attested for segments and morphemes. If these are the attractors, then attraction effects should be observable at both of these levels. Then the cases in (2a)–(2d), and perhaps (1), would instantiate cases where attraction occurs between corresponding segments, while those in (2e) and (13) may instantiate attraction between corresponding allomorphs (note that several of the latter cases involve more significant structural changes than those in (2a)–(2d)).
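For comparison, (12) can be rendered in the same evaluation style; the encoding below is an assumed simplification in which each candidate records, for the single relevant vowel, whether it is a short pre-CiV vowel, whether its stress differs from the base, and whether it has been lengthened.

# Local conjunction [*V̆CiV & Ident(stress)]: violated only when both
# conjuncts are violated in the same local domain (here, the same vowel).

def violations(c):
    conj = int(c["short_CiV"] and c["restressed"])  # [*V̆CiV & Ident(stress)]
    ident_len = int(c["lengthened"])                # Ident(V-length)
    markedness = int(c["short_CiV"])                # *V̆CiV
    return [conj, ident_len, markedness]            # ranking of (12), left to right

# re'medi-able: stress constraints independently force restressing
remediable = [{"restressed": True, "lengthened": True,  "short_CiV": False},
              {"restressed": True, "lengthened": False, "short_CiV": True}]
# 'levi-able: no restressing, so the conjunction can never fire
leviable = [{"restressed": False, "lengthened": True,  "short_CiV": False},
            {"restressed": False, "lengthened": False, "short_CiV": True}]

print(min(remediable, key=violations))  # lengthened: the conjunction forces repair
print(min(leviable, key=violations))    # short: only low-ranked *V̆CiV objects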

4.1.3 Serial Optimality Theory

Wolf (2008) develops an approach to NDEB based on the serial OT of McCarthy (2007), in which different potential sequences of operations (“candidate chains”) are evaluated by constraints on derivations. This approach receives its primary motivation from phonological opacity (counterfeeding and counterbleeding effects; see chapter 74: rule ordering). Given, for instance, an opaque derivation like that of Serbo-Croatian /okrugl/ → okrugal → [okrugao] ‘round’ (Kenstowicz 1994: 90f.), in which vocalization of /l/ to [o] counterbleeds the epenthesis of [a] that would break up a final cluster, this approach would postulate a “precedence” constraint Prec(Dep, Ident[consonantal]) prescribing that a violation of Ident[consonantal] (i.e. vocalization) must be preceded by a violation of Dep (i.e. epenthesis), whenever the latter violation would be harmony-improving (i.e. in final CC environments). The case in (1) would then be handled in this perspective via the schema in (14).

(14) Prec(Ident(stress), Ident(V-length)) >> *V̆CiV >> Ident(V-length)


The top-ranked constraint in (14) will demand that a violation of Ident(V-length) (i.e. lengthening) be preceded in the derivation by a violation of Ident(stress) (i.e. restressing), thus blocking lengthened but un-restressed *ˈleːviable. While using different theoretical means, this requirement is parallel to that imposed by Ident+(V-length) in (11b) on the basis of the attraction approach (expressing the stronger attraction/Ident requirement if stress is unchanged), while the rest of the schema in (14) remains the same as that of (11b).

Major differences lie again in the way this approach relates to phenomena other than NDEB. As noted, it links NDEB directly with phonological opacity, while this link is more indirect in the alternative approaches. In the attraction approach, it is natural to see segmental neutralizations (like the /l/–/o/ neutralization of Serbo-Croatian) as attraction effects due to phonetic similarity. In turn, neutralizations (and perhaps other processes) have been shown to yield opacity effects within the theory of Targeted Constraints of Wilson (2000, 2001), which is arguably interpretable as a formalization of segment-level attraction effects (Burzio 2000b). As for the LC approach, it has so far been shown to provide an account of only a subset of the opacity effects, in particular, and as already noted, that of counterfeeding chain shifts, by means of a LC of faithfulness constraints (Kirchner 1996).

Like the LC approach, the serial OT approach is potentially challenged by cases where representational distance does not arise through a specific phonological process. In particular, the schema of (14) will not be applicable to the parasitic assimilation of (9), where there is no derivational step that would precede place assimilation and could account for the contrast. Likewise, (14) may not extend to cover the cases in (13), where the relevant (attraction-weakening) differences are suppletive or semantic.
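The derivational character of (14) can be sketched as a check on candidate chains. The toy encoding below (chains as flat lists of operation names) is an assumed simplification of Wolf's system, which evaluates much richer chain structures.

def prec_violations(chain, a, b):
    """Prec(A, B), simplified: every B-step not preceded by an A-step counts
    as a violation; the 'not followed by A' clause of the full definition
    is omitted here for brevity."""
    seen_a, count = False, 0
    for op in chain:
        if op == a:
            seen_a = True
        elif op == b and not seen_a:
            count += 1
    return count

# (1a) re'me:di-able: restress first, then lengthen -> Prec satisfied
print(prec_violations(["restress", "lengthen"], "restress", "lengthen"))  # 0
# (1b) *'le:vi-able: lengthening with no prior restressing -> Prec violated
print(prec_violations(["lengthen"], "restress", "lengthen"))              # 1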

4.2 Morphologically derived environments in boundary contexts

4.2.1 The entailments-based approach

Burzio (2000a) analyzes the Finnish case in (3b) above, repeated in (15a), as in (15b).

(15) a. tilat-i → tilas-i ‘order-past’
     b. Faith[ti] >> *TI (assibilation) >> Faith[t]

In (15a), the first t in tilat-i is subject to higher-ranked Faith[ti] and thus resists the effects of the assibilation constraint, while the second t is subject only to lower-ranked Faith[t], and thus undergoes the assibilation. This analysis assumes that faithfulness only applies to the form of morphemes, not to morpheme combinations. Hence the output form tilas-i in (15a) would only be faithful, independently, to the stem tilat-, and to the past tense affix -i, but not to their concatenation, making the top-ranked constraint in (15b) inapplicable to heteromorphemic t-i.

While the analysis in (15b) predates the introduction of the representational entailments of (8) above, the latter can be recruited to improve it. The reason is that the constraint Faith[ti] of (15b) will now consist of the entailment i ⇒ / t __ (i entails a preceding t, or i must be preceded by t), an entailment generated by any surface occurrence of the stem tilat- or its allomorph tilas-. The reason there is no heteromorphemic counterpart to that entailment is that affixal -i occurs in heterogeneous environments, attached to stems of all sorts. Any entailment of a preceding t would be contradicted by entailments generated by other stems that do not end in t, and thus effectively suppressed under algebraic summation of entailments. Of course one must assume that the i involved in such an entailment is not just the phoneme i of Finnish. Rather, it must be the i of tilat-, which is in turn entailed by the rest of the representation of the stem tilat-, including its semantics. It is that i which will then entail a t by transitivity of entailments, while suffixal -i will not. See Wolf (2008: 326f.) for a review of other attempts in the literature, some of which bear resemblance to the one being described.

The remaining question is still which sequences of segments may count, which now takes the form of “what entailments matter, exactly?” For instance, why shouldn’t an entailment from a to t in (15a) prevent the second t from spirantizing while allowing the first, hence yielding *silat-i? Here, the REH (8) has been shown to yield one other critical effect besides attraction: “binding,” tying together components that are relatively similar to one another. While a formal demonstration is beyond the goals of this chapter, an intuitive grasp can be attained by considering similarity as a sharing of subcomponents. Given two co-occurring components A, B, the REH will prescribe that each of A and B will entail its own internal structure, as well as each other. But when A and B are similar, the former effect will contribute to the latter, resulting in a stronger mutual entailment between A and B. See Burzio (2005: §4.5) and Wayment et al. (2007) for more discussion.

Now, if all assimilations are parasitic on similarity, as suggested above, then the “t-i → s-i” process of Finnish will suggest, along with articulatory considerations, that there is some level of similarity between t and i, with a consequent binding effect. A tautomorphemic sequence ti would then be simultaneously subject to two effects. One would be attraction, with the potential for assimilation, and the second the “binding,” or enhanced mutual entailment, of t and i, opposing assimilation and evidently sufficient to block it. On the other hand, heteromorphemic t-i would experience only attraction, thus leading to assimilation. The reason is that “binding” only describes an enhancement effect over the entailment i ⇒ / t __ , an entailment which, however, is effectively false for the heteromorphemic case, for the reasons discussed. While the OT analysis thus remains as in (15b), with Faith[ti] describing the enhanced entailment (binding) and *TI describing attraction, the “grounding” just provided narrows the range of applicability of sequential constraints of type Faith(xy), and hence the range of pathological predictions regarding segmental inventories.
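The suppression of the heteromorphemic entailment under summation can be illustrated with a toy frequency computation; this is my own illustrative stand-in for the REH's algebraic summation, and the sample contexts are invented.

from collections import Counter

def entailment_strength(preceding_segments):
    """Toy measure of the entailment 'i is preceded by t': the proportion of
    observed occurrences of this i that in fact follow a t."""
    counts = Counter(preceding_segments)
    return counts["t"] / sum(counts.values())

# The stem-internal i of tilat-: every token of the stem supplies t as context
print(entailment_strength(["t", "t", "t", "t"]))       # 1.0 -> strong Faith[ti]

# Affixal past-tense -i attaches to stems of all sorts; t-final stems are rare
print(entailment_strength(["t", "r", "l", "n", "s"]))  # 0.2 -> entailment washed out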

4.2.2 The “local conjunction” approach

Łubowicz (1999, 2002) proposes a “local conjunction” analysis of this case as well. When applied to the Finnish case in (15a), that analysis would be as in (16).

(16) [*TI (assibilation) & R-Anchor(Stem, σ)] >> Ident[cont] >> *TI (assibilation)

The constraint R-Anchor(Stem, σ) in (16) requires that the right edge of the stem line up with a syllable boundary, and is violated in (15a). Hence the repair in (15a) is triggered by the fact that the conjunction of the latter constraint with the assibilation constraint dominates the relevant faithfulness constraint, Ident[cont]. No repair affects the initial ti sequence, because in that case violation of the assibilation constraint is not in the same local domain as the violation of the right-anchor constraint, making the local conjunction inapplicable, while the assibilation constraint by itself is ranked lower than Ident[cont] and hence ineffective.

The prediction that syllable boundaries play the role characterized by (16) seems disconfirmed by several of the cases in (5) above, however. This is true in particular in the following cases: Chumash pre-coronal laminalization, which alongside the example in (5d) also features /s-is-lusisisn/ → [ʃ-iʃ-lu-sisisn] (from Poser 1982, 1993, with presumed syllabification [ʃiʃ.lu . . .], with no misalignment); Sanskrit retroflexion (5e), as in /agni-su/ → [agni-ʂu]; Finnish cluster assimilation (5g), as in /pur-nut/ → purrut; and Mohawk epenthesis (5h) /k-wi’stos/ → [kewi’stos]. It is possible that these cases could be analyzed as processes that affect only affixal material due to a lower-ranked Affix-Faith and that block in underived environments simply because those environments are stems, hence not for the reasons provided by (16). Yet this alternative does not seem viable for the case in (5f), repeated in (17).

(17) Indonesian nasal substitution (Pater 1999)
     /məN-pilih/ → [məm-ilih] ‘to choose’     /əmpat/ ‘four’

Here the stem is affected as much as the affix, and yet a faithful candidate *[mən-pilih] would feature no misalignment (though in any event a “left” rather than a “right-anchor” constraint would be needed).3 Pater (1999) in fact analyzes the asymmetry in (17) not in terms of LC, but rather in terms of a Root Linearity constraint that aims to preserve sequential relations within a root. The mapping /əmpat/ → *[əmat] ‘four’ would violate such a constraint with respect to the input sequence /mp/, while a comparable heteromorphemic sequence is not in the scope of the constraint. This type of analysis is in fact essentially subsumed by the above entailment-based discussion (/m/ entails /p/).

3 Łubowicz (2002: 265) acknowledges this prediction with regard to simple nasal assimilation, which would not be expected to “block” since morpheme and syllable boundaries line up, finding no counterexamples, but does not discuss nasal substitution. However, English nasal assimilation does in fact seem to exhibit NDEB, judging from Finland, Henry, only, compared with illegal, irrational. But these cases too could perhaps be attributed to lower-ranked Affix-Faith.

4.2.3 Serial OT

Wolf (2008: ch. 4) provides an account of the now familiar Finnish test case in (15a) based on the ranking schema in (18):

(18) Prec(insert-affix, Ident[cont]) >> *TI (assibilation) >> Ident[cont]

Again, the schema in (18) differs from the entailment-based one in (15b) only by the top-ranked constraint, parallel to the one Wolf deploys in (14) above to handle phonologically derived environments. In both of Wolf’s analyses, the top-ranked constraint imposes a specific order of processes, here demanding that a violation of Ident[cont] (i.e. assibilation) be preceded by affixation. Such a constraint is obviously satisfied by the heteromorphemic assibilation t-i → s-i. It would, however, be violated by a tautomorphemic assibilation yielding *silas-i. The reason is essentially that in the latter case affixation does not provide any material critical to the assibilation, and is thus taken not to count as establishing the required order in Wolf’s formal system. In sum, this approach provides a unitary account of two of the cases of NDEB in terms of the Prec constraints of Wolf’s framework. We will see in §6 below, however, that it appears not to extend to the third.

5 Pre-Optimality Theory accounts

5.1 Strict cyclicity

This subsection reviews the most influential of the early accounts of NDEB, the one based on “strict cyclicity,” as developed by Mascaró (1976) as part of the conception of the phonological “cycle.” Other accounts, carefully reviewed in Kiparsky (1993), include the “(Revised) Alternation Condition” and specific applications of the “Elsewhere Condition,” briefly discussed below. The traditional motivation for the phonological cycle comes from the observation that underlying representations (URs; chapter 1: underlying representations) are insufficient for correct phonological derivations, and that reference to surface forms is also necessary (chapter 85: cyclicity), as in the familiar English example in (19), from Chomsky and Halle (1968: 117).

(19) a. cond[e]nsation   cf. conˈd[e]nse
     b. comp[ə]nsation   cf. comp[ə]nsate   *comˈp[e]nse

The two nouns in (19) would have fully parallel URs, and yet exhibit different degrees of reduction in the bracketed vowels, a difference that seems predictable only by reference to the corresponding verbs in parentheses. The “cycle” thus required the phonology to apply first to inner morphological layers, first calculating conˈd[e]nse within cond[e]nsation, and then move on (see Kenstowicz 1994: 204 for exact derivations, and Cole 1995; chapter 85: cyclicity for a full review of the motivations for the cycle).

On the other hand, in the parallel approach that OT embraces, it becomes possible to argue that the reference to surface forms that cases like (19) make necessary is also sufficient, thus simply dispensing with the traditional URs, rather than requiring an additional notion like that of the cycle (Burzio 1996, 2000a). On this view, the lexicon is constituted of full surface forms, whose well-formedness, including their morphological relatedness, is calculated in parallel by a grammar that effectively just “checks” them, rather than deriving them step by step. The ability to refer directly to surface forms has been forcefully advocated in the OT literature by way of the notion of output-to-output faithfulness (Benua 1997; McCarthy 2005; and many others), which has then been used to account for the effects of the derivational cycle like those in (19). However, many practitioners have continued in the tradition of considering the lexicon as being constituted of morphemes, which give rise to URs when assembled together.

In contrast to cyclicity, “strict cyclicity” effects, to which NDEB effects were in turn related, can perhaps be illustrated with the simple English series in (20), but see Kenstowicz (1994: §5.3) and Mascaró (1976) for the Catalan data that were actually utilized.

(20) a. deˈsiːre   b. deˈsiːr-able   c. deˌsiːr-aˈbil-ity

Assume for present purposes that -able is one of the affixes that trigger vowel shortening, as shown for instance by admiːre / admir-able. As discussed above, shortening is only variable rather than systematic in stressed penultimate syllables, whence unshortened (20b) de(ˈsiːra)⟨ble⟩, assuming an extrametrical syllable ⟨ble⟩, as in (1) above. The point of note is that further derivatives of deˈsiːrable such as (20c) maintain this choice despite the fact that -ity is itself a shortening affix, witness divin-ity (cf. diviːne), or promiscu-ity (cf. variant proːmiscuous), and the fact that metrical environments identical to that of (20c) (medial foot (σ σ)) shorten quite regularly, as in pro(ˌnunci)ˈation (cf. pronounce).

The conclusion drawn from facts of this sort was thus that certain processes “block” in environments that are not “properly” derived. The one in (20c) would be one of them, since the shortening environment is already present in (20b) (let us say), attachment of -ity thus contributing nothing further. But if shortening blocks for those reasons in (20c), it may well block for the same reasons in (20a), or “trisyllabic” iːvory, which are not derived at all. In addition, since it is the definitional property of cyclic rules to apply when there is a new morphological environment, it will only be a matter of strengthening this definition to only when (thus making the cycle “strict”), to derive the blocking of both (20c) and (20a) or iːvory.

Along with such cases, Strict Cyclicity would provide presumptive accounts of the cases of §3.2 above (non-boundary contexts), assuming all relevant processes can refer to the added morphological structure so as to activate the cycle and block elsewhere, and would similarly also account for “boundary context” cases of §3.1 above, like Finnish tilas-i of (3b). As for the cases that appear to be phonologically but not morphologically derived, like Finnish /vete/ → [vesi] of (2), a clause like (21b) below was from the inception added to the already established (21a).

(21) Applicability of cyclic rules
a. Contexts that are newly created morphologically.
b. Contexts that are newly created phonologically.

This definitional fiat extended the account to phonologically derived cases such as (2) and (3) above (I leave (13) aside), but it did so at a cost. The two forms of NDEB are now predicted to be co-extensive, an incorrect conclusion, as Łubowicz (2002: 271) points out. While there are cases, like Finnish assibilation and Sanskrit ruki retroflexion, that can be argued to occur in both types of derived environments, others do not. The problem is perhaps best illustrated by the English cases in (1). The case in (1a) reˈmeːdi-able was argued above to be phonologically derived via restressing. Under (21), this would be because of clause (b). But the simultaneous presence of clause (a) would now incorrectly predict lengthening in (1b) *ˈleːvi-able as well, since both cases involve affixation of -able and hence are both “derived” in one way or another.

This problem persists under Kiparsky’s (1982) attempt to reduce Strict Cyclicity to the Elsewhere Condition (Anderson 1969; Kiparsky 1973b; also referred to as the Pāṇini Principle), according to which processes that refer to more specific contexts trump processes that refer to more general ones. This principle was conjoined with the assumption that lexical items (morphemes) undergo identity rules. In a case like Finnish /tilat-i/, cf. above, the assibilation rule would be blocked relative to tautomorphemic /ti/, since the identity rule would apply to /tilat/, a more specific context than just /ti/. In the case of heteromorphemic /t-i/, however, the context of the assibilation rule would not be contained within the context of the identity rule, allowing that rule to apply with no inhibition, yielding [tilasi]. Similarly, in Finnish /vete/ → [vesi], the assibilation rule would find no obstacle once /e/ raised to [i], since the identity rule would apply only to /vete/, which does not contain the assibilation context /ti/. However, again, a case like levi-able of (1b) above would not be accounted for, since the context for CiV lengthening obtains here thanks to a combination of morphemes, hence is beyond the reach of an identity rule for levy.

Nonetheless, the Elsewhere Condition account appeared to solve the puzzle posed by the stress rule which, while seemingly cyclic in light of cases like (19), did not block in underived environments. This could now be attributed to the fact that, in the case of regular stress systems, lexical items would not contain stress information, which could therefore not be referred to by the identity rules.

Still working within a rule-based system, Kiparsky (1993) rejects the Strict Cyclicity account of NDEB, not only because of the incorrect predictions of the disjunction in (21), but also because various other hallmarks of cyclicity failed to correlate with NDEB effects. For instance, one diagnostic of cyclicity would be sequential orders like P1, P2, P1, where a process P1 is found to apply both before and after P2. The cycle would enable such orders so long as the two occurrences of P1 could be placed in different cycles. Other properties attributed to cyclic rules, though essentially by stipulation, were application at “lexical” as opposed to phrasal levels, and their contrastive, as opposed to allophonic, character. Kiparsky shows clearly that NDEB does not correlate with these attributes, and thus proposes the alternative I review next.4

4 Note that parallel OT does not predict any of the formerly stipulated distinctions between lexical and post-lexical processes. A discussion of this issue is beyond present goals, but see below for some discussion of phonology–morphology interaction.

5.2 Underspecification

Kiparsky’s (1993) proposal can be illustrated for the Finnish cases as in (22), where upper-case T is assumed to be underlyingly not specified for continuancy, while lower-case t is fully specified as [−continuant] (chapter 7: feature specification and underspecification).

(22) a. tilaT-i → tilas-i         ‘order-past’   (cf. tilat-a ‘order-inf’)
     b. veTe → veTi → vesi        ‘water-nom’    (cf. vete-nä ‘water-ess’)

Correct derivations in (22) are ensured by assuming that the assibilation process can fill in the value [+continuant] in the context / __ i, yielding [s], but not change fully specified t. At the same time, one must also assume that [−continuant] can be filled in by a later default rule to any representation that may remain unspecified after the assibilation rule has had a chance, so as to yield t rather than s in the parenthesized forms on the right, as in /tilaT-a/ → [tilat-a]; /veTe-nä/ → [vete-nä].

On the one hand, this proposal bears some similarity to the one in (15b) above and its entailment-based version, in that the immunity of the first /t/ comes from a greater amount of information associated with it. On the other, however, this approach does not appear to be fully workable. If the forms on the right in (22) were the sole determinants of the URs on the far left, then, indeed, the latter URs could be correctly obtained. In the course of acquisition, the i of tilat-a would force full specification of the preceding t, lest the expected form be *silat-a, given knowledge of the assibilation. Elsewhere, however, t (as given in upper case) could remain underspecified, correctly yielding assibilation when a following i shows up. This dynamic is parallel to the one based on the entailments, where a following i would also confer additional resilience to a t.

However, as argued in Burzio (2000a), the assumption that the parenthesized forms are privileged sources for the URs cannot be maintained. In diviːne / divin-ity, the presumed UR div/iː/ne, with a long vowel, must be inferred from the base, but in dam(n) / damn-ation, the UR /dæmn/, with /n/, would only be inferrable from the derivative. On the other hand, in [pærənt] / [pərentəl], the full set of underlying vowels is only inferrable from base and derivative combined. The fact of the matter is that, in general, there is no independent principle or a priori restriction on what surface forms can contribute to a UR in a theory that has URs. It totally depends on where neutralization processes occur down the line. This means that hypothetical *tilat-i could also contribute to its UR, yielding full specification of the second t incorrectly, making the account of (22) circular (the initial premises rest on the final results). This liability is not shared by the entailments. As we have seen, affixal -i would not entail a preceding t even in a hypothetical tilat-i, the reason being that, as a past tense affix, -i will entail to its left whatever results from entailment summation over all of its stems, most of which do not end in t.

The inadequacy of the underspecification account is even more apparent if one attempts to extend it to other cases of phonologically derived environments beside (22b). For example, to handle remeːdiable in (1a), this account would have to underlyingly underspecify the lengthened vowel while fully specifying (as short) the one of leviable (1b). But, again, there is no independent basis for such an asymmetry. The fundamental reason for this inadequacy is that, in general, an underspecification account will predict asymmetries based on some coherent theory of what can be marked vs. default values. Hence, it cannot under any circumstance predict the true generalization, which appears to be whether or not some change has occurred in the same segment or morpheme, as in each of (1), (2), and (13).
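Before leaving the underspecification proposal, its fill-in mechanics in (22) can be made explicit in a short sketch; this is a toy encoding of mine, in which "T" marks a segment unspecified for [continuant] and the rule order is hard-wired.

def derive(segments):
    """Structure-filling assibilation before i, then default [-cont] fill-in."""
    segs = list(segments)
    # assibilation fills in [+cont]: unspecified T surfaces as s before i
    for k in range(len(segs) - 1):
        if segs[k] == "T" and segs[k + 1] == "i":
            segs[k] = "s"
    # later default rule: any remaining unspecified T surfaces as plain t
    return "".join("t" if s == "T" else s for s in segs)

print(derive("tilaTi"))  # tilasi : T assibilates; the fully specified initial t does not
print(derive("tilaTa"))  # tilata : default fill-in
print(derive("veTi"))    # vesi   : the derived i (after e-raising) still triggers fill-in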

6 Morphologically derived environments in non-boundary contexts

I turn now to the case in §3.2 above, whose prototype was taken to be English vowel shortening. Burzio (2000a) argues that such effects simply result from the parallel interaction of morphology and phonology in the absence of any level of UR. Consider in particular that regular/productive morphological systems generally exert an inhibitory effect on phonological processes, as shown in (23).

(23) a. ˈeffort-less-ness    (exceptional stress)
     b. beep-ed [bijpt]      (exceptional syllable size)

Stress patterns such as the one in (23a) are unattested among morphologically underived items, as are syllables like the one in (23b), where a long vowel is followed by two consonants. These effects would be types of “DEB,” namely reversals of NDEB. Here, otherwise regular processes “block” exactly in the derived environments. Burzio (2002a) argues in this connection that one can simply interpret the selectional properties of the relevant affixes, expressible for example as in (24), as types of constraints.

(24) -less ⇒ / Noun __    (-less attaches to a noun)

If there are no URs, then the context “Noun” in (24) can only refer to surface forms, the affix -less thus demanding identity between its stem and any such form. Then, cases like (23a) will be accounted for by taking (24) to dominate the constraints responsible for regular stress, the irregular one of (23a) simply coming from identity with that of ˈeffort, as imposed by (24), and similarly for the past tense affix in (23b) and syllabification constraints. If morphology and phonology compete this way, so that (23a) and (23b) result from the morphology winning, then effects in the opposite direction should result when the morphology loses, and this would be the case of vowel shortening, as illustrated in (25).

(25) a. natur-al (cf. naːture)
     b. *Vː >> -al ⇒ / Noun __    (-al attaches to a noun)

In (25b), a general markedness constraint banning long vowels outranks the morphological constraint demanding identity with the independent noun naːture, resulting in a short vowel. The difference between the high-ranked selectional constraint in (24) and the low-ranked one in (25b) reflects the general difference between (roughly) Germanic and Latinate affixes, termed respectively “Level 2” and “Level 1” in Kiparsky’s (1982) Lexical Phonology framework. This ranking-based characterization of the two morphological systems is independently supported by the fact that the two systems also differ with respect to morphological idiosyncrasy (Burzio 1994, 2002a; Benua 1997), as well as productivity, which can also be correlated with rank, though more indirectly (see Burzio 2006).

Latinate affixes like the one in (25) tolerate massive amounts of morphological idiosyncrasy, as in arbore-al (cf. absence of *arbore), crimin-al (cf. crime, not *crimin). By contrast, the Germanic affixes exhibit virtually no idiosyncrasy, like hypothetical *arbore-less (cf. tree-less) or *crimin-less (cf. crime-less). Hence there exists an inverse correlation between phonological and morphological regularities (Burzio 2002a), as Germanic affixes exhibit tight morphological regularity along with abundant phonological irregularity as in (23), while the Latinate ones reverse both effects, exhibiting much morphological irregularity along with regular phonology (chapter 103: phonological sensitivity to morphological structure). This includes regular stress aside from a specific range of cases discussed below, as in paˈrent-al (not *ˈparent-al, which would parallel ˈeffort-less), regular syllabification, the shortening of (25), and other processes like the velar softening of (4) above (electri[s]-ity), all absent from the Germanic class (cf. froli[k]-ing, criːme-less).

The overriding generalization is thus in terms of the requirement that a stem be identical to an independent surface form, referred to as output–output (OO) faithfulness in the literature (Benua 1997), which is strong for one class of affixes, but weak for the other (Burzio 1994). Two ingredients are critical to a successful account. One is that morphology (interpretable as OO-faithfulness) must be constraint-based to compete with phonology this way. The other is that there must be no UR. The first ingredient, constraint-based morphology, can be taken to result from the REH, as argued in Burzio (2002b). Constraints like those in (24) and in (25b) and their ranking can be viewed as the result of summation of identical entailments across the lexicon. The second ingredient, absence of a UR level, is a natural hypothesis for a constraint-based system.

To see how it is critical to the analysis, consider first that, in the Germanic/Level 2 case of (23), the UR would be simply superfluous. One only needs to assume a high-ranked OO-faithfulness (expressed here by (24)) to express the identity of each stem to the independent words. The effect of a lower-ranked faithfulness to a UR (termed input–output (IO) faithfulness) would be cancelled by the higher-ranked OO-faithfulness. In the case of (25), however, a hypothetical UR is not just superfluous, but false. To account for the long vowel in naːture, one must assume the ranking “IO-Faith >> *Vː,” the standard OT schema for marked choices. But if the same UR was input to both naːture and natur-al, as is the case by the standard definition of UR as the common input to all allomorphs of the same morpheme, then natur-al should also have a long vowel. Just supplementing IO-Faith with OO-Faith is therefore not sufficient in this case. Rather, IO-Faith as faithfulness to a UR must be removed from the scene altogether, and the only principled way to do so is to drop the already unnecessary as well as insufficient UR. Hence, naːture will be faithful to its own input with whatever rank the language at large has. The form natur-al will also be faithful to its input, but that input (except for -al) is the word naːture and not a UR, and that ranking is determined by the particular morphological system, not by the language at large. Affixes like -al in (25), which are relatively unproductive and prone to idiosyncrasy, evidently establish relatively weak/low-ranked associations.

The notion of “cyclic” derivation would therefore have seemed right here in a way, requiring that naːture be derived from a UR, while natur-al would be derived from naːture. But such a notion has turned out to be right only when it reproduces (in a more complicated way) the effects of just dropping the UR from the theory, which forces reference to surface forms directly. In other respects, the cycle, and its cluster of attribute properties, prove to have been incorrect, as Kiparsky (1993) shows for the association “cyclic = blocking in NDE.” The presumed further association “cyclic = lexical” (see discussion in Kenstowicz 1994: 195f.) also fails to hold. Here the notion “lexical” defines the presence of idiosyncrasy, while the property “cyclic” identifies, by its core definition, preservation of phonological structure, as in cond[e]nsation (19a) above.
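The ranking difference between the two affix classes can be compressed into a small sketch; this is my own toy encoding, in which the constraint names follow the text and everything else is illustrative.

def output(base_has_long_vowel, ranking):
    """Choose between a base-identical stem and a shortened stem under a
    two-constraint ranking, compared lexicographically."""
    candidates = {
        "identical": {"OO-Ident": 0, "*V:": int(base_has_long_vowel)},
        "shortened": {"OO-Ident": int(base_has_long_vowel), "*V:": 0},
    }
    return min(candidates, key=lambda name: [candidates[name][c] for c in ranking])

# Germanic/Level 2 (-less): OO-identity outranks *V:, cf. cri:me-less
print(output(True, ["OO-Ident", "*V:"]))  # identical
# Latinate/Level 1 (-al): *V: outranks OO-identity, cf. natur-al
print(output(True, ["*V:", "OO-Ident"]))  # shortened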
As argued for (23), it turns out that such preservation is massive with Germanic/Level 2 affixes (no changes compared with the base word). This would make them “cyclical” if one took those definitions seriously. But idiosyncrasy is absent in the presence of those affixes, which would make them “non-lexical” (i.e. more like phrasal constructs). Conversely, Latinate/Level 1 affixes would be part of the “lexical” morphophonology by virtue of the noted idiosyncrasies, but the preservation effects they generate are in fact tenuous, cond[e]nsation notwithstanding, making them only marginally “cyclic.” As argued in Burzio (1991, 1993, 1994), such effects consist of only minimal distortions of the standard footing, as in e.g. phe(ˌnome)ˈnology (to match pheˈnomenon), or aˈmerica(ˌnistØ) (to match Aˈmerican). As in underived items, adjacent stresses are ruled out, for example as in ˌcataˈstrophic, ˌinforˈmation, ˌconsulˈtation (caˈtastrophe, inˈform, conˈsult). This contrasts with the more robust distortion of cases like (23a) ˈeffort-less-ness. As for the apparent preservation of stress in cond[e]nˈsation of (19a), it is argued in Burzio (1994: 185) that in fact it only concerns the details of vowel reduction rather than stress itself, though the relation with conˈdense remains relevant.

This means that, when faithfulness to a base word specifically refers to stress, it is relatively low-ranked for the Latinate affixes, consistently with (25b) above, but not so low-ranked as to be totally ineffective. This accentual faithfulness, referred to as Metrical Consistency (MC) in Burzio (1994), is what accounts for long vowels in this sector of the lexicon, as argued in §3.2 above. Consider here the contrast between (26a) and (26b), along with the partial hierarchy in (26c).

(26) a. deˈsiːrous, adˈheːrent, exˈtreːmist, diˈviːsive, ˈmediˌtaːtive, . . .
     b. ˈblasphemous, ˈaspirant, ˈhypnotist, ˈrelative, ˈgenerative, . . .
     c. MC, *Vː >> Ident(V-length)

The hierarchy in (26c) is taken to be dominated in turn by constraints mandating well-formed metrical feet, which exclude, along with adjacent stresses and other degeneracies, stress on light penultimate syllables. This means that, when a long vowel and the stress of the base word both end up in a penultimate syllable in a derivative, as in all of the cases in (26a) and (26b), the two leftmost constraints in (26c) cannot be simultaneously satisfied. Either the vowel will have to fail to shorten, as in (26a), or the stress of the base will be lost, as in (26b). The variation between the cases in (26a) and those in (26b) then reveals that the grammatical system is indeterminate on the relative ranking of the two leftmost constraints in (26c), allowing lexical information to choose outcomes (see Burzio 2006). The effect observed for some of (13) above may also be at work, however. That is, items that are semantically very close to their base, as perhaps those in (26a) are to a greater extent than those in (26b), may end up accentually faithful as well, with no shortening.

In sum, the main class of exceptions to vowel shortening finds a principled account in (26c), leading to the conclusion that vowel shortening is – on its own – perfectly general (further scattered exceptions aside) in the Latinate lexicon. On the analysis in (26c), this would then be a case of what is referred to in the OT literature as “The Emergence of the Unmarked” (TETU; McCarthy 2002: 129f.; chapter 58: the emergence of the unmarked).

Other approaches to NDEB do not cover this type of case. Wolf (2008) refers to putative cases of this sort, in which the phonological process does not appear to depend on the specifics of the morphological operation, as “pseudo” DE effects, and suggests the existence of analyses that would bring them in line with the non-pseudo cases. Successful re-analysis is critical to his approach, which predicts non-existence of pseudo DE effects. The reasons for this prediction by Wolf’s serial OT approach are effectively the same as the reasons excluding assibilation of tautomorphemic /ti/ in the Finnish case discussed above. Specifically, and for example, if vowel shortening as in div[ɪ]n-ity was due to an a-contextual “*Vː” constraint, then the same output will be produced whether shortening applies before or after affixation of -ity. But then a mechanism of “chain merger” would remove any ordering between the two operations, so that a putative precedence constraint Prec(insert-affix, Ident[cont]) will always be violated if shortening applies, thus keeping all vowels long if high-ranked, while allowing all to shorten, including in *div[ɪ]ne, if low-ranked.5 However, while Wolf (2008: §4.3.5) does review the above “TETU” analysis of vowel shortening, he provides no alternative to it. He also concedes the existence of other cases of markedness reduction under affixation, such as those in (7e) and (7f) above, which the TETU analysis can handle, but which he attributes to failure of a feature to “percolate” up the morphological structure (2008: 268), an additional mechanism, in his perspective.

5 Thanks to Matt Wolf (personal communication) for assistance on this point.

REFERENCES Anderson, Stephen R. 1969. West Scandinavian vowel systems and the ordering of phonological rules. Ph.D. dissertation, MIT. Benua, Laura 1997. Transderivational identity: Phonological relations between words. Ph.D. dissertation, University of Massachusetts, Amherst. Bolognesi, Roberto. 1998. The phonology of Campidanian Sardinian: A unitary account of a self-organizing structure. Ph.D. dissertation, University of Amsterdam. Burzio, Luigi. 1991. On the metrical unity of Latinate affixes. In Germán Westphal, Benjamin Ao & Hee-Rahk Chae (eds.) Proceedings of the Eastern States Conference on Linguistics 8, 1–22. Reprinted (1995) in Hector Campos & Paul M. Kempchinsky (eds.) Evolution and revolution in linguistic theory: Essays in honor of Carlos Otero, 1–24. Washington, DC: Georgetown University Press. Burzio, Luigi. 1993. English stress, vowel length and modularity. Journal of Linguistics 29. 359–418. Burzio, Luigi. 1994. Principles of English stress. Cambridge: Cambridge University Press. Burzio, Luigi. 1996. Surface constraints versus underlying representations. In Jacques Durand & Bernard Laks (eds.) Current trends in phonology: Models and methods, vol. 1, 123–141. Salford: ESRI. Burzio, Luigi. 1998. Multiple correspondence. Lingua 104. 79–109. Burzio, Luigi. 2000a. Cycles, non-derived-environment blocking, and correspondence. In Joost Dekkers, Frank van der Leeuw & Jeroen van de Weijer (eds.) Optimality Theory: Phonology, syntax, and acquisition, 47–87. Oxford: Oxford University Press. Burzio, Luigi. 2000b. Segmental contrast meets output-to-output faithfulness. The Linguistic Review 17. 368–384. Burzio, Luigi. 2002a. Missing players: Phonology and the past-tense debate. Lingua 112. 157–199. Burzio, Luigi. 2002b. Surface-to-surface morphology: When your representations turn into constraints. In Paul Boucher (ed.) Many morphologies, 142–177. Somerville, MA: Cascadilla Press. Burzio, Luigi. 2005. Sources of paradigm uniformity. In Downing et al. (2005), 65–106. Burzio, Luigi. 2006. Lexicon versus grammar in English morphophonology: Modularity revisited. Korean Journal of English Language and Linguistics 6. 437–464. Burzio, Luigi. 2007. Phonologically conditioned syncretism. In Fabio Montermini, Gilles Boyé & Nabil Hathout (eds.) Selected proceedings of Décembrettes 5, 1–19. Somerville, MA: Cascadilla Press.
Burzio, Luigi & Niki Tantalou. 2007. Modern Greek accent and faithfulness constraints in OT. Lingua 117. 1080–1124. Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row. Chung, Sandra. 1983. Transderivational relationships in Chamorro phonology. Language 59. 35–66. Cole, Jennifer. 1995. The cycle in phonology. In Goldsmith (1995), 72–113. Downing, Laura J. 2006. Canonical forms in prosodic morphology. Oxford: Oxford University Press. Downing, Laura J., T. A. Hall & Renate Raffelsiefen (eds.) 2005. Paradigms in phonological theory. Oxford: Oxford University Press. Flemming, Edward. 1995. Auditory representations in phonology. Ph.D. dissertation, University of California, Los Angeles. Published 2002, London & New York: Routledge. Flemming, Edward. 2003. The relationship between coronal place and vowel backness. Phonology 20. 335–373. Flemming, Edward. 2008. Asymmetries between assimilation and epenthesis. Unpublished ms., MIT. Fukazawa, Haruka. 1999. Theoretical implications of OCP effects on features in Optimality Theory. Ph.D. dissertation, University of Maryland at College Park. Goldsmith, John A. (ed.) 1995. The handbook of phonological theory. Cambridge, MA & Oxford: Blackwell. Halle, Morris & K. P. Mohanan. 1985. Segmental phonology of Modern English. Linguistic Inquiry 16. 57–116. Hansson, Gunnar Ólafur. 2001. Theoretical and typological issues in consonant harmony. Ph.D. dissertation, University of California, Berkeley. Hargus, Sharon & Ellen M. Kaisse (eds.) 1993. Studies in Lexical Phonology. San Diego: Academic Press. Hulst, Harry van der & Jeroen van de Weijer. 1995. Vowel harmony. In Goldsmith (1995), 495–534. Inkelas, Sharon & Cemil Orhan Orgun. 1995. Level ordering and economy in the lexical phonology of Turkish. Language 71. 763–793. Itô, Junko. 1990. Prosodic minimality in Japanese. Papers from the Annual Regional Meeting, Chicago Linguistic Society 26(2). 213–239. Iverson, Gregory K. & Deirdre Wheeler. 1988. Blocking and the elsewhere condition. In Michael Hammond & Michael Noonan (eds.) Theoretical morphology, 325–338. San Diego: Academic Press. Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA & Oxford: Blackwell. Kenstowicz, Michael & Jerzy Rubach. 1987. The phonology of syllabic nuclei in Slovak. Language 63. 463–497. Kiparsky, Paul. 1973a. Phonological representations: Abstractness, opacity, and global rules. In Osamu Fujimura (ed.) Three dimensions of linguistic theory, 57–86. Tokyo: Taikusha. Kiparsky, Paul. 1973b. “Elsewhere” in phonology. In Stephen R. Anderson & Paul Kiparsky (eds.) A Festschrift for Morris Halle, 93–106. New York: Holt, Rinehart & Winston. Kiparsky, Paul. 1982. Lexical morphology and phonology. In Linguistic Society of Korea (ed.) Linguistics in the morning calm, 3–91. Seoul: Hanshin. Kiparsky, Paul. 1993. Blocking in nonderived environments. In Hargus & Kaisse (1993), 277–313. Kirchner, Robert. 1996. Synchronic chain shifts in Optimality Theory. Linguistic Inquiry 27. 341–350. Kula, Nancy C. 2008. Derived environment effects: A representational approach. Lingua 118. 1328–1343.
Liljencrants, Johan & Björn Lindblom. 1972. Numerical simulations of vowel quality systems: The role of perceptual contrast. Language 48. 839–862. Łubowicz, Anna. 1999. Derived environment effects in OT. Proceedings of the West Coast Conference on Formal Linguistics 17. 451–465. Łubowicz, Anna. 2002. Derived environment effects in Optimality Theory. Lingua 112. 243–280. Łubowicz, Anna. 2003. Local conjunction and comparative markedness. Theoretical Linguistics 29. 101–112. Mascaró, Joan. 1976. Catalan phonology and the phonological cycle. Ph.D. dissertation, MIT. Distributed 1978, Indiana University Linguistics Club. McCarthy, John J. 2002. A thematic guide to Optimality Theory. Cambridge: Cambridge University Press. McCarthy, John J. 2003. Comparative markedness. Theoretical Linguistics 29. 1–51. McCarthy, John J. 2005. Optimal paradigms. In Downing et al. (2005), 170–210. McCarthy, John J. 2007. Hidden generalizations: Phonological opacity in Optimality Theory. London: Equinox. Oostendorp, Marc van. 2009. The phonology of the unspeakable. Paper presented at the 6th Old World Conference on Phonology (OCP 6), Edinburgh. Pater, Joe. 1999. Austronesian nasal substitution and other NC̥ effects. In René Kager, Harry van der Hulst & Wim Zonneveld (eds.) The prosody–morphology interface, 310–343. Cambridge: Cambridge University Press. Poser, William J. 1982. Phonological representations and action-at-a-distance. In Harry van der Hulst & Norval Smith (eds.) The structure of phonological representations, part II, 121–158. Dordrecht: Foris. Poser, William J. 1993. Are strict cycle effects derivable? In Hargus & Kaisse (1993), 315–321. Prince, Alan. 1975. The phonology and morphology of Tiberian Hebrew. Ph.D. dissertation, MIT. Prince, Alan & Paul Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Unpublished ms., Rutgers University & University of Colorado, Boulder. Published 2004, Malden, MA & Oxford: Blackwell. Rose, Sharon & Rachel Walker. 2004. A typology of consonant agreement as correspondence. Language 80. 475–531. Rubach, Jerzy. 1984. Cyclic and Lexical Phonology: The structure of Polish. Dordrecht: Foris. Rubach, Jerzy. 1993. The Lexical Phonology of Slovak. Oxford: Clarendon Press. Rubach, Jerzy. 1995. Representations and the organization of rules in Slavic phonology. In Goldsmith (1995), 848–866. Smolensky, Paul. 1997. Constraint interaction in generative grammar II: Local conjunction or random rules in Universal Grammar. Paper presented at the Hopkins Optimality Theory Workshop/University of Maryland Mayfest 1997. Steriade, Donca. 1994. Positional neutralization and the expression of contrast. Unpublished ms., University of California, Los Angeles. Steriade, Donca. 1999. Lexical conservatism in French adjectival liaison. In J.-Marc Authier, Barbara E. Bullock & Lisa A. Reed (eds.) Formal perspectives on Romance linguistics, 243–270. Amsterdam & Philadelphia: John Benjamins. Steriade, Donca. 2009. The phonology of perceptibility effects: The P-map and its consequences for constraint organization. In Kristin Hanson & Sharon Inkelas (eds.) The nature of the word: Studies in honor of Paul Kiparsky, 151–179. Cambridge, MA: MIT Press. Wayment, Adam. 2009. Assimilation as attraction: Computing distance, similarity, and locality in phonology. Ph.D. dissertation, Johns Hopkins University. Wayment, Adam, Luigi Burzio, Donald Mathis & Robert Frank. 2007. Harmony versus distance in phonetic enhancement. Papers from the Annual Meeting of the North East Linguistic Society 37(2). 253–266.
Wilson, Colin. 2000. Targeted constraints: An approach to contextual neutralization in Optimality Theory. Ph.D. dissertation, Johns Hopkins University. Wilson, Colin. 2001. Consonant cluster neutralisation and targeted constraints. Phonology 18. 147–197. Wolf, Matthew. 2008. Optimal interleaving: Serial phonology–morphology interaction in a constraint-based model. Ph.D. dissertation, University of Massachusetts, Amherst.

89 Gradience and Categoricality in Phonological Theory

Mirjam Ernestus

1 Introduction

Within phonological theory, important roles are assigned to the notions of “gradience” and “categoricality.” The opposition qualifies sounds and sound patterns, and is crucial both for the definition of the phonological and the phonetic components of generative grammar, and for the development of alternative types of grammatical models. This chapter discusses the assumptions generative phonology and its direct successors (including Optimality Theory) have made about the role of gradience. Moreover, it presents data supporting or contradicting these assumptions, and discusses new models accounting for the conflicting data. The most important section of this chapter (§2) discusses the opposition between categorical sounds, which are stable and represent clear distinct phonological categories (e.g. sounds showing all characteristics of voiced segments throughout their realizations), and gradient sounds, which may change during their realizations and may simultaneously represent different phonological categories (e.g. sounds that start as voiced and end as voiceless). A shorter section (§3) discusses categorical generalizations over sounds, which are fully productive, and gradient generalizations, which are less productive. The final section (§4) provides a short conclusion.

2 Sounds

2.1 Gradience in generative grammar

In the early days of generative grammar, the opposition between categoricality and gradience was assumed to reflect the fundamental distinction between competence and performance. Competence described speakers’ categorical knowledge about their language, abstracted away from performance factors such as vocal tract size, working memory span, articulatory effort, and so on. Performance, in contrast, described speakers’ actual linguistic behavior, which could be gradient, and was not in the direct focus of linguistic research (Chomsky and Halle 1968; following Saussure 1916).

The distinction between competence and performance was reflected in the distinction between the phonological and phonetic component. The phonological component contained the speaker’s competence and thus represented cognition. It was believed to be language-specific and to include the phonemes of the speaker’s language and language-specific phonological processes, such as final devoicing (chapter 69: final devoicing and final laryngeal neutralization) and place assimilation (chapter 81: local assimilation). This knowledge was represented in the form of categorical symbols and rules operating on these symbols. Phonetic mechanisms were responsible for the speaker’s performance. These phonetic mechanisms were believed to be universal and the automatic results of speech physiology (Chomsky and Halle 1968: 293; Kenstowicz and Kisseberth 1979). They thus reflected physics and included, for instance, nasalization of vowels preceding nasal consonants, palatalization of consonants preceding high vowels, and shortening of vowels preceding voiceless obstruents. Since the phonetic component did not reflect the speaker’s competence, it was considered not to be part of the grammar proper. This view on the phonological and phonetic components changed very quickly, since various studies showed that the exact realization of an abstract symbol (e.g. a phoneme or a phonological feature) might be different in different languages. Moreover, no part of a realization appeared to be the automatic and unavoidable result of speech physiology (e.g. Keating 1985, 1990a; Kingston and Diehl 1994; see also chapter 17: distinctive features). As a consequence, the traditional definition of the phonetic component as containing only universal processes automatically resulting from speech physiology implied that this component was empty. A new distinction had to be developed which was no longer based on the notions of language-specific vs. language-universal and non-automatic vs. automatic mechanisms. The now widely accepted definitions of the phonological and phonetic components are completely based on the opposition between categoricality and gradience (e.g. Keating 1988, 1990b; Pierrehumbert 1990; Cohn 1993; Zsiga 1997). The phonological component is assumed to deal with categorical, abstract, stable, timeless symbols, such as phonemes and phonological features. Phonological processes refer to these symbols and consequently have categorical effects: they change one symbol (e.g. [+voice]) into another one ([−voice]), or they delete or insert symbols. Phonetic processes translate the abstract symbols into articulatory and perceptual targets. This may lead to sounds with acoustic characteristics that do not perfectly represent categorical phonological symbols, but rather have intermediate values, for instance, when obstruents are partly voiced due to co-articulation. These definitions of the phonological and phonetic components have been adopted in several psycholinguistic models of speech production and comprehension (e.g. Levelt 1989; Norris 1994). Since the distinction between gradience and categoricality is crucial in the definitions of the phonological and phonetic components, it has led to many experimental studies. The following subsections discuss their findings and their implications for phonological theory (see also chapter 5: the atoms of phonological representations; chapter 96: experimental approaches in theoretical phonology). 
The first subsections discuss the domains (assimilation and segment deletion) where the evidence for gradience is most convincing but can also be relatively easily reconciled with generative grammar: the relevant processes traditionally
characterized as phonological could be reclassified as phonetic. The following subsections (on incomplete neutralization and phonetic detail) discuss evidence for gradience that is less clear but has important theoretical consequences. Generative grammar cannot account for incomplete neutralization without making additional far-reaching assumptions. Further, evidence for a role for fine phonetic detail in speech processing suggests that words are not lexically represented in the form of abstract phonemes but are stored together with their detailed phonetic properties. These data have stimulated the development of accounts based on assumptions other than those of generative grammar.

2.2 Assimilation: Data

One of the first types of processes traditionally characterized as phonological for which researchers found evidence of gradience is formed by connected speech processes, in particular assimilation (chapter 81: local assimilation). Nearly all instances of assimilation are traditionally described as the categorical spreading of a phonological feature from one segment to another segment in the phonological component. The receiving segment is assumed to be subsequently identical to segments with the same features in their underlying specifications. For instance, [m] would have exactly the same surface phonological representation and phonetic characteristics if it results from an underlying /m/ and if it results from place assimilation, as in the phrase gree[m b]oat ‘green boat’. Many articulatory studies have investigated the assumed categorical nature of place assimilation using electropalatography (EPG; Hardcastle 1972), which registers contacts between the tongue and the hard palate, or with the help of an electromagnetic midsagittal articulometer (EMMA; e.g. Perkell et al. 1992), which allows the tracking of individual fleshpoints by means of small transducer coils attached to various points on the speaker’s vocal tract in the midsagittal plane. These studies have provided evidence for the categorical nature of some place assimilation processes. An example is regressive place assimilation in Korean, which is a characteristic of fast colloquial Korean and affects certain consonants preceding certain other consonants. For instance, the phrase /pat̚p’oda/ ‘rather than the field’ can be pronounced as [pap̚p’oda]. Kochetov and Pouplier (2008) showed that this assimilation results in the categorical absence of the gestures for the original articulation place of the assimilated consonant (in this example: for /t̚/) in most tokens. Another example is place assimilation of /n/ to /k/ in Italian, which categorically results in the absence of alveolar gestures (Farnetani and Busà 1994). Other studies strongly suggest that some place assimilation processes are gradient in nature. For instance, assimilation of alveolar obstruents to the palatality of the following segments (as in American English hi/t j/ou) often does not lead to completely palatal segments ([c] in the example), but rather to segments that become more palatal during their realizations (within one and the same token) and that consequently differ in their phonetic detail from underlying palatals (e.g. Barry 1992 for Russian; Zsiga 1995 for post-lexical palatalization in American English). The same type of gradience has been reported for place assimilation of coronal obstruents in American English, as in la/t k/alls (late calls) produced as la[kː]alls. The assimilated obstruents often start with a coronal constriction that gradually assimilates to the articulation place of the following obstruent during
their realizations (velar in the above example; Nolan 1992). Other gradient place assimilation processes include assimilation of alveolar nasals in American English (e.g. in gree[m b]oat; Ellis and Hardcastle 2002) and of /n/ to following post-alveolars in Italian (Farnetani and Busà 1994). Interestingly, some of these assimilation processes show considerable inter-speaker and intra-speaker variation. For instance, Ellis and Hardcastle (2002) found that four of their eight English speakers showed categorical place assimilation of /n/ to following velars in all tokens, two speakers showed either no or categorical assimilation, and two speakers showed gradient assimilation. Together, the data show that place assimilation processes, at least those applying across morpheme boundaries, may be gradient in nature. These processes cannot simply be accounted for by the categorical spreading of a phonological feature from one segment to another. The evidence for gradience is clearer for place assimilation than for voice assimilation. The main reason is probably that the difference between [+voice] and [−voice] obstruents is cued by many different acoustic, and hence also articulatory, characteristics, including the duration of the preceding vowel, the duration and intensity of the obstruent, and the duration of glottal vibration during the obstruent. Voice assimilation can thus not be studied on the basis of electropalatography alone, and has been mainly investigated on the basis of the acoustic signal instead. For instance, Kuzla et al. (2007) studied progressive voice assimilation in German clusters consisting of a voiceless obstruent and a voiced fricative (e.g. the /tv/ cluster in ha/t v/älder ‘had woods’ produced as [tf]). They showed that assimilation results in shorter stretches of glottal vibration during the cluster, whereas it hardly affects the duration of the fricative, which is the most important perceptual cue to the [±voice] distinction for German fricatives. Assimilation thus does not affect all perceptual cues of the [±voice] distinction equally, and the phonetic implementation of devoiced fricatives differs from the implementation of underlyingly voiceless fricatives. This is difficult to reconcile with an abstract phonological categorical account of voice assimilation, since in such an account voice assimilation results in phonologically voiceless fricatives, which cannot be distinguished from underlyingly voiceless fricatives during phonetic implementation. Other studies have investigated regressive voice assimilation in Dutch, that is, the voiced realizations of obstruents before voiced stops (e.g. we[t] ‘law’ is realized as we[d] in wetboek ‘law book’). Ernestus et al. (2006) and Jansen (2007) showed that glottal vibration, which is the most important cue to the [±voice] distinction in Dutch obstruent clusters (van den Berg 1988), may be completely absent, partly present, or continuously present in clusters subject to regressive voice assimilation, suggesting that regressive voice assimilation in this language is gradient. Ernestus and colleagues (2006) also investigated the effect of a word’s frequency of occurrence (i.e. the word’s relative number of occurrences in speech, independent of its realization) on voice assimilation (see also chapter 90: frequency effects). 
They found that higher frequencies correlate with shorter obstruent clusters, a perceptual cue for [+voice], but also with shorter periods of glottal vibration and longer release bursts, which are perceptual cues for [−voice]. These data also suggest that voice assimilation may result in sounds that are neither fully voiced nor fully voiceless. In conclusion, the data on assimilation suggest that we often perceive assimilation as categorical because we are used to distinguishing between only two values
of the relevant phonological feature, but that the actual results from assimilation may be gradient rather than categorical. Before discussing the theoretical implications of these data, I first discuss data showing that segment deletion may also be gradient in nature.
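
As a concrete illustration of how such gradient voicing can be quantified, here is a minimal sketch, my own operationalization rather than the measurement procedure of the studies cited above. It estimates the fraction of an obstruent cluster produced with glottal vibration, using frame-wise autocorrelation as a crude periodicity detector; the sampling rate, frame size, and threshold are illustrative assumptions.

    import numpy as np

    def voiced_fraction(cluster, sr=16000, frame_ms=25, threshold=0.3):
        # Fraction of an obstruent cluster produced with glottal vibration.
        # `cluster` is a 1-D numpy array holding the samples of the cluster
        # only. A frame counts as voiced when its normalized autocorrelation
        # peaks clearly in the 60-400 Hz pitch range; the 0.3 threshold is
        # an illustrative, uncalibrated choice.
        frame = int(sr * frame_ms / 1000)
        lo, hi = sr // 400, sr // 60          # lags spanning 400 Hz ... 60 Hz
        frames = [cluster[i:i + frame]
                  for i in range(0, len(cluster) - frame, frame)]
        voiced = 0
        for f in frames:
            f = f - f.mean()
            energy = np.dot(f, f)
            if energy == 0:
                continue                       # silent frame counts as unvoiced
            ac = np.correlate(f, f, mode="full")[len(f) - 1:] / energy
            if ac[lo:hi].max() > threshold:
                voiced += 1
        return voiced / len(frames) if frames else 0.0

On such a measure, a fully voiced cluster scores near 1, a categorically devoiced one near 0, and the partly voiced clusters reported above fall anywhere in between, which is exactly what a gradient process predicts.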

2.3 Segment deletion: Data

In addition to assimilation, many studies have investigated the nature of segment deletion (chapter 68: deletion). It is generally assumed that the absence of segments may result from three different sources. First, the lexicon may represent more than one pronunciation variant for at least some words, and segment deletion may result from speakers’ selection of reduced pronunciations from their lexicons. Examples of lexicalized reduced pronunciations include English won’t for will not and Dutch [tyk] for [natyrlək] ‘of course’ (Ernestus 2000). Second, segments may be absent due to phonological deletion processes operating on the lexically represented unreduced pronunciations. These processes result in phonological surface representations without the absent segments. Both mechanisms (i.e. selection of lexically represented pronunciation variants and phonological processes) result in pronunciation variants that do not contain any acoustic cues for the missing segments, and the absence of these segments is categorical in nature. Alternatively, segments may be absent due to gradient phonetic reduction processes, which reduce the durations and articulatory strengths of segments and make different segments overlap in time (chapter 79: reduction). Segments that are absent due to such reduction mechanisms typically leave some traces in the acoustic signal or in the word’s articulation. In conclusion, the distinction between categoricality and gradience is also relevant for the theory of segment deletion, since it indicates which type of mechanism is responsible for a given type of deletion. This view resulted in several studies investigating the categorical vs. gradient nature of segment deletion processes. Browman and Goldstein (1990) hypothesized that most highly productive casual speech reduction processes result from reduction in and overlap of articulatory gestures. They showed in an X-ray study that, for instance, the /t/ in a phrase like perfect memory may be acoustically absent, but still articulatorily present: speakers may close their lips for the production of the /m/ before the closure of the /t/ is released, which makes the release noise of the /t/ (its most important perceptual cue) inaudible (Browman and Goldstein 1992). Several articulatory and acoustic studies of other highly productive reduction processes support this hypothesis. Thus, Manuel (1992) and Davidson (2006) demonstrated that schwa deletion in American English is gradient (chapter 26: schwa). They reported acoustic differences between consonant clusters resulting from schwa deletion (e.g. [sp] from schwa deletion in support) and underlying consonant clusters (e.g. [sp] in sport). For instance, clusters resulting from deletion may show aspiration, whereas underlying clusters typically do not. Similarly, Russell (2008) showed that the deletion of the first vowel of a sequence of two in Plains Cree is gradient for his two native speakers (vowels may vary in their duration on the full continuum from values typical for accented full vowels to zero, which implies that they may have clear, some, or no traces at all in the acoustic signal). In contrast, several less productive processes appear categorical in nature. Examples are the possibly morphosyntactically governed coalescence of /a+i/ or
/aː+i/ to [eː] in Plains Cree (Russell 2008) and /e/ deletion in the highly frequent French word combination c’était ‘it was’ (Torreira and Ernestus 2009). Furthermore, segments that may also be absent in more careful speech registers are more likely to be (at least partly) categorically absent. An example is word-medial schwa in French (as in f/ə/nêtre ‘window’; see Bürki et al. 2010).
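
The categorical/gradient contrast for deletion can also be operationalized quantitatively, under assumptions of my own that go beyond the cited studies: categorical deletion predicts a bimodal duration distribution (near-zero tokens plus full vowels), whereas gradient reduction predicts a continuum. The sketch below compares one- and two-component Gaussian mixtures by BIC; the data are synthetic placeholders, not measurements from any of the studies above.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def deletion_profile(durations_ms):
        # Compare one- and two-component Gaussian mixtures by BIC. A clearly
        # better two-component fit is consistent with categorical deletion;
        # otherwise the durations look like a gradient continuum. The
        # 10-point BIC margin is a heuristic choice.
        X = np.asarray(durations_ms, dtype=float).reshape(-1, 1)
        bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
               for k in (1, 2)}
        return "categorical-like" if bic[2] < bic[1] - 10 else "gradient-like"

    rng = np.random.default_rng(1)
    gradient = np.clip(rng.normal(45, 15, 200), 0, None)  # one continuum
    categorical = np.clip(np.concatenate([rng.normal(2, 1, 100),   # deleted
                                          rng.normal(70, 8, 100)]),  # full vowels
                          0, None)
    print(deletion_profile(gradient), deletion_profile(categorical))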

2.4 Gradient assimilation and segment deletion: Theoretical implications

Together, these studies suggest that many productive connected speech processes, such as assimilation and segment deletion, are gradient in nature. If the phonological component contains only categorical processes, as is assumed in traditional versions of generative grammar, these gradient processes should be classified as phonetic, which implies a move of a substantial part of the phonological component to the phonetic component. Theoretical research is needed to understand the consequences of this move. Furthermore, the experimental data suggest that post-lexical processes, in particular, show gradience. Additional detailed articulatory and acoustic studies have to investigate whether this generalization is correct. Finally, we have to investigate why some processes are categorical and others gradient and why some processes show inter-speaker and intra-speaker variation. For instance, we have to exclude the possibility that differences result from how participants deal with the experimental situation in which they are tested, including the tools that are put in their mouths for the recording of their articulation. Some participants may show normal speech behavior, while others may adapt their speech. The evidence for the gradient nature of many connected speech processes has stimulated the development of new theoretical accounts, which do not make a fundamental distinction between the phonological and phonetic components. One of the most influential theories is Articulatory Phonology, developed by Browman and Goldstein (1986, 1992; see also chapter 5: the atoms of phonological representations). This theory assumes that lexical phonological representations consist of strings of articulatory gestures (articulatory scores), which are specified for time and space, and that languages differ in how these gestures may reduce in size and overlap in time. Gradient reduction in gestural size and gradient increase in gestural overlap naturally explain the gradient natures of assimilation and segment deletion processes. For instance, nasal place assimilation in English gree[m b]oat may result from the early onset of the bilabial closure, during the realization of the preceding nasal, which makes this nasal partly bilabial. In addition, Articulatory Phonology can account for categorical connected speech processes, either by incorporating the processes in the lexical representations of the words (e.g. the French word c’était ‘it was’ may have two lexical representations: one with, and one without, the gestures for the vowel /e/), or by processes that reduce gestural sizes to zero and make gestures completely overlap in time. Note that these different types of mechanisms make Articulatory Phonology a very powerful theory, which can basically explain any reduction pattern. More research is necessary to investigate how this theory can account only for those sound patterns that are actually attested. Furthermore, detailed research is necessary to explain how listeners translate the acoustic signal into gestural scores, which are the basic units of the phonological lexical representations in Articulatory Phonology.
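
The core mechanism of such accounts, gradient overlap between gestural activation intervals, can be illustrated with a deliberately toy sketch; it makes no attempt to implement the task dynamics of Browman and Goldstein's actual model. Each gesture is an activation interval on a time axis, and sliding the onset of the labial closure for /b/ leftward into the coronal gesture for /n/ gradiently increases the portion of the nasal produced with a labial constriction.

    from dataclasses import dataclass

    @dataclass
    class Gesture:
        organ: str      # e.g. "tongue tip", "lips"
        goal: str       # e.g. "closure"
        start: float    # activation onset (ms)
        end: float      # activation offset (ms)

    def overlap(a, b):
        # Duration (ms) during which both gestures are active.
        return max(0.0, min(a.end, b.end) - max(a.start, b.start))

    # gree[n b]oat: coronal closure for /n/, labial closure for /b/
    n_coronal = Gesture("tongue tip", "closure", 0, 80)
    for labial_onset in (80, 60, 40, 20, 0):
        b_labial = Gesture("lips", "closure", labial_onset, labial_onset + 80)
        share = overlap(n_coronal, b_labial) / 80
        print(f"labial onset at {labial_onset:>2} ms: "
              f"{share:.0%} of the nasal overlaps the labial closure")

Sliding the onset from 80 ms to 0 ms yields 0 percent through 100 percent overlap, that is, a continuum from unassimilated [n], through partly bilabial tokens, to a fully bilabial nasal, without any categorical feature ever changing.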

While Browman and Goldstein (1986, 1992) proposed Articulatory Phonology as an alternative to theories making a sharp distinction between the phonological and the phonetic components, many researchers (e.g. Byrd and Choi 2010) do not consider this theory as a competitor for these theories. Rather, they incorporate the ideas of Articulatory Phonology (especially the idea of reduction in and overlap of articulatory gestures) into the phonetic component of generative grammar. Obviously, theoretical research is necessary to investigate the consequences of this incorporation.

2.5 Incomplete neutralization: Data

Final devoicing is another phonological process whose possible gradient nature has received a great deal of attention in the literature (see also chapter 69: final devoicing and final laryngeal neutralization). It has usually been assumed (e.g. Booij 1995) to imply a categorical change of voiced obstruents into voiceless ones, and thus a complete neutralization of the distinction between underlyingly voiced and voiceless obstruents in their phonological surface representations and articulatory and acoustic characteristics. Within traditional generative phonology, the output of final devoicing (i.e. final voiceless obstruents) forms the input to other categorical phonological processes (see below). Hence, if final devoicing turns out to lead to incomplete neutralization (i.e. to slightly voiced obstruents) and thus, according to the definitions of generative grammar, to be phonetic in nature, this has consequences for the theoretical accounts of these other phonological processes as well. That is, a gradient nature of final devoicing would have more important theoretical consequences than the gradient nature of the connected speech processes discussed above. Consequently, the possibility of incomplete neutralization has attracted attention from many researchers. Most experimental studies have investigated the nature of final devoicing by comparing the acoustic characteristics of words differing only in the underlying voice specifications of their final obstruents. The acoustic characteristics that are typically investigated are known to correlate with perceived voicing. They include the duration of the vowel preceding the final obstruent, the duration of the final stop’s closure, the duration of this stop’s burst, the complete duration of the final fricative, and the duration of glottal vibration during the final obstruent. For instance, Port and O’Dell (1985) investigated ten minimal word pairs in German (e.g. Rat ‘counsel’ vs. Rad ‘wheel’), read aloud by ten speakers, and showed that all acoustic measures mentioned above provided cues to the underlying voice specification of the final obstruent. In line with this, cluster analysis could correctly classify the underlying voice specifications of the obstruents on the basis of these acoustic measurements for 63 percent of the tokens. Similar studies have provided evidence for incomplete neutralization in Polish (e.g. Slowiaczek and Dinnsen 1985) and Dutch (e.g. Warner et al. 2004). They report acoustic differences between underlyingly voiced and voiceless obstruents in word-final position, but also that these differences may be very small (e.g. Warner and colleagues observed a difference in vowel duration of only 2.5 msecs). Other studies have cast doubt on these findings. For instance, Port and Crawford (1989) recorded five native speakers of German reading three minimal word pairs in four different contexts. The underlyingly voiced final obstruents differed slightly in their realization from the underlyingly voiceless final obstruents in all four contexts,
which is in line with the incomplete neutralization hypothesis. However, speakers differed in which acoustic cues were relevant for the distinction, and, more importantly, in whether acoustic characteristics typically cueing voiced obstruents (e.g. longer preceding vowels) were combined with underlyingly voiced or voiceless obstruents. One possible explanation for (part of) these mixed results may be the nature of one of the minimal pairs (seid, a form of ‘to be’, vs. seit ‘since’), since the final obstruent of the member seid never occurs in onset position in Modern German, and there is consequently no synchronic evidence that this obstruent is underlyingly voiced. Another study showing mixed results was conducted by Charles-Luce (1985), who investigated eight German minimal word pairs. Each of the words appeared in four different sentences, in which it was in sentence-final or medial position. Vowel duration appeared to be the only reliable cue to underlying voicing, distinguishing /t/ and /d/ in both sentence positions, but /s/ and /z/ only in sentence-final position. Several studies have raised the question of whether the reported evidence for incomplete neutralization may result from the experimental tasks speakers had to perform. Participants typically read sentences aloud, and their pronunciation may therefore show spelling effects. Fourakis and Iverson (1984) investigated this possibility by asking their German participants to conjugate strong verbs after having heard the infinitives (e.g. they heard reiten and had to form ritt and geritten). In this task, participants’ attention was not drawn to the spelling of the words to be pronounced. Only 10 percent of the statistical analyses showed a significant difference between the words ending in underlyingly voiced and underlyingly voiceless obstruents. Importantly, the differences were much smaller than those obtained for the same words in a word-reading task performed by the same speakers. Dinnsen and Charles-Luce (1984) addressed the role of spelling by studying five Catalan minimal word pairs whose members differed from each other in the underlying voice specification of the final obstruent, but not in spelling (e.g. /fat/ fat ‘fate’ vs. /fad/ fat ‘silly’). The words were embedded in carrier sentences, and five speakers read the sentences five times. Two speakers showed incomplete neutralization, one in the expected direction (vowels were 10 percent longer before underlyingly voiced obstruents in one context condition), and one in the unexpected direction (15 percent longer closures for underlyingly voiced obstruents). Finally, Warner et al. (2006) addressed the role of spelling by comparing two types of Dutch word pairs consisting of morphologically related homophones that differed underlyingly only in the presence of the singleton /t/ vs. the geminate /tt/. Importantly, only one of these two types of word pairs reflects the underlying difference in spelling. For instance, /het+Hn/ [hetHn] heten ‘are called’ vs. /het+tHn/ [hetHn] heetten ‘were called’ reflects the underlying difference, whereas /het/ [het] heet ‘am called’ vs. /het+t/ [het] heet ‘is called’ does not. The results suggest that only those underlying differences that are reflected in orthography lead to pronunciation differences, and that these pronunciation differences are comparable in size to the pronunciation differences induced by incomplete neutralization resulting from final devoicing. 
Together, these results suggest that incomplete neutralization may be completely driven by orthography. The nature of final devoicing has also been investigated in several perception studies, addressing the question of whether listeners are sensitive to the minimal acoustic differences assumed to be present between underlyingly voiced and voiceless obstruents. If they are, this supports the hypothesis of incomplete
neutralization. Participants typically listened to words in isolation and indicated which word they heard by selecting the corresponding orthographic representation (e.g. German listeners heard [rat] and indicated whether they had heard Rat ‘counsel’ or Rad ‘wheel’). All studies showed that participants tend to choose the intended orthographic representation at just above chance level (e.g. 59 percent in Port and O’Dell 1985; 62 percent in Warner et al. 2006). In another type of study (Ernestus and Baayen 2007), Dutch participants rated rhymes (i.e. monosyllabic words without their onsets) as 0.7 points more voiced on a scale from one to five if the final obstruent was underlyingly voiced compared to voiceless. These studies thus suggest that listeners are sensitive to the minimal cues of incomplete neutralization. It is legitimate to wonder to what extent the results from the perception experiments are simple task effects, reflecting unnatural linguistic behavior. All studies reported above asked participants to choose between orthographic forms, and hence drew participants’ attention to spelling. Moreover, participants could not perform their tasks without taking the acoustic cues to incomplete neutralization into account. Ernestus and Baayen (2006) circumvented this problem by presenting Dutch participants auditorily with non-existing verb stems and asking them to produce the corresponding past tense forms. According to Dutch regular morphology, the appropriate past tense allomorph is -te if the final obstruent of the verbal stem is underlyingly voiceless; otherwise it is -de. Earlier research had shown that participants interpret the final obstruents of nonce words on the basis of phonologically similar existing words (Ernestus and Baayen 2003). Ernestus and Baayen (2006) showed that, if the final obstruents differ slightly in their voicing, participants interpret these acoustic differences as resulting from incomplete neutralization and use these differences as a cue for their interpretations of the final obstruents as well. They do so even if their interpretations have no consequences for the spelling of these final obstruents. These findings suggest that listeners are sensitive to incomplete neutralization even when this is not necessary for the experimental task and has no consequences for spelling. In conclusion, several experimental studies have shown that final devoicing may be incomplete, and that listeners are sensitive to the resulting minimal acoustic differences between underlyingly voiced and voiceless obstruents. Other studies, however, have cast doubt on these findings. Further research into this issue is necessary.
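
The logic of the production studies reviewed in this section can be sketched computationally: collect the standard acoustic cues per token, cluster the tokens without access to the labels, and ask how often the clusters recover the underlying voice specification. Chance-level recovery indicates complete neutralization; reliable above-chance recovery indicates incomplete neutralization. The cue values below are synthetic and purely illustrative; nothing here reproduces the measurements of Port and O'Dell (1985).

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n = 100  # tokens per underlying category

    # Cues per token: [preceding vowel duration, closure duration,
    # burst duration, glottal vibration duration], all in ms. Underlyingly
    # voiced finals get slightly "voiced-leaning" values; the size of the
    # shift is invented for illustration.
    voiceless = rng.normal([140, 80, 25, 10], [15, 10, 5, 4], (n, 4))
    voiced = rng.normal([150, 75, 22, 18], [15, 10, 5, 4], (n, 4))

    X = np.vstack([voiceless, voiced])
    y = np.array([0] * n + [1] * n)   # underlying voicing, hidden from KMeans

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    hits = max(np.mean(labels == y), np.mean(labels != y))  # cluster names are arbitrary
    print(f"tokens classified by underlying voicing: {hits:.0%}")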

2.6 Incomplete neutralization: Theoretical implications

The possibility that final devoicing may be gradient is unexpected within generative grammar, since it has always been classified as a phonological process. If final devoicing is phonetic in nature (see e.g. Port and O’Dell 1985, who suggested that final devoicing and incomplete neutralization together form one phonetic implementation process), its output cannot form the input of purely phonological processes. This complicates the theoretical account of several other processes. One example is the devoicing of voiced fricatives following syllable-final obstruents in Dutch (e.g. while maa/n+v/is ‘angelfish’ is pronounced as maa[nv]is, gou/d+v/is ‘goldfish’ is pronounced as gou[tf]is (see e.g. Booij 1995)). In the traditional generative account, this fricative devoicing results from phonological progressive voice assimilation, which is fed by phonological final devoicing
(i.e. in the example gou[tf]is, final devoicing turns word-final /d/ into [t], which triggers devoicing of the following /v/). If final devoicing is phonetic, we have to assume that the devoicing of fricatives results from a phonological process that precedes and is independent of final devoicing. Another possibility is that progressive voice assimilation is phonetic as well, an assumption for which we do not have any acoustic or articulatory support. A second phonological process that appears to follow final devoicing is resyllabification. In Dutch, word-final obstruents form syllables with following vowel-initial clitics (chapter 84: clitics), and the word-final obstruents then occupy onset positions (e.g. weet ie ‘knows he’ is pronounced as [ʋe.ti]). Importantly, these word-final obstruents are typically voiceless, independently of their underlying voice specification. If final devoicing precedes resyllabification, this is as expected. Hence, if final devoicing is part of the phonetic component, we have to assume that resyllabification is phonetic as well, or we have to assume a phonological process, independent of phonetic final devoicing, which devoices resyllabified obstruents. In summary, if final devoicing is phonetic in nature, we have to assume that other phonological processes are also phonetic, or that there are several phonological processes doing partly the same work as final devoicing. Since both options appear unattractive, Dinnsen and Charles-Luce (1984), as well as Slowiaczek and Dinnsen (1985), suggest that phonetic implementation rules (including final devoicing) may apply before phonological rules. Note that this solution implies that phonetic processes may be of very different types. Traditional phonetic implementation processes translate segments or phonological features into phonetic scores (for articulation) that correspond well with these symbols. Final devoicing, in contrast, would change [+voice] into (almost completely) [−voice]. Given the problems facing a phonetic account of final devoicing, some researchers have proposed that the process is phonological in nature, and that incomplete neutralization results from phonetic implementation processes. These accounts have to solve the question of how the phonetic component can distinguish between obstruents that should be realized as completely voiceless and those that should be slightly voiced. Van Oostendorp (2008) proposes that obstruents may be phonologically specified as voiced ([voice]), as voiceless (no specification for voice), or as devoiced (the feature [voice] is not in a pronunciation relation), and argues that this possibility directly results from assumptions about the phonological component that are necessary for the explanation of unrelated phenomena. A completely different account of incomplete neutralization is proposed by Ernestus and Baayen (2007). Their account is based on the assumption that the mental lexicon contains representations for all words of the language, including morphologically complex words. Thus, the Dutch lexicon contains both the singular man[t] ‘basket’ and the plural man[d]en ‘baskets’. This assumption is supported by the finding that all words of high frequencies of occurrence, including morphologically inflected and derived words, are recognized and produced more quickly and with fewer errors than words of low frequencies (e.g. Baayen et al. 1997; Alegre and Gordon 1999; chapter 90: frequency effects).
If the lexicon contains all words of a language, all word-final obstruents can be lexically represented as voiceless. The information that obstruents are voiced in morphologically related words is present in the lexical representations of these related
words themselves. Thus, the Dutch word for ‘basket’ can be lexically represented as man/t/, since the plural man/d/en is stored as well. In this account, incomplete neutralization may be explained in two ways. First, lexical representations may be gradient and contain detailed information about the exact pronunciations of the segments (see also §2.7). Word-final obstruents may thus be represented as slightly voiced. Second, the realization of a word may be affected by the pronunciations of phonologically and morphologically related words. If a stem-final obstruent is voiced in most words, these voiced specifications may affect the pronunciation of the stem-final obstruent in word-final position, which is consequently produced as slightly voiced. This type of lexical analogy would also explain why, in the absence of an abstract mechanism of final devoicing, the final obstruents of new words are always produced as voiceless: this results from the influence of all final voiceless obstruents in the lexicon. In conclusion, incomplete neutralization has attracted much attention in the theoretical literature, framed both within and outside generative grammar. This may be surprising since we saw above that the phenomenon is not yet well established. Note, however, that if future research will show that incomplete neutralization is just an artifact of our experimental paradigms, we still need to explain how these experimental effects can arise in speech production and comprehension. Incomplete neutralization will therefore remain an important theoretical topic.
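
The lexical-analogy mechanism described above can be made concrete with a minimal sketch; the similarity measure and weighting scheme here are my own stand-ins, not those of Ernestus and Baayen. The predicted degree of voicing of a word-final obstruent is a similarity-weighted average of the voicing of final obstruents in stored words, so man/t/ is pulled slightly toward voiced by the stored plural man/d/en, while a novel word, resembling nothing in particular, inherits the overwhelmingly voiceless pattern of word-final obstruents.

    def similarity(a, b):
        # Crude string overlap as a stand-in for phonological and
        # morphological relatedness; raising to the 4th power makes
        # close relatives dominate the analogy.
        shared = sum(x == y for x, y in zip(a, b))
        return (shared / max(len(a), len(b))) ** 4

    def predicted_voicing(target, lexicon):
        # Similarity-weighted mean voicing: 0 = voiceless ... 1 = voiced.
        weights = [(similarity(target, word), v) for word, v in lexicon]
        return sum(s * v for s, v in weights) / sum(s for s, _ in weights)

    # (word, voicing of its stem-final obstruent): man[d]en is voiced,
    # most word-final obstruents are voiceless.
    lexicon = [("manden", 1.0), ("mant", 0.0), ("katten", 0.0), ("kat", 0.0),
               ("petten", 0.0), ("pet", 0.0), ("rat", 0.0)]

    print(predicted_voicing("mant", lexicon))  # slightly above 0: incomplete neutralization
    print(predicted_voicing("plof", lexicon))  # novel word: essentially voiceless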

2.7 Fine phonetic detail in speech processing

Within generative grammar, lexical representations are categorical in nature, as they consist of strings of phonemes or phonological features, abstracting away from phonetic detail which is not necessary to distinguish between these units. In contrast, several researchers now consider the hypotheses that lexical representations are gradient in nature and reflect fine phonetic detail (see, for example, the account of incomplete neutralization of Ernestus and Baayen 2007, mentioned above) and that one and the same word may have many lexical representations reflecting slightly different pronunciations. These hypotheses are based on the findings that phonetic detail may play an important role in speech comprehension (chapter 98: speech perception and phonology). First, experimental data show that listeners are sensitive to phonetic detail providing information about upcoming segments. For instance, in several languages, the relative duration of a vowel is a cue to the presence of additional syllables within the same word, as vowels are typically shorter if they are followed by more syllables. Listeners use these durational cues and predict that syllables like ham and dive, produced with relatively short vowels, are part of longer words (i.e. hamster and diver; e.g. Davis et al. 2002; Kemps et al. 2005). Similarly, listeners use fine phonetic cues in syllable onsets to predict the presence of /r/ or /s/ in syllable codas (e.g. Heinrich and Hawkins 2009). Second, several experiments have shown that listeners remember voice characteristics and that these memory traces may affect speech processing. For instance, participants are faster in determining whether two words in a sequence are identical if these two words are presented in the same voice than if they are presented in different voices (Cole et al. 1974). Participants tend to complete morphological stems with those suffixes that result in words they have just heard before,
especially if these complex words were produced by the same voice as the stems (Schacter and Church 1992). Furthermore, participants tend to mimic previously heard pronunciations in their phonetic detail (Goldinger 1998). These phonetic detail effects can be accounted for within generative grammar by means of the phonetic component and performance factors. The phonetic component may translate long stretches of phonological segments, rather than single segments, into acoustic signals. Likewise, listeners may analyze acoustic signals to extract not only their segments but also information on following segments (see e.g. Norris and McQueen 2008). This would explain the existence and perceptual relevance of acoustic cues distributed over longer stretches of speech. The effects of voice characteristics may result from the storage of acoustic signals in short-term memory. In addition, these data may be accounted for by assuming that the detailed phonetic properties of a word are stored in the mental lexicon together with all other information about that word. Thus, the lexical representation diver may contain the information that the first vowel is relatively short. Episodic models (e.g. Goldinger 1998) assume that the mental lexicon contains such detailed representations for all tokens of all words that a speaker has ever encountered (such representations are called exemplars). These models can easily explain the processing effects of voice characteristics: if a lexicon contains a word token with the characteristics of a given speaker, the mapping of a new token of that word produced by that same speaker with the exemplars in the mental lexicon is easier than if the mental lexicon does not already contain a token by that speaker. Episodic models are especially popular in psycholinguistics. So far, two purely episodic models have been developed and computationally implemented for speech processing: Johnson’s (1997) XMOD and Goldinger’s (1998) MINERVA. The XMOD model is based on the Lexical Access from Spectra (LAFS) model developed by Klatt (1979), and assumes that the incoming speech signal is transformed into a sequence of spectra. MINERVA was originally developed by Hintzman (1986) and applied to speech by Goldinger. Both XMOD and MINERVA assume that during the recognition process, exemplars respond to an acoustic input in proportion to their similarities to this input, and that their activations spread to the abstract word nodes (XMOD) or to the working memory (MINERVA), which enables recognition. In addition to these purely episodic models, several hybrid models have been formulated, which assume both abstract lexical representations (strings of phonemes or features) and exemplars. These models can account for all experimental evidence supporting abstract lexical representations (including categorical perception, e.g. Liberman et al. 1957) and for the role of fine phonetic detail in speech processing. In addition, they can account for the recent finding that speaker characteristics affect speech processing only if for some reason processing is slow. McLennan and Luce (2005) as well as Mattys and Liss (2008) showed that tokens produced by the same voice are recognized more quickly than tokens produced by different voices only if the experimental task produces delayed responses (e.g. a shadowing task with a long set response time, or a lexical decision experiment that is difficult because of the many word-like pseudowords). 
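
The computational core that these episodic models share can be illustrated with a miniature MINERVA-style memory (after Hintzman 1986, heavily simplified; the feature coding below is invented): every experienced token is stored as a trace, a probe activates each trace in proportion to a power of its similarity, and the summed activations form the echo whose intensity drives recognition.

    import numpy as np

    class EpisodicMemory:
        # Toy MINERVA-style exemplar memory; traces and probes are
        # vectors with entries in {-1, 0, +1}.
        def __init__(self):
            self.traces = []

        def store(self, trace):
            self.traces.append(np.asarray(trace, dtype=float))

        def echo_intensity(self, probe):
            probe = np.asarray(probe, dtype=float)
            T = np.vstack(self.traces)
            sims = T @ probe / len(probe)  # similarity of every trace to the probe
            acts = sims ** 3               # cubing lets close traces dominate
            return acts.sum()              # summed activation ~ familiarity

    # First four features encode the word, the last two the speaker's voice.
    mem = EpisodicMemory()
    mem.store([+1, -1, +1, +1, +1, -1])    # word A, heard from speaker 1
    mem.store([+1, -1, +1, +1, +1, -1])    # word A, speaker 1 again
    mem.store([-1, +1, -1, +1, -1, +1])    # word B, speaker 2

    same_voice = [+1, -1, +1, +1, +1, -1]  # word A probe, speaker 1
    new_voice = [+1, -1, +1, +1, -1, +1]   # word A probe, speaker 2
    print(mem.echo_intensity(same_voice), mem.echo_intensity(new_voice))

The same word probed in the stored voice produces a markedly stronger echo than in a new voice, the kind of voice-specificity effect the experiments above report; hybrid models add abstract representations on top of such a store.
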
An important hybrid model for speech production is proposed by Pierrehumbert (2002). She assumes that speech production involves the activation of abstract representations, the application of abstract phonological rules (e.g. Prosodic Final
Lengthening), and the activation of exemplar clouds of phonological units (e.g. phonemes and phoneme sequences). Two hybrid models for word recognition are Goldinger’s (2007) Complementary Learning System and the model that McLennan et al. (2003) developed on the basis of the Adaptive Resonance Theory (Grossberg and Stone 1986). Both models assume that the incoming signal is first analyzed into abstract phonological units, which are matched with the abstract representations in the lexicon, and only then is the signal matched with the stored exemplars. Another hybrid model for word recognition is PolySP (Polysystemic Speech Perception), developed by Hawkins and Smith (Hawkins and Smith 2001; Hawkins 2003). This model assumes that a memory trace contains not only acoustic information, but also multi-medial context, for instance, visual information about the speaker’s gestures. In addition, the model assumes that the analysis of an acoustic input into its linguistic units (phonemes, etc.) may precede (and contribute to), coincide with, or follow word recognition, or may not take place at all, depending on the circumstances. In conclusion, experimental evidence suggests that gradient acoustic characteristics play a role in speech processing. More research is necessary showing which types of acoustic characteristics are relevant, how this gradient information is accessed under which conditions, and how the role of this type of information should be accounted for in speech production and comprehension models.

2.8 Conclusion

Gradience appears to be a much more important characteristic of speech sounds than is traditionally assumed. Place and voice assimilation, segment deletion, and final devoicing often result in sounds showing incomplete neutralization, i.e. they result in sounds that contain characteristics of more than one phoneme or that are only partly absent. Since generative grammar assumes that gradience is a characteristic of the phonetic component, these data suggest that within this theory many processes that have always been classified as phonological actually belong to the phonetic component. Particularly in the case of final devoicing, this reclassification has consequences for the classification of other speech processes as well. Alternative theories have been developed, which assume that the phonological primitives are articulatory gestures or that lexical representations reflect the gradient nature of speech sounds. These theories are supported by data showing that fine phonetic detail affects speech processing.

3 Productive sound patterns

3.1 Introduction

Gradience does not only play a role in the discussion of the phonological and phonetic components and of the nature of lexical representations, but also in the theoretical discussion of the nature of productive (morpho)phonological processes. Within traditional generative phonology, a productive process applies always and to all inputs that satisfy its structural description. Productive processes are thus categorical in nature. Recent research suggests, however, that some productive processes show gradience. The following two subsections discuss evidence for
gradient phonological processes, their implications for generative phonology, and alternative theories that account for the gradient data.

3.2 Phonotactic constraints

The first type of phonological process whose categorical nature has been seriously questioned is the phonotactic generalization. Within traditional generative phonology, all illegal sequences are considered equally illegal, all legal sequences equally legal, and there is no gradience in legality. If this assumption is correct, differences in the frequencies of occurrence of phonemes and phoneme sequences are coincidental. Pierrehumbert (1994) studied the frequencies of consonants and consonant clusters at the beginning of words, at the end of words (excluding the phonological appendix), and in syllable onset and coda positions within morpheme-internal consonant clusters (e.g. the frequency of [n] in words like vanquish, where it is in syllable coda position within a consonant cluster, and the frequency of [st] in words like lobster, where it is in syllable onset position in a consonant cluster) in an American English dictionary. If the consonants were randomly distributed over the positions, the frequencies of a given consonant (cluster) in the different positions should be unrelated. This appeared not to be the case: the frequency of a morpheme-internal cluster is highly correlated with the frequency of its first part (i.e. the consonants in coda position) in word-final position and with the frequency of its second part (i.e. the consonants in onset position) in word-initial position. Phonemes and phoneme sequences thus differ structurally in their frequencies in a language.

Crucially, language users reflect these frequencies in their well-formedness judgments of nonce words and parts of words (chapter 86: morpheme structure constraints). Speakers typically judge high-frequency rhymes as “phonologically” better than low-frequency rhymes (Treiman et al. 2000), phonotactically legal nonce words as better if they contain phoneme sequences of high frequency (e.g. Vitevitch et al. 1997; Frisch et al. 2000), and nasal–obstruent clusters as better if these clusters are more frequent (Hay et al. 2004). Thus, blick is rated as a good English word, bnick as an impossible word, and bwick is rated in between. Importantly, these gradient well-formedness judgments are obtained both if participants are allowed to provide gradient responses and if they have to provide categorical judgments, with the judgments being averaged over participants (Frisch et al. 2000). This strongly suggests that phonotactic constraints are gradient rather than categorical.

Language users’ judgments of a nonce word are also affected by the phonological distance of this word from existing words (chapter 87: neighborhood effects). Thus, participants rate a nonce word as more well formed if it differs in fewer phonemes from an existing word (Greenberg and Jenkins 1964; Ohala and Ohala 1986). In addition, their well-formedness judgments are related to the size of a word’s phonological neighborhood (Bailey and Hahn 2001; Hammond 2004), which is typically defined as the number of existing words that can be changed into that word by the substitution, addition, or deletion of a single phoneme. Importantly, the effect of the word’s phonological neighborhood is independent of the effects of the frequencies of the word’s constituents (i.e. the effect is also present if words with small and larger neighborhoods are matched in the frequencies of their constituents). This shows again that well-formedness judgments are not categorical (i.e. it is not the case that a word is either completely well formed or completely ill formed). Rather, these judgments are gradient between completely well formed and completely ill formed.

Importantly, the measures affecting well-formedness judgments also play a role in other (psycho-)linguistic tasks. The frequencies of the phonemes and phoneme sequences in a word have been shown to affect speech production, recognition, and learning. For instance, participants are better at repeating nonce words made up of high-frequency rather than low-frequency phoneme sequences (Vitevitch et al. 1997) and at transcribing such words orthographically (Hay et al. 2004). Participants tend to interpret ambiguous fricatives as the most probable ones given the preceding and following segments (Pitt and McQueen 1998). Nine-month-old infants prefer to listen to words consisting of high-frequency rather than low-frequency phoneme sequences (Jusczyk et al. 1994). Furthermore, when both eight-month-old infants and adults are presented with continuous speech from a non-existing (artificial) language, they extract the words of this language on the assumption that frequent phoneme sequences form (parts of) words, while the less frequent ones span word boundaries (Saffran et al. 1996a; Saffran et al. 1996b). Similarly, speech production and comprehension are affected by a word’s phonological neighborhood. Thus, participants recognize words with large neighborhoods more slowly in auditory lexical decision (e.g. Luce and Pisoni 1998) and produce them with more expanded vowel spaces (Munson and Solomon 2004), while pre-school-aged children produce such words more quickly and with fewer errors in picture-naming tasks (Arnold et al. 2005).

Several generative linguists have assumed that the gradience of well-formedness judgments may be merely a task effect, resulting from performance factors (for a discussion, see Schütze 2005). This account is in line with the finding that the variables affecting well-formedness ratings also play roles in speech production, perception, and learning, which are certainly modulated by performance factors. In addition, there is a continuum of accounts which differ in their assumptions about the contributions of the phonological component and the mental lexicon. The models at one end of the continuum assume that the gradience of well-formedness judgments results from the gradient nature of the phonological component itself. This component would be gradient due to the probabilistic nature of its constraints or rules. For instance, Hammond (2004) frames his account of gradient well-formedness judgments within Probabilistic Optimality Theory, which is based on Stochastic Optimality Theory, developed by Boersma (1998). The idea is that the ranking of constraints is variable, and that a given (markedness or faithfulness) constraint outranks some other constraint with a certain probability. If this probability is smaller than 1, the phonological component shows variation, sometimes favoring one form and sometimes another, which results in gradient well-formedness rankings. The probability of a given ranking (and consequently the judgment of a given form) may be co-determined by the frequencies of phoneme sequences and by the exact contents of the mental lexicon. Models at the other end of the continuum assume that well-formedness judgments for a given word result only from the comparison of that word with all words in the mental lexicon and their constituents. The visual or auditory presentation of a word leads to the activation of all (phonologically) similar words in the lexicon and their constituents, and a higher total lexical activation leads to a higher well-formedness rating. In these analogical models, there is thus no
role for an abstract phonological component with hardwired phonological constraints or rules (e.g. Bailey and Hahn 2001). Models positioned between the two ends typically assume that the effects of constituent frequencies result from phonotactic knowledge, and the effects of phonological neighborhood from lexical knowledge. Phonotactic knowledge is permanently stored in the phonological component, while lexical knowledge is deduced from the mental lexicon if necessary (e.g. Bailey and Hahn 2001; Albright 2009).

In summary, the evidence for gradience in well-formedness judgments is undisputed. Detailed research is necessary in different domains of phonology to establish the best theoretical account.
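
To make the lexical side of this continuum concrete, the following sketch computes both a neighborhood count and a similarity-weighted lexical activation score for a nonce form. It is a minimal illustration of the shared core idea only, not Bailey and Hahn's (2001) actual model: the toy lexicon, the exponential similarity function, and the decay parameter are all invented for expository purposes.

import math

# Levenshtein distance over phoneme strings: the number of single-phoneme
# substitutions, insertions, or deletions separating two forms.
def edit_distance(a, b):
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(a)][len(b)]

LEXICON = ["blik", "klik", "slik", "stik", "prik"]  # toy phoneme strings

# Neighborhood density as defined in the text: the number of existing
# words one phoneme substitution, addition, or deletion away.
def neighborhood_density(word):
    return sum(1 for w in LEXICON if edit_distance(word, w) == 1)

# Analogical alternative: every lexical item contributes activation,
# with more similar words contributing more.
def lexical_activation(word, decay=1.0):
    return sum(math.exp(-decay * edit_distance(word, w)) for w in LEXICON)

print(neighborhood_density("brik"), round(lexical_activation("brik"), 2))
print(neighborhood_density("bnik"), round(lexical_activation("bnik"), 2))

On this kind of account, the higher total activation of brik-like forms relative to bnik-like forms translates directly into a higher predicted well-formedness rating, with no categorical cut-off anywhere in the computation.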

3.3  Allomorphy

A second type of productive phonological process that appears gradient comprises the processes involved in morphological processing. These morphophonological processes select affixes on the basis of the phonological properties of the words’ stems (chapter 99: phonologically conditioned allomorph selection). For instance, Dutch regular past tense forms consist of a verbal stem and the suffix -te or -de. According to the traditional literature (which follows Dutch orthography), the correct allomorph is -te if the verbal stem ends in an underlyingly voiceless obstruent (e.g. sta/p+t/e ‘stepped’); otherwise it is -de (e.g. kra/b+d/e ‘scratched’).

It has been shown recently that at least some of these apparently perfectly categorical generalizations do not do justice to the full data. One example is the above-mentioned regular past tense formation in Dutch. Ernestus and Baayen (2004) show that the description of the selection of the past tense allomorph given in the literature is too simplistic. Speakers tend to choose the non-standard allomorph for verbal stems that are special, in that the underlying voice specification of their final obstruent is unexpected given the other stems ending in the same type of final rhyme in the lexicon. For instance, speakers often choose the non-standard allomorph for kra/b/ (creating kra/b+t/e), which is one of the few Dutch verb stems ending in a short vowel and a voiced (instead of voiceless) bilabial stop. The pattern “short vowel–underlyingly voiceless bilabial stop” is much more common (e.g. sto/p/ ‘stop’, kla/p/ ‘bang’, me/p/ ‘slap’, ni/p/ ‘sip’) than the pattern “short vowel–underlyingly voiced bilabial stop,” and speakers tend to add the allomorph that is correct for the majority of verbs ending in a short vowel and a bilabial stop to the minority of verbs for which it is incorrect (i.e. verbs ending in a short vowel and an underlyingly voiced bilabial stop).

These findings can easily be incorporated in all types of theoretical accounts, since the only adaptation necessary is that the broad generalizations are replaced or supplemented by generalizations that are more specific to the precise phonological properties of the words. Apparently, Dutch requires a generalization stating that stems ending in short vowels and bilabial stops tend to select -te. Importantly, however, the facts are more complex. First, Ernestus and Baayen (2004) observe that, if participants select the standard allomorph, they do so more quickly for verbs following the majority patterns than for exceptional verbs (i.e. they produce forms of the type stapte more quickly than forms of the type krabde). Second, Ernestus and Baayen (2003, 2004; see also Ernestus 2006) find that speakers show stochastic behavior: they often do not agree with each other,
and the same speaker may choose -te for some verbs and -de for other verbs of the same type. Similar results have been found, among others, for past tense formation in English (Albright and Hayes 2003), the choice of the English indefinite article (a vs. an; Skousen 1989), and vowel harmony in Hungarian (Hayes and Londe 2006). Apparently, the morphophonological processes that have to replace or supplement the traditional broad generalizations are not simple categorical rules that apply whenever their structural description is met. The processes are gradient in nature.

Speakers’ probabilistic behavior has been accounted for in the two types of approaches (forming a continuum) that also explain the gradience of well-formedness ratings (see above). The first approach holds that constraints or rules are probabilistic in nature. Thus, in Stochastic Optimality Theory (Boersma 1998), constraint rankings are stochastic, and in the rule-based account proposed by Albright and Hayes (2003) rules differ in their confidence intervals. Both accounts assume that the probability of a constraint ranking or rule (and thus of a given form) is determined by the exact contents of the mental lexicon. While this approach can account well for the observed probabilistic effects, additional assumptions are necessary to explain why speakers are slower in selecting the standard allomorph if it receives less lexical support than the other allomorph (for a discussion, see Ernestus 2006).

The second approach to speakers’ stochastic behavior assumes that, when speakers select an allomorph for a word, they check all words in their lexicons online. The probability that they select a given allomorph is proportional to its support from the words in the lexicon, with words that are more similar to the target word being more influential. If the target word itself is in the lexicon as well and supports a different allomorph from the one receiving the greatest lexical support from the other words, this may result in severe competition between the two allomorphs, which may lead to the selection of the non-standard allomorph and longer response latencies (Ernestus and Baayen 2004).

In conclusion, phonologically driven allomorphy also strongly suggests that gradience is an important characteristic of phonology. The generalizations formulated in the generative literature appear too coarse-grained, given that speakers show probabilistic behavior. Several models can account for the observations obtained so far. More data are necessary to tease the different accounts apart.
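
The probabilistic-ranking idea can be made concrete with a small simulation. The sketch below implements only the evaluation-noise mechanism of Stochastic Optimality Theory (Boersma 1998): each constraint has a ranking value on a continuous scale, Gaussian noise is added at every evaluation, and the sampled values determine the ranking. The constraint names and ranking values are invented for illustration and are not an analysis of the Dutch facts.

import random

# Hypothetical ranking values on a continuous ranking scale.
RANKING_VALUES = {"Constraint-A": 100.0, "Constraint-B": 98.0}

def sample_ranking(values, noise_sd=2.0):
    """Add evaluation noise to each ranking value and order the
    constraints by the resulting sampled values."""
    sampled = {c: v + random.gauss(0.0, noise_sd) for c, v in values.items()}
    return sorted(sampled, key=sampled.get, reverse=True)

# Because the two ranking values are close, either order is sampled with
# some probability, so the grammar sometimes selects one allomorph and
# sometimes the other, as with -te and -de above.
trials = 10000
p = sum(sample_ranking(RANKING_VALUES)[0] == "Constraint-A"
        for _ in range(trials)) / trials
print(f"P(Constraint-A >> Constraint-B) = {p:.2f}")

A learner can in principle shift such ranking values in response to lexical statistics, which is one way the frequency of patterns in the lexicon could come to co-determine the probability of a given output, as the text describes.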

4  Conclusion

In the early days of generative grammar, phonology was assumed to be completely categorical in nature. The present chapter has provided a summary of different types of corpus-based and experimental studies which strongly suggest that many processes traditionally classified as phonological are in fact gradient in nature. Sounds may contain characteristics of different categories, and speakers may show probabilistic behavior. These data have given rise to modifications of traditional generative phonology and to the development of new theories, including theories assuming different types of phonological primitives and phonological representations, and theories challenging the role of abstract generalizations. Further research is necessary to obtain a more detailed view of the role of gradience in phonology and to tease different theoretical accounts apart. Until then, we have to conclude that gradience is an important challenge for phonology.


REFERENCES

Albright, Adam. 2009. Feature-based generalisation as a source of gradient acceptability. Phonology 26. 9–41.
Albright, Adam & Bruce Hayes. 2003. Rules vs. analogy in English past tenses: A computational/experimental study. Cognition 90. 119–161.
Alegre, Maria & Peter Gordon. 1999. Frequency effects and the representational status of regular inflections. Journal of Memory and Language 40. 41–61.
Arnold, Hayley S., Edward G. Conture & Ralph N. Ohde. 2005. Phonological neighborhood density in the picture naming of young children who stutter: Preliminary study. Journal of Fluency Disorders 30. 125–148.
Baayen, R. Harald, Ton Dijkstra & Robert Schreuder. 1997. Singulars and plurals in Dutch: Evidence for a parallel dual route model. Journal of Memory and Language 36. 94–117.
Bailey, Todd M. & Ulrike Hahn. 2001. Determinants of wordlikeness: Phonotactics or lexical neighborhoods? Journal of Memory and Language 44. 568–591.
Barry, Martin C. 1992. Palatalisation, assimilation and gestural weakening in connected speech. Speech Communication 11. 393–400.
Berg, Rob J. H. van den. 1988. The perception of voicing in Dutch two-obstruent sequences. Enschede: Sneldruk Enschede.
Boersma, Paul. 1998. Functional phonology: Formalizing the interactions between articulatory and perceptual drives. The Hague: Holland Academic Graphics.
Booij, Geert. 1995. The phonology of Dutch. Oxford: Clarendon Press.
Browman, Catherine P. & Louis Goldstein. 1986. Towards an articulatory phonology. Phonology Yearbook 3. 219–252.
Browman, Catherine P. & Louis Goldstein. 1990. Tiers in articulatory phonology, with some implications for casual speech. In Kingston & Beckman (1990), 341–376.
Browman, Catherine P. & Louis Goldstein. 1992. Articulatory phonology: An overview. Phonetica 49. 155–180.
Bürki, Audrey, Mirjam Ernestus & Ulrich Frauenfelder. 2010. Is there only one fenêtre in the production lexicon? On-line evidence about the nature of phonological representations. Journal of Memory and Language 62. 421–437.
Byrd, Dani & Susie Choi. 2010. At the juncture of prosody, phonology, and phonetics: The interaction of phrasal and syllable structure in shaping the timing of consonant gestures. In Cécile Fougeron, Barbara Kühnert, Mariapaola D’Imperio & Nathalie Vallée (eds.) Laboratory phonology 10, 31–59. Berlin & New York: Mouton de Gruyter.
Charles-Luce, Jan. 1985. Word-final devoicing in German: Effects of phonetic and sentential contexts. Journal of Phonetics 13. 309–324.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Cohn, Abigail C. 1993. Nasalisation in English: Phonology or phonetics. Phonology 10. 43–81.
Cole, Ronald A., Max Coltheart & Fran Allard. 1974. Memory of a speaker’s voice: Reaction time to same- or different-voiced letters. Quarterly Journal of Experimental Psychology 26. 1–7.
Davidson, Lisa. 2006. Schwa elision in fast speech: Segmental deletion or gestural overlap. Phonetica 63. 79–112.
Davis, Matthew H., William D. Marslen-Wilson & M. Gareth Gaskell. 2002. Leading up the lexical garden path: Segmentation and ambiguity in spoken word recognition. Journal of Experimental Psychology: Human Perception and Performance 28. 218–244.
Dinnsen, Daniel A. & Jan Charles-Luce. 1984. Phonological neutralization, phonetic implementation and individual differences. Journal of Phonetics 12. 49–60.


Ellis, Lucy & W. J. Hardcastle. 2002. Categorical and gradient properties of assimilation in alveolar to velar sequences: Evidence from EPG and EMA data. Journal of Phonetics 30. 373–396.
Ernestus, Mirjam. 2000. Voice assimilation and segment reduction in casual Dutch: A corpus-based study of the phonology–phonetics interface. Utrecht: LOT.
Ernestus, Mirjam. 2006. Statistically gradient generalizations in phonology. The Linguistic Review 23. 217–234.
Ernestus, Mirjam & R. Harald Baayen. 2003. Predicting the unpredictable: Interpreting neutralized segments in Dutch. Language 79. 5–38.
Ernestus, Mirjam & R. Harald Baayen. 2004. Analogical effects in regular past tense production in Dutch. Linguistics 42. 873–903.
Ernestus, Mirjam & R. Harald Baayen. 2006. The functionality of incomplete neutralization in Dutch: The case of past-tense formation. In Louis M. Goldstein, Douglas Whalen & Catherine T. Best (eds.) Laboratory phonology 8, 27–49. Berlin & New York: Mouton de Gruyter.
Ernestus, Mirjam & R. Harald Baayen. 2007. Intraparadigmatic effects on the perception of voice. In van de Weijer & van der Torre (2007), 153–172.
Ernestus, Mirjam, Mybeth Lahey, Femke Verhees & R. Harald Baayen. 2006. Lexical frequency and voice assimilation. Journal of the Acoustical Society of America 120. 1040–1051.
Farnetani, Edda & M. Grazia Busà. 1994. Italian clusters in continuous speech. Proceedings of the 3rd International Conference on Spoken Language Processing, 359–361. Yokohama: Acoustical Society of Japan.
Fourakis, Marios & Gregory K. Iverson. 1984. On the “incomplete neutralization” of German final obstruents. Phonetica 41. 140–149.
Frisch, Stefan A., Nathan R. Large & David B. Pisoni. 2000. Perception of wordlikeness: Effects of segment probability and length on the processing of nonwords. Journal of Memory and Language 42. 481–496.
Goldinger, Stephen D. 1998. Echoes of echoes? An episodic theory of lexical access. Psychological Review 105. 251–279.
Goldinger, Stephen D. 2007. A complementary-systems approach to abstract and episodic speech perception. In Jürgen Trouvain & William J. Barry (eds.) Proceedings of the 16th International Congress of Phonetic Sciences, 49–54. Saarbrücken: Saarland University.
Greenberg, Joseph H. & James J. Jenkins. 1964. Studies in the psychological correlates of the sound system of American English. Word 20. 157–177.
Grossberg, Stephen & Gregory Stone. 1986. Neural dynamics of word recognition and recall: Attentional priming, learning, and resonance. Psychological Review 93. 46–74.
Hammond, Michael. 2004. Gradience, phonotactics, and the lexicon in English phonology. International Journal of English Studies 4. 1–24.
Hardcastle, W. J. 1972. The use of electropalatography in phonetic research. Phonetica 25. 197–215.
Hawkins, Sarah. 2003. Roles and representations of systematic fine phonetic detail in speech understanding. Journal of Phonetics 31. 373–405.
Hawkins, Sarah & Rachel Smith. 2001. Polysp: A polysystemic, phonetically rich approach to speech understanding. Rivista di Linguistica 13. 99–188.
Hay, Jennifer, Janet B. Pierrehumbert & Mary E. Beckman. 2004. Speech perception, well-formedness and the statistics of the lexicon. In John Local, Richard Ogden & Rosalind Temple (eds.) Phonetic interpretation: Papers in laboratory phonology VI, 58–74. Cambridge: Cambridge University Press.
Hayes, Bruce & Zsuzsa Cziráky Londe. 2006. Stochastic phonological knowledge: The case of Hungarian vowel harmony. Phonology 23. 59–104.


Heinrich, Antje & Sarah Hawkins. 2009. The effect of r-resonance information on intelligibility. In Uther et al. (2009), 804–807.
Hintzman, Douglas L. 1986. “Schema abstraction” in a multiple-trace memory model. Psychological Review 93. 411–428.
Jansen, Wouter. 2007. Dutch regressive voicing assimilation as a “low level phonetic process”: Acoustic evidence. In van de Weijer & van der Torre (2007), 125–151.
Johnson, Keith. 1997. Speech perception without speaker normalization: An exemplar model. In Keith Johnson & John W. Mullennix (eds.) Talker variability in speech processing, 145–165. San Diego: Academic Press.
Jusczyk, Peter W., Paul A. Luce & Jan Charles-Luce. 1994. Infants’ sensitivity to phonotactic patterns in the native language. Journal of Memory and Language 33. 630–645.
Keating, Patricia. 1985. Universal phonetics and the organization of grammars. In Victoria A. Fromkin (ed.) Phonetic linguistics: Essays in honor of Peter Ladefoged, 115–132. Orlando: Academic Press.
Keating, Patricia. 1988. Underspecification in phonetics. Phonology 5. 275–292.
Keating, Patricia. 1990a. Phonetic representations in a generative grammar. Journal of Phonetics 18. 321–334.
Keating, Patricia. 1990b. The window model of coarticulation: Articulatory evidence. In Kingston & Beckman (1990), 451–470.
Kemps, Rachèl, Lee H. Wurm, Mirjam Ernestus, Robert Schreuder & R. Harald Baayen. 2005. Prosodic cues for morphological complexity: Comparatives and agent nouns in Dutch and English. Language and Cognitive Processes 20. 43–73.
Kenstowicz, Michael & Charles W. Kisseberth. 1979. Generative phonology: Description and theory. New York: Academic Press.
Kingston, John & Mary E. Beckman (eds.) 1990. Papers in laboratory phonology I: Between the grammar and physics of speech. Cambridge: Cambridge University Press.
Kingston, John & Randy L. Diehl. 1994. Phonetic knowledge. Language 70. 419–454.
Klatt, Dennis H. 1979. Speech perception: A model of acoustic–phonetic analysis and lexical access. Journal of Phonetics 7. 279–312.
Kochetov, Alexei & Marianne Pouplier. 2008. Phonetic variability and grammatical knowledge: An articulatory study of Korean place assimilation. Phonology 25. 399–431.
Kuzla, Claudia, Taehong Cho & Mirjam Ernestus. 2007. Prosodic strengthening of German fricatives in duration and assimilatory devoicing. Journal of Phonetics 35. 301–320.
Levelt, Willem J. M. 1989. Speaking: From intention to articulation. Cambridge, MA: MIT Press.
Liberman, Alvin, Katherine S. Harris, Howard S. Hoffman & Belver C. Griffith. 1957. The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology 54. 358–368.
Luce, Paul A. & David B. Pisoni. 1998. Recognizing spoken words: The neighborhood activation model. Ear and Hearing 19. 1–36.
Manuel, Sharon Y. 1992. Recovery of “deleted” schwa. In Olle Engstrand & Catharina Kylander (eds.) Perilus XIV: Papers from the Symposium on Current Phonetic Research Paradigms for Speech Motor Control, 115–118. Stockholm: University of Stockholm.
Mattys, Sven L. & Julie M. Liss. 2008. On building models of spoken-word recognition: When there is as much to learn from natural “oddities” as artificial normality. Perception and Psychophysics 70. 1235–1242.
McLennan, Conor T. & Paul A. Luce. 2005. Examining the time course of indexical specificity effects in spoken word recognition. Journal of Experimental Psychology: Learning, Memory and Cognition 31. 306–321.
McLennan, Conor T., Paul A. Luce & Jan Charles-Luce. 2003. Representation of lexical form. Journal of Experimental Psychology: Learning, Memory and Cognition 29. 539–553.
Munson, Benjamin & Nancy P. Solomon. 2004. The effect of phonological neighborhood density on vowel articulation. Journal of Speech, Language, and Hearing Research 47. 1048–1058.


Nolan, Francis. 1992. The descriptive role of segments: Evidence from assimilation. In Gerard J. Docherty & D. Robert Ladd (eds.) Papers in laboratory phonology II: Gesture, segment, prosody, 261–280. Cambridge: Cambridge University Press.
Norris, Dennis. 1994. Shortlist: A connectionist model of continuous speech recognition. Cognition 52. 189–234.
Norris, Dennis & James M. McQueen. 2008. Shortlist B: A Bayesian model of continuous speech recognition. Psychological Review 115. 357–395.
Ohala, John J. & Manjari Ohala. 1986. Testing hypotheses regarding the psychological manifestation of morpheme structure constraints. In John J. Ohala & Jeri Jaeger (eds.) Experimental phonology, 239–252. Orlando: Academic Press.
Oostendorp, Marc van. 2008. Incomplete devoicing in formal phonology. Lingua 118. 1362–1374.
Perkell, Joseph S., Marc H. Cohen, Mario A. Svirsky, Melanie L. Matthies, Iñaki Garabieta & Michael T. T. Jackson. 1992. Electromagnetic midsagittal articulometer systems for transducing speech articulatory movements. Journal of the Acoustical Society of America 92. 3078–3096.
Pierrehumbert, Janet B. 1990. Phonological and phonetic representation. Journal of Phonetics 18. 375–394.
Pierrehumbert, Janet B. 1994. Syllable structure and word structure: A study of triconsonantal clusters in English. In Patricia Keating (ed.) Phonological structure and phonetic form: Papers in laboratory phonology III, 168–188. Cambridge: Cambridge University Press.
Pierrehumbert, Janet B. 2002. Word-specific phonetics. In Carlos Gussenhoven & Natasha Warner (eds.) Laboratory phonology 7, 101–139. Berlin & New York: Mouton de Gruyter.
Pitt, Mark A. & James M. McQueen. 1998. Is compensation for coarticulation mediated by the lexicon? Journal of Memory and Language 39. 347–370.
Port, Robert F. & Penny Crawford. 1989. Incomplete neutralization and pragmatics in German. Journal of Phonetics 17. 257–282.
Port, Robert F. & Michael O’Dell. 1985. Neutralization of syllable-final voicing in German. Journal of Phonetics 13. 455–471.
Russell, Kevin. 2008. Sandhi in Plains Cree. Journal of Phonetics 36. 450–464.
Saffran, Jenny R., Richard N. Aslin & Elissa L. Newport. 1996a. Statistical learning by 8-month-old infants. Science 274. 1926–1928.
Saffran, Jenny R., Elissa L. Newport & Richard N. Aslin. 1996b. Word segmentation: The role of distributional cues. Journal of Memory and Language 35. 606–621.
Saussure, Ferdinand de. 1916. Cours de linguistique générale. Lausanne & Paris: Payot.
Schacter, Daniel L. & Barbara A. Church. 1992. Auditory priming: Implicit and explicit memory for words and voices. Journal of Experimental Psychology: Learning, Memory, and Cognition 18. 915–930.
Schütze, Carson T. 2005. Thinking about what we are asking speakers to do. In Stephan Kepser & Marga Reis (eds.) Linguistic evidence: Empirical, theoretical, and computational perspectives, 457–484. Berlin & New York: Mouton de Gruyter.
Skousen, Royal. 1989. Analogical modeling of language. Dordrecht: Kluwer.
Slowiaczek, Louisa M. & Daniel A. Dinnsen. 1985. On the neutralizing status of Polish word-final devoicing. Journal of Phonetics 13. 325–341.
Torreira, Francisco & Mirjam Ernestus. 2009. Vowel elision in connected French: The case of vowel /e/ in the word c’était. In Uther et al. (2009), 448–451.
Treiman, Rebecca, Brett Kessler, Stephanie Knewasser, Ruth Tincoff & Margo Bowman. 2000. English speakers’ sensitivity to phonotactic patterns. In Michael B. Broe & Janet B. Pierrehumbert (eds.) Papers in laboratory phonology V: Acquisition and the lexicon, 269–282. Cambridge: Cambridge University Press.
Uther, Maria, Roger Moore & Stephen Cox (eds.) 2009. Proceedings of the 10th Annual Conference of the International Speech Communication Association (Interspeech 2009). Brighton: Causal Productions.


Vitevitch, Michael S., Paul A. Luce, Jan Charles-Luce & David Kemmerer. 1997. Phonotactics and syllable stress: Implications for the processing of spoken nonsense words. Language and Speech 40. 47–62.
Warner, Natasha, Allard Jongman, Joan A. Sereno & Rachèl Kemps. 2004. Incomplete neutralization and other sub-phonemic durational differences in production and perception: Evidence from Dutch. Journal of Phonetics 32. 251–276.
Warner, Natasha, Erin Good, Allard Jongman & Joan A. Sereno. 2006. Orthographic vs. morphological incomplete neutralization effects. Journal of Phonetics 34. 285–293.
Weijer, Jeroen van de & Eric Jan van der Torre (eds.) 2007. Voicing in Dutch: (De)voicing – Phonology, phonetics, and psycholinguistics. Amsterdam & Philadelphia: John Benjamins.
Zsiga, Elizabeth C. 1995. An acoustic and electropalatographic study of lexical and postlexical palatalization in American English. In Bruce Connell & Amalia Arvaniti (eds.) Papers in laboratory phonology IV: Phonology and phonetic evidence, 282–302. Cambridge: Cambridge University Press.
Zsiga, Elizabeth C. 1997. Features, gestures, and Igbo vowels: An approach to the phonology–phonetics interface. Language 73. 227–274.

90  Frequency Effects

Stefan A. Frisch

1  Introduction

This chapter provides a review of evidence that occurrence frequency has an influence on phonological patterns. The examination of quantitative or statistical patterns in phonology, and in grammar more generally, pre-dates the development of modern linguistic theory. For example, Zipf (1965) examined statistical properties of texts and noted a variety of effects that still have relevance in current theoretical discussions, such as the tendency for reduction of high-frequency words. Bolinger (1961) and Herdan (1962) discussed gradience and statistical distribution as a natural middle ground between the grammatical extremes of the acceptable and the unacceptable. However, given the limitations both in the understanding of language structure and in the computational resources for conducting quantitative studies, this type of research did not gain much traction in the field of linguistics more generally. With lexical and usage corpora now widely available, and armed with the descriptive and theoretical advances of modern linguistic theory, a variety of authors are now arguing for the influence of frequency on phonological patterns in synchronic phonology and morphophonology, phonological acquisition, and diachronic phonology.

In many ways, phonology is the ideal domain in which to study the potential role of frequency effects in grammar (Herdan 1962). The set of basic phonological units (phonemes or features) is relatively limited, these units are routinely combined in a reasonably static set of fixed forms (the lexicon), and productive combinations of these units in morphology are also limited in their variety of combination by that same fixed set of units. Morphophonological changes are triggered by phonological environments defined by features (chapter 17: distinctive features) or phonemes (chapter 11: the phoneme), and the results of the changes are within the same set of featural or phonemic varieties.

1.1  Definition of phonology

In this chapter, I take phonology to be any aspect of language sound structure that can vary systematically between languages or dialects. This would therefore include most of what might traditionally be called phonetics. In the debate over
the phonetics–phonology interface, or lack thereof, it has been shown by the laboratory phonology enterprise over the last 20 years that phonetic patterns often vary systematically between languages (Pierrehumbert et al. 2000). Articulatory and co-articulatory patterns are not fully determined by physical limitations, but are instead under some degree of (usually unconscious) control in the individual (e.g. Clumeck 1976; Manuel 1990; Beddor et al. 2002). Similarly, phonetic perception is tuned by language experience, and not wholly determined by auditory physiology (e.g. Werker and Tees 1984; Kuhl et al. 1992; Best and McRoberts 2003). While expanding the domain of phonology to include physically quantitative phonetic effects also expands the potential domain of variability to be dealt with by phonological analysis, this expansion is a necessary step in developing a complete science of language sound structure. While frequency effects can be demonstrated over purely symbolic phonological patterns, expanding the study of frequency effects on phonology to include traditionally phonetic dimensions may help to integrate synchronic phonology with sociolinguistic and diachronic studies of language sound structure, where phonetic dimensions are often relevant.

Similarly, this chapter will touch on various aspects of morphophonology, where sound patterns vary systematically in morphologically complex words. Demonstrating frequency effects in this domain supports the argument for frequency effects as part of the theory of grammar regardless of the phonetics/phonology issue. In morphophonology, the data are indisputably grammatical in nature, and some aspect of cognitive computation is required. Frequency effects in morphophonology help to show that frequency effects are not merely a residue of performance, memory, general cognition, or diachrony. To the extent that there is a phonological system distinct from other cognitive structures, a variety of evidence has been gathered in support of the use of quantitative frequency information as part of the operating parameters of grammar.

1.2  Definition of frequency

Frequency is the rate of occurrence of a phonological unit, and is unrelated to acoustic frequency. There are, however, many possible frequencies, depending on what is taken to be the domain over which occurrences are counted. In studies of language using corpora of language usage, frequency is usually the frequency of occurrence in the corpus. This type of frequency is referred to as token frequency or usage frequency. In English, for example, the token frequency of the phonemes /ð/ and /v/ is relatively high, due to their presence in frequently used words like the and that, and of and very. Token frequency for words affects phonetic reduction and the resistance of lexical or morphophonological forms to diachronic change (e.g. Bybee 2002), as detailed in later sections.

Abstracting away from repeated usages of a word, phonological patterns can also be examined on the basis of the number of times the pattern is used across different words. This frequency is referred to as type frequency or lexical frequency. The type frequency of the phonemes /ð/ and /v/ in English is relatively low, as they are used in relatively few words. An example of a consonant with a high type frequency is /b/, which is the most common word onset in English. Some consonants, such as /s/ and /t/, have both high token frequency and high type frequency, being used in many words, many of which are common. Other consonants, such as /h/ and particularly /ʒ/, have both low token frequency
and low type frequency, as they are used in few words, most of which are not common. A consonant cluster with a high type frequency is /st/, which is found in many different words. The token frequency of /st/ is also high, as many of these words are commonly used. Type frequency has been shown to influence metalinguistic judgments for novel word forms (also known as non-words, e.g. Frisch et al. 2000), repetition accuracy for non-words (e.g. Vitevitch 2002), and the propensity for phonological or morphophonological generalization for regular or irregular forms (e.g. Bybee 1995; Pierrehumbert 2001; Albright and Hayes 2003).

A few other variants of frequency have been examined in particular cases. Transitional frequency (or transitional probability) is the frequency of one form following another in sequence. Transitional probability for intersyllabic consonant phoneme sequences has been shown to influence the parsing of non-words as simple or morphophonologically complex by adults (e.g. Hay and Baayen 2002), and repetition accuracy for subsequences within non-words in children (e.g. Munson 2001).

Neighborhood density is a commonly used measure of word pattern frequency that combines the concepts of frequency and similarity (see also chapter 87: neighborhood effects). It is typically measured as the number of words that differ from a target word by a single phoneme substitution or a limited number of phoneme substitutions, insertions, or deletions (e.g. Goldinger et al. 1989; Frisch et al. 2000). High neighborhood density has been shown to inhibit word recognition (presumably through competition for lexical access; see Luce and Pisoni 1998), and to facilitate non-word repetition (presumably through activation of frequently used phoneme sequences; Vitevitch 2002). Neighborhood density can be seen as a more specific application of the general concept of analogy between phonological words or phonological forms. Like neighborhood density, analogy combines the ideas of frequency of occurrence and similarity. Presumably the influence that a phonological pattern could have via analogy is determined in some way by the frequency of the phonological pattern and the similarity of the pattern to the target context (e.g. Bybee 1995; Davidson 2006; see also chapter 83: paradigms). It is also conceivable that analogy could have an influence at a variety of phonological levels, from the phonetic to the phonemic, syllabic, lexical, or morphophonological. The frequency of a phonological pattern at any of these levels may be different, and there is no reason to assume a priori that only one frequency or only one measure of frequency could be relevant.

It is the goal of this chapter to demonstrate that more than one frequency is relevant to phonology, that different frequencies are relevant to phonology in different ways, and that different levels of phonological generalization are relevant, also potentially in different ways (Bybee 2007). Overall, the theoretical position is that of the “ladder of abstractions”: phonemic categories are generalizations over phonetic patterns; sub-syllabic and syllabic categories are generalizations over phonemic patterns; lexical and morphophonological categories are generalizations over phonemic, sub-syllabic, and syllabic patterns; and so forth (Pierrehumbert 2003; Beckman and Edwards 2010).
Under this view, higher-level phonological categories emerge as systematic generalizations over lower-level categories, where the lowest-level category is physical/articulatory/acoustic experience with language. This experience is parsed into more general, abstract categories as repeated similar experiences occur. Frequency may play a role in supporting these generalizations. Token frequency creates robust, entrenched, well-defined categories through frequent exposure. Type frequency leads to grammatical
generalization, as categories occur in a variety of contexts, promoting generalization and parsing of units, presumably for cognitive efficiency (Bybee 2007). Note also that this view includes frequency information as part of the grammar, and not just as a tool that can be used to analyze a lexicon or corpus for phonologically relevant patterns (e.g. as in McCarthy 1994 or Hammond 1999; see the introduction in Bod et al. 2003 for discussion).
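
The distinction between these frequency measures is easy to state computationally. The sketch below counts token frequency, type frequency, and transitional probability over a toy corpus of transcribed word tokens; the corpus is invented for illustration, whereas real studies of course use large dictionaries and usage corpora.

from collections import Counter

# Toy corpus of transcribed word tokens (invented for illustration).
corpus = ["ðə", "ðə", "ðə", "ðæt", "stɪl", "stæp", "kæt", "vɒl"]

# Token (usage) frequency: segment occurrences over all word tokens.
token_freq = Counter()
for word in corpus:
    token_freq.update(word)

# Type (lexical) frequency: segment occurrences over distinct word types.
type_freq = Counter()
for word in set(corpus):
    type_freq.update(word)

# /ð/ comes out with a high token frequency relative to its type
# frequency, mirroring the English pattern described above.
print(token_freq["ð"], type_freq["ð"])

# Transitional probability: how often segment b follows segment a,
# relative to all continuations of a in the corpus.
bigrams = Counter((a, b) for word in corpus for a, b in zip(word, word[1:]))

def transitional_prob(a, b):
    total = sum(n for (x, _), n in bigrams.items() if x == a)
    return bigrams[(a, b)] / total if total else 0.0

print(transitional_prob("s", "t"))  # /s/ is always followed by /t/ here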

2  Frequency matching in metalinguistic judgments

Evidence for the relevance of frequency in phonology is found in behavioral experiments in which the frequency or probability of phonological units in stimuli is manipulated or examined, and native-speaker judgments are consulted. It has been found that native speakers are sensitive to phonological frequency in a variety of lexical and morphophonological patterns. In the absence of traditional phonological constraints, it has been shown that baseline phonotactic probability is an influence on well-formedness judgments for novel non-words. This suggests that native speakers have encoded frequency information that is present in language data. Whether this knowledge is properly a part of their grammatical competence, as opposed to extra-linguistic information that is accessed as a part of linguistic performance, is a topic of debate. Together with the data presented in later sections showing that frequency influences the distribution of possible forms in the language, the influence of frequency on native-speaker performance in metalinguistic experiments has been used to argue that frequency information is part of speakers’ knowledge of language sound structure, i.e. that frequency effects are psychologically real (Frisch et al. 2004; Zuraw 2007).

2.1  Novel phonotactic combinations

Frisch et al. (2000) presented recordings of novel non-words to naive undergraduate listeners. These non-words were constructed by combining attested onset and rime constituents into multisyllabic non-words with no obvious phonotactic violations. The onset and rime constituents (chapter 33: syllable-internal structure) used were of relatively high or low frequency of occurrence, creating novel non-words that were relatively high or low in cumulative expected probability, but with the frequency of occurrence of any particular onset or rime constituent balanced across the experiment. Example non-words are given in (1).

(1)  /s>œHp/   /zujehÁs/   /s>rHsenHn/

Frisch et al. (2000) found a significant, but moderate, correlation between cumulative expected probability and native English speaker well-formedness judgments (r ≈ 0.4). Well-formedness judgments were collected in two different experiments (chapter 96: experimental approaches in theoretical phonology). In one experiment, speakers were asked to judge the wordlikeness of the novel non-words on a scale from 1 to 7 (1 = impossible, can’t be a word of English; 2, 3 = unlikely, doesn’t sound much like a word of English; 4 = neutral, sounds somewhat like a
word of English; 5, 6 = likely, sounds like it could be a word of English; 7 = definitely, sounds just like a word of English). In the other experiment, speakers were asked to judge whether a non-word was acceptable or unacceptable. In both cases, there was evidence for gradient acceptability of the novel non-words as a function of phonotactic probability (chapter 89: gradience and categoricality in phonological theory). For acceptability judgments, gradient acceptability was apparent when judgments of acceptability were aggregated across participants. For very low probability non-words, very few speakers judged them to be acceptable. For very high probability non-words, most speakers judged them to be acceptable. For non-words of intermediate probability, some speakers judged them to be acceptable and some did not. For wordlikeness judgments, where individual participants were able to use a gradient scale to make their judgments, the analysis of individual speaker data found significant correlations with non-word expected probability for most participants. These findings are replicated in Frisch and Brea-Spahn (2010), where the non-words were constructed slightly differently, by random combination of onset and rime constituents (screening out any categorical phonotactic violations created by this random process, such as the creation of a geminate across the syllable boundary; chapter 37: geminates).

Bailey and Hahn (2001) presented monosyllabic non-words to British English speakers in two experiments, one orthographic and the other auditory. Their non-words were selected to differ from real English words by either one or two phonemes (e.g. drump and drolf) but otherwise violated no categorical phonotactic constraints. Participants rated the novel items on a 1 to 9 wordlikeness scale. Bailey and Hahn (2001) examined a variety of predictors for the judgments, focusing on probabilistic phonotactics and lexical neighborhood density. They found that each factor provided an independent influence, and thus they argued that both direct lexical information and abstract probabilistic phonotactic information are used in the wordlikeness judgment task. Frisch et al. (2000), in a post hoc examination of their polysyllabic non-word data, found somewhat similar results. While polysyllabic words generally have fewer lexical neighbors, the highest probability non-words did show some evidence for lexical neighborhood effects. However, a recent study by Shademan (2006) failed to replicate the lexical neighborhood effects of Bailey and Hahn (2001). Given that phonotactic probability and lexical neighborhood density are confounded with one another, it is perhaps not surprising that it has been difficult to differentiate the two and demonstrate clear influences of both in metalinguistic experiments.

Frisch et al. (2000) was a replication of a study by Coleman and Pierrehumbert (1997). However, the Coleman and Pierrehumbert (1997) study was less systematic in its construction of non-words, and also included non-words with phonotactic violations in onset consonant clusters. They used an acceptability judgment task, and examined aggregate acceptability across the participants in the study. Coleman and Pierrehumbert (1997) found a similar correlation between non-word expected probability and acceptability. They also found that non-words with phonotactic violations were judged as more or less acceptable depending on the frequency of the other (non-violating) constituents in the word.
In other words, high frequency elsewhere in a novel non-word could mediate the detriment to well-formedness caused by a phonotactic violation. The findings of Coleman and Pierrehumbert (1997) are compatible with models of phonological grammar that use a cumulative or aggregate well-formedness in evaluating the output of the grammar, but they
are difficult to capture in a model of grammar where only grammatical violations or only the greatest grammatical violation is relevant to the output.

Albright (2009) also examined well-formedness judgments for monosyllabic English non-words containing a variety of phonotactically legal and illegal sequences, with variation in the frequency of the phonotactically legal sequences. Albright used a model based on transitional probability between natural classes (feature groupings; chapter 17: distinctive features) in an attempt to create a more linguistically grounded model that might extend frequency patterns in attested sequences to linguistically related unattested sequences. The goal of the model was to differentiate unattested onset clusters like /bw bn bz/ that have been shown to vary in their wordlikeness (chapter 55: onsets). While the Albright model was successful, he also found that the transitional frequencies between natural class generalizations and the transitional frequencies between segments each contribute to predicting participant judgments, and do not overlap completely. This suggests that multiple levels of generalization may be relevant to participant performance in well-formedness judgment tasks.

Frisch and Stearns (2006) examined onset consonant clusters, presenting a variety of CC-initial monosyllabic non-words auditorily to participants (cf. similar non-words presented to child participants by Scholes 1966, reported in Albright 2009). The stimuli included attested English onset clusters of varying frequency, and a few unattested clusters that might be expected to occur in English but do not, based on relatively simple phonological categorization (/sr tl dl hl/). For the attested clusters, cluster frequency was a significant predictor of well-formedness judgments. Judgments of the unattested clusters were surprisingly high, however, and some follow-up investigation has suggested that these clusters were frequently misperceived (e.g. /tl/ heard as /pl/). When orthographic support was provided (in an unpublished replication study), well-formedness judgments for the non-attested clusters were lower, though not necessarily very different from low-frequency attested clusters such as /gw sf dw/, which might be thought of as less consistent with the overall grammar of consonant clusters in English (Hammond 1999).
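
The cumulative scoring idea behind these studies can be sketched as follows: a non-word assembled from attested onset and rime constituents is scored by the log product of its constituents’ probabilities, so that every constituent contributes to the aggregate score. The probability tables below are invented for illustration; the published models estimate constituent probabilities from a dictionary and, in Frisch et al. (2000), also correct for word length.

import math

ONSET_PROB = {"b": 0.08, "st": 0.05, "z": 0.01}    # hypothetical values
RIME_PROB = {"ɪk": 0.04, "æp": 0.03, "ʊʃ": 0.005}  # hypothetical values

# Cumulative (log) expected probability of a non-word, given as a list of
# (onset, rime) pairs: every constituent contributes to the total, so a
# single low-frequency constituent lowers, but does not zero out, the score.
def log_expected_prob(syllables):
    return sum(math.log(ONSET_PROB[o]) + math.log(RIME_PROB[r])
               for o, r in syllables)

print(log_expected_prob([("st", "æp")]))  # higher score: more wordlike
print(log_expected_prob([("z", "ʊʃ")]))   # lower score: less wordlike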

2.2  Frequency matching in morphophonology

It has also been shown that phonological frequency influences participant behavior in metalinguistic tasks that more directly reflect linguistic productivity. For example, Ernestus and Baayen (2003) presented novel verbs to Dutch speakers and asked them to produce past-tense forms. Dutch is one of many languages with a process of word-final devoicing. Dutch verbs contain a variety of examples where stem-final obstruents alternate, as in (2) (see also chapter 80: mergers and neutralization).

(2)  /verseit/     ‘widen (3sg pres)’
     /verseidən/   ‘widen (inf)’

There are also example