On the Architecture of Words. Applications of Meaning Studies 8436267346, 9788436267341


English. Pages: 283 [281]. Year: 2015.


On the Architecture of Words. Applications of Meaning Studies

In light of today’s extensive use of digital communication, this volume focuses on how to understand and manage the various types of linguistically-based products that facilitate the use and extraction of information. Including conceptual and terminological databases, digital dictionaries, thesauri, language corpora, and ontologies, they all contribute to the development and improvement of language industries, such as those devoted to automatic translation, knowledge management, knowledge retrieval, linguistic data analysis, and so on. As the theoretical background underlying these applications is outlined in detail in the earlier chapters of the book, the reader is able to establish the necessary links between the various but related kinds of linguistic –and, in particular, semantic– applications. A general review of several theories and linguistic models that influence the practical application of Meaning studies to the new technologies is also included. This book is aimed at students and researchers of Linguistics, as well as those with a basic knowledge of Linguistics and Semantics who are interested in the on-going development of the handling of meaning and its practical usage.



On the Architecture of Words Applications of Meaning Studies

MARGARITA GODED RAMBAUD ANA IBÁÑEZ MORENO VERONIQUE HOSTE

UNIVERSIDAD NACIONAL DE EDUCACIÓN A DISTANCIA

ON THE ARCHITECTURE OF WORDS. APPLICATIONS OF MEANING STUDIES

Without written authorization from the copyright holders, and subject to the penalties established by law, the total or partial reproduction of this work by any means or procedure, including reprography and computer processing, and the distribution of copies of it by rental or public lending, are strictly prohibited.

© Universidad Nacional de Educación a Distancia, Madrid 2015
www.uned.es/publicaciones
© Margarita Goded Rambaud, Ana Ibáñez Moreno, Veronique Hoste
© Cover and chapter illustrations: Sil Mattens and Peter de Coninck
Electronic ISBN: 978-84-362-7022-8
Digital edition: February 2015

TABLE OF CONTENTS

Introduction to this volume
How to use this book
The authors

Chapter 1. Introduction
1. Objectives
2. Introduction
3. The use of a metalanguage
4. Language dependent and language independent knowledge representation
   4.1. Language dependent applications
   4.2. Language independent applications
   4.3. Conclusions about knowledge representation
5. Different approaches to meaning interpretation
   5.1. Cognitivism and ontologies
   5.2. Predication in linguistics and in ontologies
   5.3. Are conceptual and semantic levels identical?
   5.4. The philosophical perspective
6. Lexical description theories
   6.1. Introduction
   6.2. Structuralism
      6.2.1. European Structuralism
      6.2.2. American Structuralism
   6.3. Functional Models
   6.4. Formal Models
   6.5. Cognitive Models
   6.6. Conclusions
7. Further recommended readings
8. References
9. Keys to exercises


Chapter 2. Words and word boundaries
1. Introduction
2. Interaction between semantics and morphology
3. Setting the grounds: review of basic concepts
4. Words and word boundaries: lexeme, stem, lemma
   4.1. Lemma versus lexeme: on-going discussions
   4.2. Prandi's (2004) view of lexical meanings: lexemes, the lexicon and terminology
5. The levels of language analysis and their units
6. Further recommended readings
7. Some lexicological sources
8. References
9. Keys to exercises

Chapter 3. On the architecture of words
1. Introduction
2. What is the scope of morphology? How to separate down a word
3. Lexical change and word formation processes
   3.1. Introduction
   3.2. Word formation processes
      3.2.1. Compounding
      3.2.2. Inflection
         3.2.2.1. Typological classification of languages
         3.2.2.2. The index of synthesis of a language
      3.2.3. Derivation
      3.2.4. Other word-formation phenomena
      3.2.5. Transparent and opaque word formation phenomena
4. On word grammar: approaches to the study of morphology
   4.1. Introduction
   4.2. Paradigmatic and syntagmatic relationships
   4.3. Some recent proposals for a model of morphology in grammar
5. Further recommended readings
6. References
7. Keys to exercises

Chapter 4. What is lexicography?
1. Objectives


2. Introduction to the writing of dictionaries
3. Meaning and dictionary entries: where meaning abides
   3.1. Lexicography and linguistic theory
   3.2. Lexicology and lexicography
   3.3. Dictionaries, thesauri and glossaries
   3.4. Types of dictionaries
   3.5. Dictionary entries
4. The difference between meaning definition and dictionary definition
5. Dictionary writing and corpus annotation
   5.1. Text encoding and annotation
   5.2. Text annotation formats
   5.3. Types of annotation
   5.4. Parsing
   5.5. Semantic annotation
   5.6. Lemmatization
   5.7. Pragmatic annotation
   5.8. Concordances and collocations
   5.9. The concept of legomenon
6. Further recommended readings
7. References
8. Keys to exercises

Chapter 5. Meaning, knowledge and ontologies
1. Objectives
2. Introduction
   2.1. Background for relevant distinctions
   2.2. What do we mean by concepts?
   2.3. Conceptualization is a basic process for ontologies
3. Different perspectives in ontology definition and description
4. Natural languages as a source of data for ontology building
   4.1. Data bases, dictionaries, terminologies and ontologies
   4.2. Standards
   4.3. Ontologies and related disciplines
5. Types of ontologies from multiple perspectives
6. Ontology applications
7. Ontologies and artificial intelligence
8. Knowledge bases
9. Further recommended readings


10. References
11. Keys to exercises

Chapter 6. Terms and terminological applications
1. Objectives
2. Introduction
3. Defining terms
4. The acquisition of terminology
5. Terminology extraction
6. Storing terms
7. Concluding remarks
8. Further recommended readings
9. References
10. Keys to exercises

Glossary of terms


INTRODUCTION TO THIS VOLUME

This is an introductory book dealing with practical uses of meaning studies, with special focus on ontologies, dictionaries and terminological applications.

The book can be divided into two parts. The first, comprising chapters 1, 2 and 3, is more theoretically oriented; its objective is to provide the reader with the background knowledge required to understand the most important meaning applications. Special emphasis is given to those aspects considered most influential and useful for developing linguistic applications such as the construction of dictionaries, ontologies and terminologies. For this reason, only semantic and morphological approaches are dealt with in this first part of the volume. The second part, comprising chapters 4, 5 and 6, gives information on specific applications of meaning studies.

As regards part 1, the first chapter introduces some basic facts about the differences, similarities and overlaps among words and concepts, and their respective relations with the codification of meaning. The question of how ontological material is represented in order to be implemented in computational linguistic applications is approached, including the differences and similarities between general linguistic representations and specific representations for linguistic applications. A selection of lexical description theories is also presented, so as to give readers an overall view of the most influential theoretical approaches to the analysis of the lexicon.

Because dictionaries are made up of words, we need to understand how words are formed. Thus, chapter 2 develops some basic notions about the different levels of linguistic analysis, with special attention given to morphology and its interactions with semantics. In addition, this interconnection can be applied to


linguistic studies of different kinds. Thus, it is shown how the morphological analysis of languages relates to the creation of corpora, thesauri, and lexical and conceptual databases. Some basic morphological notions are also introduced, as a key to understanding word formation phenomena.

In chapter 3, moreover, a deeper analysis of the interconnection between morphology and semantics is attempted, which is important for establishing the position of morphology in the fields of computational linguistics and computational semantics. It is also explained how morphology is applied to building lexicological databases and/or corpora, taking Nerthus, a lexicological database of Old English, as an example.

The second part of the volume includes chapters 4, 5 and 6. Lexicography, as the practical side of lexicology, is introduced in chapter 4. Basic notions related to the writing of dictionaries and the problems involved in defining a lexical entry are introduced here. In addition, the links between the shape of a dictionary entry and different theories of meaning are discussed.

In chapter 5, ontologies are approached by focusing on the relationships and differences between lexical representation and meaning representation, and by exploring the limits and overlaps of language and knowledge representation for natural language processing. A selection of easily available ontologies is reviewed to analyze how they represent both meaning and knowledge.

Finally, chapter 6 offers an introduction to terminological applications and explores statistical approaches to terminological analysis. Terminology is very practically oriented: it does not focus so much on linguistic analysis as on the handling of concepts for the use of these applications in everyday life. It is shown how a centralized terminology management system can be used for the efficient production and standardization of (multilingual) content and for “intelligent information retrieval”.


HOW TO USE THIS BOOK

This book contains 6 chapters. Each of them can be read separately, depending on the reader’s interests. The topics are interconnected but not fully dependent on one another. Thus, for example, a reader who wants to know more about terminology extraction can go directly to chapter 6. In the same fashion, someone interested in the difference between language dependent and language independent applications should read chapters 1 and 5, and someone wanting to know more about dictionaries should go directly to chapter 4.

Each chapter takes a theoretical, descriptive approach, dealing with issues related to lexically-based language applications, or with the features of language that shape these applications. The reader will also find exercises scattered throughout the chapters, whose answer keys are included at the end of each chapter. Additionally, some further recommended readings are included at the end of each chapter, as well as full references for all the works mentioned or quoted in the text. All these elements aim at making this book as practical as possible, both for the undergraduate semantics student and for the general linguistics reader. It is also theoretical, in the sense that the theory behind each semantic application is described at length, and resources are provided for those who want to put it into practice. In addition, an extensive glossary of terms related to applied semantics is included at the end of the book, containing the main terms and concepts relevant for the understanding of morphology, terminology and semantics and, above all, of applications based on the study of words and meaning.


THE AUTHORS

Margarita Goded is a Senior Lecturer in semantics at the Department of Modern Languages of the Spanish National University of Distance Education, UNED (Spain). She holds a Master’s degree in English and Old English from the University of Sydney (Australia) and a PhD in English Linguistics from the Complutense University of Madrid. She has lectured in linguistics, English linguistics and writing at the Universidad Autónoma de Madrid. She was tenured at the UNED in 2002, where she lectures on semantics and on applications of meaning studies for ontologies and dictionaries. She has been the main researcher of one Spanish research group and has participated in various other European and Spanish research groups. Her main research interests relate to the precodificational components of linguistic expressions for computational treatment, and she has developed a descriptive algorithm for this purpose.

Ana Ibáñez Moreno is a Lecturer at the Department of Modern Languages of the Spanish National University of Distance Education, UNED (Spain). She holds a PhD in English Linguistics from the University of La Rioja, on A Functional Approach to Derivational Morphology: The Case of Verbal Suffixes in Old English (2007). Her current areas of research are Functional Linguistics, with special attention to semantics and morphology, and the teaching and learning of foreign languages. She has long experience as an emeritus researcher in the Department of Spanish of the Faculty of Applied Linguistics of the University College at Ghent (Belgium), now at Ghent University, where her main topics are error analysis, the development of communicative strategies in learning Spanish, and the use of audio description as a didactic tool in the classroom of Spanish as a foreign language. She has collaborated in several research projects dealing with lexical and lexicological databases, such as Nerthus. See www.nerthusproject.com.

Veronique Hoste is Professor of Computational Linguistics at the Faculty of Arts and Philosophy of Ghent University. She is also the director of the LT3 language and translation team at the same department and co-director of the association research group LTCI on Language Technology and Computational Intelligence. She holds a PhD in computational linguistics from the University of Antwerp (Belgium) on Optimization issues in machine learning of coreference resolution (2005). She has strong expertise in machine learning of natural language, more specifically in coreference resolution, word sense disambiguation, multilingual terminology extraction, classifier optimization, etc. She has published about 50 papers related to different research projects and supervises several PhD students. In the last five years, Veronique Hoste has received external funding for research projects sponsored by, among others, the Flemish government, the Dutch Language Union, the Belgian Science Foundation and industrial partners. For an overview of publications, projects and professional activities, see http://www.lt3.ugent.be.

We are indebted to architects Sil Mattens and Peter de Coninck (Belgium), the authors of the images in the book and on the cover. We are also indebted to proofreaders Emilie Collinge and Heather Forth (UK) for carefully reviewing this book.


1. Objectives
2. Introduction
3. The use of a metalanguage
4. Language dependent and language independent knowledge representation
   4.1. Language dependent applications
   4.2. Language independent applications
   4.3. Conclusions about knowledge representation
5. Different approaches to meaning interpretation
   5.1. Cognitivism and ontologies
   5.2. Predication in linguistics and in ontologies
   5.3. Are conceptual and semantic levels identical?
   5.4. The philosophical perspective
6. Lexical description theories
   6.1. Introduction
   6.2. Structuralism
      6.2.1. European Structuralism
      6.2.2. American Structuralism
   6.3. Functional Models
   6.4. Formal Models
   6.5. Cognitive Models
   6.6. Conclusions
7. Further recommended readings
8. References
9. Keys to exercises

1. OBJECTIVES

In this chapter we deal with words and concepts. More specifically, we shall learn some facts about the differences, similarities and overlapping areas among words and concepts, and their respective relations with the codification of meaning. We will also be introduced to how ontological material is represented for linguistic applications, and to the differences and similarities between general linguistic representations and specific representations for linguistic applications proper.

2. INTRODUCTION

In this chapter some basic semantic concepts are revisited, with an emphasis on those which have a particular impact on the development of both ontologies and dictionaries. The contribution of the different levels of linguistic analysis to the construction of meaning concerns different aspects of language: how the units of human-produced sounds are organized into words so as to be meaningful is studied in phonetics; how words are made up and further combined into higher meaningful units is studied by both morphology and syntax; and semantics studies how meaning is codified from these different linguistic perspectives. Because dictionaries focus on the meaning of words, lexical and morphosyntactic perspectives in linguistic analysis are important. The understanding of a word’s meaning by the users of a language, and their capability to explain it by putting it into other words, are preconditions for the creation of dictionaries.


Because ontologies focus on how concepts are captured and how they are codified in words, semantic analysis is also a kind of precondition for the construction of applications such as the programs called “ontologies”. The working perspective taken here presupposes online use: both products, dictionaries and ontologies, are seen as digital products and are therefore subject to computational treatment. Ontological and lexical representations in language applications are introduced to highlight their coincidences and overlapping areas.

Focusing on lexicography, Hanks (2003) provides a brief review of its basic aspects by linking them to their historical background. This is useful in helping us connect the present developments of language technologies with their roots, and also in identifying their most important varieties and initial developments. As Hanks (2003: 48) states, lexicographical compilations are inventories of words that have multiple applications and are compiled from many different sources (manually and computationally):

An inventory of words is an essential component of programs for a wide variety of natural language processing applications, including information retrieval, machine translation, speech recognition, speech synthesis, and message understanding. Some of these inventories contain information about syntactic patterns and complementation associated with individual lexical items; some index the inflected forms of a lemma to the base form; some include definitions; some provide semantic links to ontologies and hierarchies between the various lexical items. Finally, some are derived from existing human user dictionaries.

However, he concludes that none of them are completely comprehensive and none of them are perfect. Like most aspects of everyday life, dictionary compilation, the craft of lexicography, has been revolutionized by the introduction of computer technologies. At the same time, the computational analysis of language in use has yielded new insights, providing new theoretical perspectives.
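The lemma-indexing idea Hanks mentions — mapping the inflected forms of a lemma to its base form — can be made concrete with a toy lookup table. The forms below are our own illustrative choices, not drawn from any real lexicon:

```python
# A toy inflected-form index: each surface form maps to its lemma
# (base form). Real systems derive such tables from annotated corpora
# or morphological analyzers rather than listing them by hand.
LEMMA_INDEX = {
    "runs": "run",
    "ran": "run",
    "running": "run",
    "dictionaries": "dictionary",
    "better": "good",   # suppletive forms must be listed explicitly
}

def lemmatize(token: str) -> str:
    """Return the base form of a token, or the token itself if unknown."""
    return LEMMA_INDEX.get(token.lower(), token.lower())

print([lemmatize(t) for t in "She runs better dictionaries".split()])
# → ['she', 'run', 'good', 'dictionary']
```

Even this sketch shows why such inventories are never complete: every out-of-vocabulary form falls through to the default case.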


Concepts and words are related: for concepts to be transmitted we, as humans, need words, and this is why it is important to differentiate between concepts and words. In the present situation of massive internet use, where we interact with machines such as computers all the time, understanding the similarities, differences and interrelations between them becomes more and more important. This is why we will try to differentiate between those applications that are more heavily dependent on language and those which, being of a more abstract nature, can be “described” as language independent. This concept-word relationship concerns the process of conceptualization. As Prevot et al. (2010: 5) explain:

The nature of a conceptualization greatly depends on how it emerged or how it was created. Conceptualization is the process that leads to the extraction and generalization of relevant information from one’s experience. Conceptualization is the relevant information itself. A conceptualization is independent from specific situations or representational languages, since it is not about representation yet. In the context of this book, we consider that conceptualization is accessible after a specification step; more cognitive oriented studies, however, attempt at characterizing conceptualizations directly by themselves (Schalley and Zaefferer 2006).

Precodification of entities or relations, which usually leads to the lexicalization of nouns and verbs, is a specification step: the marking of either an entity or a relation in the notation of an ontology. Let us illustrate this. The verb run, as in “She runs the Brussels Marathon”, is precodified as a ‘predicate’ and thus as a ‘relation’, while she, Brussels and marathon are precodified as entities. On the other hand, a possible example of a direct cognitive type of conceptualization in the sense of Schalley and Zaefferer (2006) could be the famous one of asking for food in the context of a restaurant. In fact, both types of conceptualization are compatible. What is the objective of an ontology? Basically, it is to conventionalize concepts in order to handle meaning and knowledge efficiently. As Prevot et al. (ibidem) explain: Every conceptualization is bound to a single agent, namely, it is a mental product which stands for the view of the world adopted by that agent; it is by means of ontologies, which are language-specifications of those mental
products, that heterogeneous agents (humans, artificial, or hybrid) can assess whether a given conceptualization is shared or not, and choose whether it is worthwhile to negotiate meaning or not. The exclusive entryway to concepts is by language; if the layperson normally uses natural languages, societies of hybrid agents composed by computers, robots and humans need a formal machine-understandable language. To be useful, a conceptualization has to be shared among agents, such as humans, even if their agreement is only implicit. In other words, the conceptualization that natural language represents is a collective process, not an individual one. The information content is defined by the collectivity of speakers.
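The precodification step described above can be sketched in a few lines of code. This is a toy illustration only: the class names Entity and Relation and all the labels are our own invented notation, not part of any existing ontology formalism.

```python
# Toy sketch of the precodification step above: marking which parts of a
# proposition are 'entities' and which are 'relations' before any further
# ontological treatment. Entity, Relation and the labels are invented names.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    label: str                 # a discourse referent, e.g. 'she' or 'Brussels'

@dataclass(frozen=True)
class Relation:
    predicate: str             # the relational concept, e.g. 'run'
    args: tuple                # the entities that the predicate links

# "She runs the Brussels Marathon": one relation linking three entities.
she = Entity("she")
brussels = Entity("Brussels")
marathon = Entity("marathon")
runs = Relation("run", (she, brussels, marathon))

print(runs.predicate, len(runs.args))   # run 3
```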

There are two (opposed and complementary) ways to approach the study of words and concepts: the onomasiological approach and the semasiological approach. The first one, whose name comes from the Greek word ὀνομάζω (onomāzo), ‘to name’, which comes from ὄνομα, ‘name’, takes the concept as a starting point. Onomasiology tries to answer the question how do you express x? As a part of lexicology, it starts from a concept (an idea, an object, a quality, an activity, etc.) and asks for its names. The opposite approach is the semasiological one: here one starts with the word and asks what it means, or what concepts the word refers to. Thus, an onomasiological question is, what is the name for medium-high objects with four legs that are used to eat or to write on? (Answer: table), while a semasiological question is, what is the meaning of the word table? (Answer: medium-high object with four legs that is used to eat or to write on). The onomasiological approach is used in the building of ontologies, as we will see in depth in chapter 5, and the semasiological approach is adopted for the construction of terminologies, banks of terms to be applied in different areas, as we will see in chapter 6.
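The two directions of access can be illustrated with a toy lexicon. The dictionary entries and the two helper functions below are hypothetical; a real lexicographic resource would, of course, be far richer.

```python
# A toy lexicon illustrating the two directions of access. The entries and
# helper names (semasiological, onomasiological) are invented for this sketch.
lexicon = {
    "table": "medium-high object with four legs used to eat or write on",
    "chair": "raised seat with a back for one person",
}

def semasiological(word):
    """Start from the word and ask what it means."""
    return lexicon[word]

def onomasiological(concept_fragment):
    """Start from (part of) a concept and ask how it is named."""
    return [word for word, gloss in lexicon.items() if concept_fragment in gloss]

print(onomasiological("four legs"))   # ['table']
print(semasiological("table"))        # the gloss stored for 'table'
```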

3.  THE USE OF A METALANGUAGE

A much debated issue in relation to these matters is the use of a metalanguage. Saeed (2003) defines semantics as the study of meaning communicated
through language. Since there are quite a number of languages, and since meaning and knowledge are, to some extent, interchangeable terms, we can say that knowledge representation is closely connected to the particular language in which the knowledge referred to is expressed. Consequently, in his preliminary discussion of the problems of semantics this author suggests that the use of a metalanguage could be a possible solution to the problem of the circularity of the meaning of a word in a dictionary. Setting up a metalanguage might also help to solve the problem of relating semantic and encyclopedic knowledge, since designing meaning representations of words involves arguing about which elements of knowledge should be included. But metalanguages also present problems for lexical representation. After all, most linguistic models of all kinds (generative, functional, etc.) have designed a metalanguage of their own, more or less based on linguistic signs, to represent what the linguist in question considers to be the set of foundational concepts upon which their subsequent linguistic representations are built. Generally, a metalanguage will be necessary to build up any ontology, especially if it aims to be applicable to more than one language. There are two kinds of components in an ontology that make use of a metalanguage: the represented categories and relations, and the represented procedures. Sometimes the represented procedures are just the relations themselves.
An example of a metalanguage combining meaning postulates and thematic frames for the event +ANSWER_00, as in Periñán Pascual and Mairal (2010: 20), is:

(1) Thematic Frame:
(x1: +HUMAN_00) Theme (x2) Referent (x3: +HUMAN_00) Goal
Meaning Postulate:
PS: +(e1: +SAY_00 (x1) Theme (x2) Referent (x3) Goal (f1: (e2: +SAY_00 (x3) Theme (x4: +QUESTION_00) Referent (x1) Goal)) Scene)

The thematic frame of the event +ANSWER_00 belongs to the higher frame of the metaconcept #COMMUNICATION, a metaconceptual unit to which a prototypical thematic frame is assigned. In this case, the thematic frame of communicative situations, from which we obtain all other conceptual units related to the metaconcept #COMMUNICATION, is:
(2) (x1) Theme (x2) Referent (x3) Goal

The Thematic Frame in (1) derives from this general one in (2), and so does the Meaning Postulate, which gives more detailed conceptual information about the specific event +ANSWER_00 and can be paraphrased as “someone (x1) says something (x2) to somebody (x3) related to a question (x4) that x3 said to x1”. All the symbols used (+, #, x, numbers, etc.) are part of the metalanguage used in COREL (which stands for Conceptual Representation Language) for the representation of concepts.

o-o-o-o-o-o-o

Exercise 1: Build up your own metalanguage: propose a small ontology (four concepts or so) for a specific conceptual field (for example, verbs of emotion or verbs of movement) and select one or two of these concepts. Then, “invent” a series of symbols that you would use in order to represent the entities and relations involved in these concepts. You can use some parts of English as a pro-metalanguage (as Dik 1997 or Van Valin 2005 do), or you can suggest new symbols.

o-o-o-o-o-o-o
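As a possible starting point for Exercise 1, here is a minimal sketch of an invented metalanguage for verbs of movement, loosely inspired by the COREL-style notation above. The concept names, role labels and the is_a convention are all our own illustrative choices, not actual COREL or FunGramKB syntax.

```python
# A tiny invented metalanguage for verbs of movement. The '+CONCEPT_00' names,
# role labels and 'is_a' links are our own conventions for this sketch only.
movement_ontology = {
    "+MOVE_00": {"frame": [("x1", "Theme"), ("x2", "Goal")]},
    "+WALK_00": {"is_a": "+MOVE_00",
                 "postulate": "+MOVE_00 (x1)Theme (x2)Goal, manner: on-foot"},
    "+RUN_00":  {"is_a": "+MOVE_00",
                 "postulate": "+MOVE_00 (x1)Theme (x2)Goal, manner: fast"},
}

def inherited_frame(concept):
    """Walk up the is_a links until a thematic frame is found."""
    entry = movement_ontology[concept]
    while "frame" not in entry:
        entry = movement_ontology[entry["is_a"]]
    return entry["frame"]

print(inherited_frame("+RUN_00"))   # [('x1', 'Theme'), ('x2', 'Goal')]
```

Note how the subordinate concepts inherit the thematic frame of +MOVE_00 and add only their own meaning postulates, mirroring the way +ANSWER_00 inherits from #COMMUNICATION.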

1 COREL, which stands for Conceptual Representation Language, is the language used within the Lexical Constructional Model of language and by the project group FunGramKB (Functional Grammar Knowledge Base) in order to build up a whole conceptual ontology. See: http://www.fungramkb.com/default.aspx

4. LANGUAGE DEPENDENT AND LANGUAGE INDEPENDENT KNOWLEDGE REPRESENTATION

Knowledge representation based on language and knowledge representation based on concepts differ in a number of aspects. This is a circular issue, and it always affects the creation of ontologies and dictionaries. The concept of prototypicality is a highly influential one in both lexicographic and ontological studies, and it works in both directions. As
Geeraerts (2007: 161) notes, prototypicality effects in the organization of the lexicon blur the distinction between semantic information and encyclopedic information. This does not mean that there is no distinction between dictionaries and encyclopedias, but that references to typical examples and characteristic features are a natural thing to expect in dictionaries. On the other hand, in the construction of ontologies prototypical examples of categories tend to be linked to higher or more inclusive categories, usually taking the lexical form of hyperonyms.

4.1.  Language dependent applications

Language dependent knowledge representation is based on the way in which a certain language codifies a certain category (for example, a certain state of affairs), because this codification affects the representation of that particular piece of world knowledge. For example, a debated issue is how the different languages of the world, and English in particular, lexicalize (with more or less detail) certain aspects of the external world that affect their speakers. Within the general study of semantics we see that certain linguistic categories are highly dependent on the language in which they are identified. Remember, for example, the pronominal system of Arabic, which codifies a dual pronoun, in contrast with those of English or Spanish, which only codify singular and plural. As is well known, there are more or less universal linguistic features that all languages codify with various syntactic, morphological or lexical devices, such as reference to the addressee. An example of language independent knowledge or conceptual representation could be the lexical terms for numbers, or the mathematical symbols and the symbols of logic (see Chomsky 1975, Dik 1997, Van Valin 2005, among many others, for different examples).

Language is a conceptual phenomenon, as postulated by Lakoff (1987) and others. This means that different languages lexicalize with more or less detail those aspects (concepts) of the external world affecting their speakers in a particular way. For example, the different types of snow are lexicalized with different words in Eskimo languages, in the same way that the different types of winds are given different names in many cultures where the type and intensity of the wind directly affects people’s everyday lives. Another example is the Spanish word chirimiri or sirimiri, deriving from Basque txirimiri, used to refer to a kind of rain consisting of very small but abundant drops, so that you are unaware that you are getting wet when in fact you are. In www.wordreference.com it is translated as “fine drizzle”, but there is no single term in English that can represent this specific kind of rain, typical of the Basque Country. Still another example is taken from our urban kind of life, where we have lexicalized two different terms for human beings depending on whether they are or are not driving (driver/pedestrian). Likewise, the words rider or caballero lexicalize the fact that a human being is riding a horse, because such a difference was important before the invention of the car. Nowadays, a rider is also someone riding a bike, again in opposition to a pedestrian, who moves using nothing but his/her own legs.

Therefore, the way a certain language codifies a certain category (for example, a certain state of affairs) affects the representation of this particular piece of world knowledge because it selects some elements instead of others. As already mentioned, the codification of a certain state of affairs also affects the types of constructions that a certain language may produce. For example, in English the codification of a resultative element in a certain state of affairs leads to constructions like the famous wipe the slate clean cases studied in Levin (1991).

Knowledge representation in language applications

A particular kind of knowledge representation is based on lexical organization. This organization can take many forms. For example, a network is a group of words that are not so tightly organized.
Sets can be defined as organized and bounded groups of words, while hierarchies are organized groups of words usually following a certain semantic relationship (e.g.: hyponymy). In inheritance systems the main link is a certain feature (semantic or syntactic) that can be identified as recurrent at different levels of a structure. Understanding how words are organized in a certain format is important for both dictionaries and ontologies.
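The organization formats just mentioned (sets, hierarchies, networks and inheritance) can be made concrete with invented mini-data. The words and links below are illustrative only:

```python
# Invented mini-data for the organization formats mentioned above:
# a bounded set, a hierarchy (hyponymy), and a looser network of links.
furniture_set = {"table", "chair", "stool", "sofa"}          # a set

hyponymy = {                                                 # a hierarchy
    "chair": "seat", "stool": "seat",
    "seat": "furniture", "table": "furniture",
}

network = [("chair", "near-synonym", "seat"),                # a loose network
           ("table", "used-with", "chair")]

def hypernym_chain(word):
    """Follow hyponymy links upward: the recurrent link of an inheritance system."""
    chain = [word]
    while chain[-1] in hyponymy:
        chain.append(hyponymy[chain[-1]])
    return chain

print(hypernym_chain("chair"))   # ['chair', 'seat', 'furniture']
```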


Meaning representation in language applications

Lexical representation is, after all, meaning representation. It must be noted, however, that lexical meaning is just one part of the whole meaning transmitted in verbal communication, where the total content of information transmitted is usually more than the purely verbal. The point is that, in order to facilitate transmission between speakers, language is organized using a limited number of formal structures of different kinds (lexical, syntactic, morphological, phonetic), which are complemented with a whole repertoire of ontological and situational (sensory-perceptual) information that is processed simultaneously. The formal codification of purely linguistic input is inevitably limited because of processing constraints, but it is complemented with additional non-linguistic information that enters the human processing system via other, non-linguistic means.

The difference between trying to represent meaning and trying to represent other linguistic levels is that it is easier to represent something with a more or less tangible side, such as the lexicon, syntax, morphology or phonetics of a certain language. Trying to represent something like meaning, which is highly dependent on contextual information, is not an easy task. Part of the difficulty is that it is precisely the lexicon, syntax, morphology or phonetics of a certain language that we use to convey meaning. Only a small part of the more salient and socially entrenched aspects of social behavior constitutes contextual information that is codified in human languages, in many different ways, and that can be labeled with different kinds of pragmatic parameters. Other, non-pragmatically codified information is missing in linguistic representation as such, and must enter the system through other, non-linguistic input systems. The paradox is that in order to codify all that non-linguistic information we sometimes need a language.
Whether this language is a conventional language, a metalanguage or another symbolic system is a different question to be addressed. Sometimes it is very difficult to think of fully language-independent representations. In what follows, we will deal with this topic in more detail.


Lexical representation in language applications

A typical example of a lexical representation for language applications is a common parser, and the clearest example of a language dependent linguistic application is a dictionary of any kind.

4.2.  Language independent applications

An easy example of a language independent representation could be the figures for numbers or the mathematical symbols that mathematicians of all languages use. In this book, language independent applications will be studied at a very basic level and under the perspective that knowledge is partly organized independently of the language in which it is put into words and partly organized in a sort of network. A knowledge network is understood as a collection of concepts that structures perceived information and allows the user to organize it.

So an example of a language independent application is mathematical notation. What is represented in a mathematical formula is simple: a series of mathematical concepts and a series of relations linking them:

(3) (5 + 3) · 5 = 40

Here we have two types of quantities: one grouped into two sub-entities (5 and 3) and the other, the whole amount, just by itself. The relations linking these quantities are shown below:

(4) [()] represents a set.
[+] represents the addition operation of natural numbers.
[·] represents the multiplication operation of natural numbers.
[=] represents the result of both operations.
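The language-independent character of such a formula can be shown by holding it as a small expression tree and evaluating it mechanically; no natural language is involved, only symbols and the relations among them. The nested-tuple representation used here is just one possible sketch, not a standard notation; note that (5 + 3) · 5 evaluates to 40.

```python
# Holding (5 + 3) * 5 as a language-independent expression tree and
# evaluating it. The (operator, left, right) tuple format is our own sketch.
import operator

ops = {"+": operator.add, "*": operator.mul}

expr = ("*", ("+", 5, 3), 5)    # (5 + 3) * 5

def evaluate(node):
    """Recursively apply each relation (operator) to its quantities."""
    if isinstance(node, tuple):
        op, left, right = node
        return ops[op](evaluate(left), evaluate(right))
    return node

print(evaluate(expr))   # 40
```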


Finally, the organization of concepts into hierarchies is also relevant. Each concept is studied as related to its super- and sub-concepts. The inheritance of defined attributes and relations from more general to more specific also affects complex conceptualizations.

4.3.  Conclusions about knowledge representation

Knowledge representation has taken the form of printed and electronic (both online and offline) dictionaries and ontologies. It is self-evident that dictionaries are one possible kind of language-dependent instrument of knowledge representation, in the sense that dictionaries compile all or most of the lexical information of a particular language. Ontologies, on the other hand, compile information other than lexical information. However, it is also becoming evident, as said above, that in order to codify non-lexical information, lexical means are needed.

Ontologies can be defined as knowledge networks. A knowledge network is a collection of concepts that structures information and allows the user to view it. In addition, concepts are organized into hierarchies where each concept is related to its super- and sub-concepts. All this forms the basis for inheriting defined attributes and relations from more general to more specific concepts. This is examined in more depth in chapter 5.
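The inheritance of attributes from more general to more specific concepts can be sketched as follows. The concepts and attributes are invented for illustration; real ontologies use far more elaborate mechanisms, including defaults and exceptions.

```python
# Sketch of attribute inheritance from super- to sub-concepts in a knowledge
# network. Concept names and attributes are invented for illustration.
concepts = {
    "animal":  {"is_a": None,     "attrs": {"animate": True}},
    "bird":    {"is_a": "animal", "attrs": {"can_fly": True}},
    "penguin": {"is_a": "bird",   "attrs": {"can_fly": False}},  # local override
}

def attributes(name):
    """Collect attributes along the is_a chain; more specific values win."""
    chain = []
    while name is not None:
        chain.append(concepts[name]["attrs"])
        name = concepts[name]["is_a"]
    merged = {}
    for attrs in reversed(chain):   # most general first, most specific last
        merged.update(attrs)
    return merged

print(attributes("penguin"))   # {'animate': True, 'can_fly': False}
```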

5.  DIFFERENT APPROACHES TO MEANING INTERPRETATION

According to Prevot et al. (2010), the topic of the interface between ontologies and lexical resources is a re-examination of traditional issues of psycholinguistics, linguistics, artificial intelligence, and philosophy in the light of recent advances in these disciplines, and a response to a renewed interest in this topic due to its relevance for major Semantic Web applications. These studies also recognize the importance of a multidisciplinary approach to lexical resource development and knowledge representation, and the influential contributions to the field of Hobbs et al. (1987),
Pustejovsky (1995), Guarino (1998), Sowa (2000), and Nirenburg and Raskin (2004), among many others. Prevot et al. (2010) also show how this common ground offers an opportunity for current research in Natural Language Processing (NLP), knowledge representation, and lexical semantics. The interface between formal ontologies and lexical resources or linguistic ontologies is also seen as highly relevant. For these authors, formal ontologies are understood as ontologies motivated by philosophical considerations, knowledge representation efficiency and re-usability principles. Lexical resources or linguistic ontologies have a structure motivated by natural language, and more particularly by the lexicon.

5.1.  Cognitivism and ontologies

The influence of cognitivism on the design of ontologies has to do with categorization. Because of this, studies on categorization have received a lot of attention from the cognitive side. These studies explain how componential semantics developed (Katz and Fodor 1963), in which the category of a word is defined as a set of (syntactic and semantic) features that distinguishes this word from others. Studies on categorization also explain how this model fits extremely well with Formal Concept Analysis (FCA), which was developed by Ganter and Wille (1997) and first applied to lexical data in Priss (1998, 2005), and how componential semantics is nowadays in use in several ontological approaches, as we will see in later chapters.

However, cognitive approaches see componential semantics as limited, in the light of various developments centered on the notion of prototypicality (Rosch 1973, 1978). It has been empirically established that the association of words and even concepts within a category is a scalar notion (graded membership). The problem of category boundaries, of their existence and of their potential determination, is therefore a serious one. Contextual factors have been said to influence these boundaries. Another issue is the use of feature lists, which have been considered too simplistic and which raise the question of the definition of the features themselves.

Besides the issue of prototypicality, another common ground is the investigation of models for concept types (sortal, relational, individual, and functional concepts), category structure, and their respective relationships to ‘frames’. On the other hand, there is wide converging evidence that language has a great impact on categorization. When there is a word labelling a concept in a certain language, children learn the corresponding category much faster and more easily. Authors such as Wierzbicka (1990), Murphy (2003), Croft and Cruse (2004: 77-92), and Schalley and Zaefferer (2006) have studied and discussed these approaches and their limitations at length.

As explained in Goded Rambaud (2010: 194), referring back to Cruse (2000: 132), who in turn drew his analysis from Rosch and Mervis (1975), prototypicality, measured by Goodness of Example (GOE) scores, correlates strongly with vocabulary learning. For example, extensive research in this line has shown that children in later stages of language acquisition, when vocabulary enlargement is directly affected by explicit teaching, learn new words better if they are given definitions focusing on prototypical examples than if they are only given abstract definitions, even if the latter better reflect the meaning of the word.
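Graded membership of the kind just described can be given a toy formalization: closeness to a category prototype measured as feature overlap. The features and the two candidates below are invented; real GOE scores come from speaker judgments, not from feature counting.

```python
# Toy formalization of graded membership: Goodness of Example approximated as
# overlap with a feature-based prototype. Features and candidates are invented.
prototype_bird = {"flies", "feathers", "lays_eggs", "small"}

def goodness_of_example(candidate_features):
    """Share of prototype features shown by the candidate (0.0 to 1.0)."""
    return len(candidate_features & prototype_bird) / len(prototype_bird)

robin = {"flies", "feathers", "lays_eggs", "small"}
penguin = {"feathers", "lays_eggs", "swims"}

print(goodness_of_example(robin), goodness_of_example(penguin))   # 1.0 0.5
```

On this crude measure a robin is a better example of the bird category than a penguin, while the penguin still has partial membership rather than falling cleanly outside the category boundary.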

5.2.  Predication in linguistics and in ontologies

The concept of predicate structure is a key one in the study of semantics. As is well known, a predicate is a relational concept that links one or several entities. Languages frequently lexicalize predicates in the form of verbs and adjectives. Predication has also been acknowledged as an important issue by the above-mentioned authors and many others in the field. Because it affects sentence interpretation, a large body of work in linguistics has predication as its focus. This work was pioneered by Fillmore (1976), who proposed that we should analyze words in relation to each other according to the frame or script in which they appear, focusing on relations expressed by case grammar (Fillmore 1968). In this domain, essential contributions on argument structure (Grimshaw 1990), thematic roles, selectional restrictions (Dowty 1990), and type coercion (Pustejovsky 1995) have been made over the past few years. This field of research has resulted in the creation of resources such as FrameNet (Baker et al. 1998) and VerbNet (Kipper et al. 2000).
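In the spirit of such resources, a predicate's argument structure with crude selectional restrictions can be sketched as below. This is a hypothetical miniature of our own, not the actual FrameNet or VerbNet data model.

```python
# Hypothetical miniature of a predicate frame with argument roles and crude
# selectional restrictions (not the actual FrameNet or VerbNet data model).
frame_give = {
    "predicate": "give",
    "roles": {"Agent": "human", "Theme": "thing", "Recipient": "human"},
}

types = {"she": "human", "book": "thing", "him": "human", "idea": "abstract"}

def check_predication(frame, fillers):
    """Accept a predication only if every filler matches its role's type."""
    return all(types.get(fillers.get(role)) == required
               for role, required in frame["roles"].items())

ok = check_predication(frame_give, {"Agent": "she", "Theme": "book", "Recipient": "him"})
bad = check_predication(frame_give, {"Agent": "she", "Theme": "idea", "Recipient": "him"})
print(ok, bad)   # True False
```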


The classic example, which also shows the effects of context in language disambiguation, is given below:

(5) She saw them.
She saw them with the binoculars/the microscope.

Also, in the restaurant script you cannot say:

(6) * I’d rather take the deer.

but you are forced to say:

(7) I’d rather take the game.

Or, drawing on your stored background information, based on where you are at the time of speaking, you can say in Spanish:

(8) Hay mucho pescado / Hay muchos peces

depending on whether you are at the fish market, in a restaurant or on a fishing ship. All this contextual information configures background knowledge, which is organized in a mental ontology that links eating places and places where edible materials can be obtained or processed (areas of farmland, water, ships, food factories, etc.) together with places where food is normally consumed (dining rooms, restaurants, staff canteens, etc.).

All this encyclopedic or background knowledge is stored in our minds. The challenge for language representation and, after that, for computational linguistics too, is first how to model it, and then how to transfer this modeling to a machine-readable program. This is how we go from a linguistic model to a computational program.

o-o-o-o-o-o-o


Exercise 2: Go to FrameNet (https://framenet.icsi.berkeley.edu/fndrupal/) and search for a word such as say: (https://framenet.icsi.berkeley.edu/fndrupal/ framenet_search). Describe the information you obtain from this, and give one possible application of such information. o-o-o-o-o-o-o

5.3.  Are conceptual and semantic levels identical?

Many proponents of the cognitive approach to language postulate that semantics is to be equated with the conceptual level. Jackendoff (1983) explains that surface structures are directly related, by a set of rules, to the concepts they express. These concepts include information associated with world knowledge (or encyclopedic knowledge). Since, according to him, it is impossible to disentangle purely semantic from encyclopedic information without losing generalizations (Jackendoff 1983), the semantic and conceptual levels must form a single level. However, Levinson (Gumperz and Levinson 1996; Levinson 1997, 2003) advanced serious arguments, involving pragmatics in particular, in favor of maintaining the distinction between semantic and conceptual representations. These differences are explained by the differing views on language inherited from different theoretical perspectives. For example, while Jackendoff focuses on Chomsky’s I(nternal)-language, Levinson insists on the importance of the social nature of language and therefore attends to the E(xternal)-language rather neglected in Jackendoff’s account. These perspectives differ on whether language is primarily understood as a tool for communicating or as a tool for representing knowledge (in someone’s mind).

From an applied perspective, on methodological grounds, Bateman (1997) argues for the need for an interface ontology between the conceptual and surface levels. He specifies that such a resource should be neither too close to nor too far from the linguistic surface, and details the General Upper Model (Bateman et al. 1995) as an example of such a balanced interface ontology. This is also the line followed by Nirenburg and Raskin (2004) and Prevot et al. (2010). Pustejovsky and his colleagues, on the other hand, prefer a single, though highly dynamic, structure, as in the generative lexicon (Pustejovsky 1991, 1995).

5.4.  The philosophical perspective

Prevot et al. (2010) claim that determining a system of ontological categories in order to answer the question What kinds of things exist? is one of the essential tasks of metaphysics. Proposals, from Aristotle to contemporary authors, can differ strongly both on the nature of the ontological category (i.e. How exactly are the categories defined?) and on actual ontological decisions (e.g. What are artefacts, organizations, holes...?). In this context, the focus has mainly been on the upper level of the knowledge architecture and on finding an exhaustive system of mutually exclusive categories (Thomas 2004). Prevot et al. (ibidem) observe that the lack of consensual answers on these matters has resulted in a certain skepticism regarding such an ontological system. However, recent approaches, aimed at taking the best of philosophy while avoiding its pitfalls, are rendering philosophical considerations worth exploring. These authors also state that the crucial area in which philosophy can help the ontology builder might not be the positioning of such and such a concept in a ‘perfect’ taxonomy, but rather the methodology for making such decisions.

The second important aspect that Huang et al. recognize is the distinction between revisionary and descriptive metaphysics. The descriptive view holds that the material accessible to the philosopher is not the real world but rather how the philosopher perceives and conceptualizes it. In contrast, a revisionary approach uses rigorous paraphrases for supposedly imperfect common-sense notions. See, for example, the discussion of the difficulties that such a revisionary approach meets when trying to deal with objects as simple as “holes” (Casati and Varzi 1996).


That is, how could you describe the hole in a doughnut without referring to the doughnut itself? Revisionary approaches tend to discard natural language and common-sense intuitions as serious sources for ontological analysis. The descriptivist stance, on the other hand, is presented as being philosophically safer. It also provides a solid methodological starting point for practical ontological research. More precisely, by allowing different ontologies (as different descriptions of the world) to co-exist, it is possible to avoid never-ending ontological considerations on the real nature of the objects of the world. This move leads to the distinction between Ontology as a philosophical discipline and ontologies as knowledge artefacts we are trying to develop (Guarino 1998). Modern ontology designers are not looking for a perfect ontology; instead they consider many potential ontologies concerning different domains and capturing different views of the same reality. In conclusion, Nirenburg and Raskin (2004: 149) claim that “the real distinction between ontologies and natural languages is that languages emerge and are used by people, while ontologies are constructed for computers”.

6.  LEXICAL DESCRIPTION THEORIES

6.1.  Introduction

The description of the lexicon requires a bit of history. In this section, structural, functional, cognitive and formal theories are mentioned with different degrees of detail, touching only on their most influential descriptive aspects.

6.2.  Structuralism

6.2.1.  European Structuralism

De Miguel et al. (2008) recognize four main models in the description of the lexicon. They are called structural, functional, cognitive and formal. These models account to a large extent for equivalent proposals in general language description.


Lexical structural models are part of the general cultural movement which took place in Europe and the USA during the 20th century, involving various human sciences and covering anthropology, psychoanalysis, philosophy and, of course, linguistics. This so-called structuralism has Lévi-Strauss as one of its main representatives and as the founder of modern anthropology, and Saussure (1916) not only as a structuralist but as the real founder of modern linguistics. In De Miguel et al.’s (2008) view, among the most marked features of structuralism are its claim to scientific rigor, a Heideggerian antihumanism, and antihistoricism. Villar Díaz (2008) sets the beginning of the decline of this movement at around 1966. Structural linguistics is thus above all a reaction against 19th century historicism in philological studies and the related comparative grammars. Against that approach, based on the analysis of isolated units, structuralism supports the idea that there are no isolated linguistic units: they only exist and make sense in relation to other units. Among Saussure’s many relevant contributions, the notion of a system is of particular importance. For him, language is a system in which units are determined by their place within the system rather than by any external reference. That is, it is their differential value, or opposition, that characterizes them. He claimed that concepts or meanings in language are purely differential: they are defined not in positive terms (by their content), but negatively (by their relation to other meanings in the language). The main outcome of his notion of a system was the development of structural phonology in the Prague Circle, whose members Mathesius, Mukařovský and Vachek, along with Jakobson, the French linguists Benveniste and Martinet, and Hjelmslev, from Denmark, revolutionized the field of linguistics.
While the Prague Circle primarily developed structural phonology, the Copenhagen School focused strongly on glossematics, a discipline more closely related to mathematics than to linguistics proper. Hjelmslev argued for the highest possible abstraction in underlying linguistic structures, which were, according to him, devoid of experience. In his rather visionary way, he presented a formal logic model of language based on mathematical correlations, anticipating the present developments of computer science and its applications. Greimas, one of Hjelmslev’s followers, developed a structural model of lexico-semantic analysis of meaning.

36

Introduction

Initially, structural semantics (or lexematics, as Coseriu labelled it) was just a direct application of phonological methods, with the subsequent problems derived from the fact that phonemes and words are quite different types of units, which possibly called for different methods of analysis. The lack of regularity which characterizes the lexicon and the loose structure of semantic units gave rise to Coseriu's (1981: 90) three problems of structural lexematics: complexity, subjectivity and imprecision.

Coseriu (ibidem) also held that structural semantics, or lexematics, belongs to the systematic level of the lexicology of contents, and he regretted that most lexical studies are based on the relations among meanings or, more frequently, on the correlation between a signifier and a signified (a semasiological perspective), or the opposite, between a signified or concept and its corresponding signifier or word (an onomasiological perspective). In his view, lexical analysis should concentrate only on the relations among meanings.

Lexematic structures, according to him, include paradigmatic and syntagmatic perspectives. Among paradigmatic structures, what he defined as lexical fields is one of the most important. It is based on the idea that the meaning of one lexical piece depends on the meaning of the rest of the pieces in its lexical field. That is, a lexical field can be defined as a set of lexemes, or minimal semantic units, which share a common value and are simultaneously opposed to one another by minimal distinctive features of content. In her analysis of lexical fields, Villar Díaz (2008: 233) identifies one of the main problems this concept faces: where exactly does one lexical field end and where does the next one start?
As an answer, she advocates Coseriu's explanation (1981: 175) that the lexicon of a language is not a unified and ordered classification made of successive layers, as in scientific taxonomies, but rather a series of different and simultaneous classifications which may correspond to various archilexemes at the same time. In this context, the principles upon which the lexemes that make up a lexical field are selected need to be established; the so-called distinctive feature analysis is the methodology applied in the construction of lexical fields.
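As an illustration only, distinctive feature analysis can be sketched by representing each lexeme of a field as a set of minimal content features (semes). The field, the features and their distribution below are hypothetical, loosely echoing Pottier's well-known analysis of the "seat" field rather than reproducing it:

```python
# A minimal, illustrative sketch of distinctive feature analysis.
# Each lexeme maps to a set of hypothetical semes; the shared value
# (archilexematic core) is what links the field, and oppositions are
# the features that distinguish one lexeme from another.

FIELD = {
    "chair":    {"with_back", "for_one_person", "raised_on_legs"},
    "armchair": {"with_back", "for_one_person", "raised_on_legs", "with_arms"},
    "stool":    {"for_one_person", "raised_on_legs"},
    "sofa":     {"with_back", "raised_on_legs", "with_arms"},
}

def common_value(field):
    """The shared value: features common to every lexeme in the field."""
    return set.intersection(*field.values())

def opposition(field, a, b):
    """The minimal distinctive features opposing two lexemes."""
    return field[a] ^ field[b]  # symmetric difference

print(common_value(FIELD))                      # {'raised_on_legs'}
print(opposition(FIELD, "chair", "armchair"))   # {'with_arms'}
```

Nothing here is a claim about the actual inventory of semes; the point is only that a lexical field is held together by an intersection of shared content and articulated by minimal oppositions.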


Other linguists, such as Pottier, showed a tendency to base linguistic distinctions on the real world rather than on language itself. And while Greimas proceeds from the general to the particular in a highly analytic and abstract fashion, Coseriu adopts a much more purely linguistic approach and suggests going bottom-up, starting from basic oppositions within a lexical field, as a better option. He was the first linguist who tried to offer a complete typology of lexical fields based on the lexematic oppositions that operate within them. He divides lexical fields into two classes: unidimensional and pluridimensional. The former are divided into three possible categories: antonymic fields (high/low), gradual fields (cold/cool/lukewarm/tepid/hot) and serial fields. The last are based on equipollent oppositions, that is, oppositions that are logically equivalent and in which each element is equally opposed to the rest (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday).

6.2.2.  American Structuralism

The studies of ethnologists and ethno-linguists such as Franz Boas and Edward Sapir paved the way for a more fully-fledged linguistic structuralism, with Leonard Bloomfield as its most relevant proponent. He was also the most relevant representative of the Yale School. One of the main differences between European and American structuralism is the type of languages they focused on. While Europeans concentrated on well-known languages, old or modern, American structuralists studied American Indian languages, with an important focus on distributionalism. Their studies were much more syntagmatically oriented, and Zellig Harris (1951: 365), another eminent member of this branch, holds that meaning can only be used heuristically, as a source of hints for determining the criteria to be used in distributional studies.
However, Villar Díaz (2008: 241) holds that Harris' main importance lies in his introduction of the notion of transformation applied to syntax, which gave rise to Chomsky's transformational grammar. American structuralists were also heavily influenced by behaviorist psychology, and it is perhaps because of this that they lacked interest in the study of meaning.


6.3.  Functional Models

The functional group of theories share a number of basic features that set them in contrast with, for example, the formalist and cognitivist approaches to language description, and to the description of the lexicon in particular. Among these features, the most widely recognized concerns the importance given to communication as the driving force of linguistic organization; a second important one concerns the role given to cognition in lexical descriptions.

As a reaction to the Chomskian claim that language is a defining human competence, with no mention of the purpose of this competence, the first aspect shared by most functional approaches is the claim that communication is the most basic function of language: language's representational power is less important for them than its communicative one. Cognitive linguistics, on the other hand, while sharing this emphasis on communication, has been less influential on lexical studies. Only when allied with functionalist models is a cognitive approach to language description relevant for identifying the role of the lexicon in language modelling.

One important development of the European functional trend, influential for the study of the lexicon, took place as a discussion of Firth's (1957) seminal paper and was initiated by Halliday (1966). The most prominent developments within this functional trend are summarized in Butler (2009), where the first approach discussed is Dik's Functional Grammar (FG) (1997a, 1997b). The most recent incarnations of functional grammars are Functional Discourse Grammar (FDG), as in Hengeveld and Mackenzie (2006, 2011); Role and Reference Grammar (RRG), as in Van Valin and La Polla (1997) and Van Valin (2005); and Systemic Functional Grammar (SFG), as in Halliday and Matthiessen (1999, 2004).
Also recognized by Butler (2009) as being within the functional paradigm are Langacker's Cognitive Grammar, as in Langacker (1987, 1991), and Fillmore's Construction Grammar, as in Fillmore and Kay (1995) and Goldberg (1995, 2006). However, it seems that, from the lexical perspective, Langacker's Cognitive Grammar has a weaker relationship to the lexicon than the work of Hengeveld, Mackenzie, Van Valin, Fillmore and Goldberg.


For example, Goldberg's works focus on how constructions are learned and processed and on their internal relationships, apart from general cognitive processes such as categorization and various aspects of social cognition. Her constructionist approach to language can also shed light on questions related to linguistic productivity and cross-linguistic generalizations in typical and autistic populations. Therefore, as with many other linguists, her work can be included within both the functional and the cognitive paradigms.

Butler (2009, 2012) identifies at least eleven approaches within the functional paradigm. However, even if Butler's detailed and thorough analysis can be used to trace back the origins of the Lexical Constructional Model, among other well-described models, in this section only those trends which are more heavily lexically based are taken into consideration. The influence of the lexicon as the storage place for syntax can be found in the work of Vossen (1994, 1995), Schack Rasmussen (1994), Hella Olbertz (1998), Martín Mingorance (1995, 1998), and Faber and Mairal (1998, 1999).

Special attention should be given to those Spanish functionalists who first followed Coseriu's lexicology and who were also inspired by Martín Mingorance's work. They developed a combined model of lexical representation with strong ontological and cognitive connections; Ricardo Mairal and Pamela Faber are keystone names in this line of work. In Mairal and Faber (2002: 40), the relationship between functionalism and the role of the lexicon in language modelling and description is first established and linked to ontologies. They recognize the importance of such links and provide a classification of theories into three groups, depending on the scope of their lexical representations. In the first group they include only those lexical representations with purely syntactically relevant information.
For example, Van Valin and La Polla's (1997) and Van Valin's (2005) Role and Reference Grammar (henceforth RRG), with its logical structures, and Rappaport and Levin's (1998) lexical templates. In the second group they place those theories whose lexical representations feature a rich semantic component and provide an inventory of the syntactic constructions forming the core grammar of a language. These constructions use a decompositional system and only capture those elements that have a syntactic impact. This paradigm is closely linked to Cognitive Linguistics and to the work of Lakoff (1987), Goldberg (1995) and Langacker (1987, 1999). Finally, Mairal and Faber (ibidem) recognize a third group of linguists, which they define as ontologically-driven lexical semanticists and in which they include Nirenburg and Levin (1992), as part of an important background against which Dik's theory could be tested.

More recent accounts of the development of these functional theories can be found in Martín Arista (2008: 117-151), who provides a thorough overview of the foundation and background of the Lexical Constructional Model (LCM).

An important development of Simon Dik's Functional Grammar is Hengeveld and Mackenzie's (2008) Functional Discourse Grammar. It features two relevant aspects: it incorporates and emphasizes the level of discourse in the model, and it claims to recognize only those linguistic distinctions that are reflected in the language in question. Because of this, the conceptual component of the model restricts itself to those aspects with an immediate communicative intention. Hengeveld and Mackenzie (2011: 5-45) stress the importance of not postulating universal categories of any nature, be it pragmatic, semantic, morphosyntactic or phonological, until their universality has been empirically demonstrated. This emphasis of FDG on not generalizing without strong empirical backing is something to be valued in a context of proliferating linguistic models built on little empirical evidence.

The lexical component in FDG is not individualized and separated from other components. Butler (2012) recognizes the need for a more developed lexical component. However, the presence of the lexicon permeates all levels of description, which is only natural, as FDG inherited Dik's emphasis on the Fund as a lexical repository.
An analysis of the ontological perspective in FDG, as in Hengeveld and Mackenzie (2008), can also be found in Butler (2012).

After an initial proposal for a Functional Lexematic Model that developed into the Lexical Constructional Model, where lexical templates were first included, Mairal's later developments include a full incorporation of ontologies. This takes the form of a language of conceptual representation (COREL), as in the development of his model in Mairal Usón and Periñán-Pascual (2010) and Periñán-Pascual and Mairal Usón (2011), where the initial 2002 lexical templates have turned into meaning postulates and where a fully-fledged knowledge representation system is included to integrate cognitivist modelling.

The inclusion of cognitivism has two different but related sides. On the one hand, conceptual schemes include a semantic memory with cognitive information about words, a procedural memory about how events are perceived, and an episodic memory storing information about biographical events. On the other hand, these three types of knowledge are projected onto a knowledge base. This, in turn, includes a Cognicon featuring procedural knowledge in the form of scripts, an Ontology focusing on semantic knowledge represented in meaning postulates, and a general Onomasticon with names captured in two microstructures, where pictures and stories are stored.

The language of conceptual representation (COREL) features a series of conceptual units. These, in turn, include metaconcepts, basic concepts and terminal concepts, thus configuring a classic ontology. Again, COREL, as most ontologies do, provides a collection of hierarchically-organized terms and a notational system. The authors claim that most of the concepts have a lexical motivation, in the sense that each concept is lexicalized in at least one language. In COREL, such concepts are not semantic primitives as formulated in Goddard and Wierzbicka's (2002) Natural Semantic Metalanguage. Finally, these concepts have semantic properties captured in Thematic Frames and Meaning Postulates. Thematic Frames specify the participants that typically take part in a cognitive situation, while Meaning Postulates include two kinds of features: those with a categorizing function and those with an identifying function.
The former determine class inclusion, whereas the latter exemplify the most prototypical members of the category. Very much in the Dikean tradition, a meaning postulate is made up of one or more predications, each representing a single feature. Features are thus reduced to two classes: nuclear ones, with a categorizing function, and exemplar ones, with an identifying function.
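A toy sketch may help fix this terminology. The following is not the actual COREL notation but a hypothetical data-structure rendering of the distinction just described, with a Thematic Frame listing typical participants and a Meaning Postulate split into nuclear (categorizing) and exemplar (identifying) features; the concept name and feature contents are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    content: str
    kind: str  # "nuclear" (class inclusion) or "exemplar" (prototypical members)

@dataclass
class Concept:
    name: str
    thematic_frame: list   # typical participants in the cognitive situation
    meaning_postulate: list  # one predication per feature, Dikean style

    def categorizing(self):
        """Nuclear features: they determine class inclusion."""
        return [f.content for f in self.meaning_postulate if f.kind == "nuclear"]

    def identifying(self):
        """Exemplar features: they pick out prototypical members."""
        return [f.content for f in self.meaning_postulate if f.kind == "exemplar"]

# A hypothetical concept, not taken from the actual COREL ontology.
BIRD = Concept(
    name="BIRD",
    thematic_frame=["Theme"],
    meaning_postulate=[
        Feature("is an animal", "nuclear"),
        Feature("has feathers and wings", "nuclear"),
        Feature("a robin is a prototypical bird", "exemplar"),
    ],
)

print(BIRD.categorizing())  # the two nuclear predications
print(BIRD.identifying())   # the single exemplar predication
```

The design point mirrors the text: categorization and identification are kept as two functions over one list of predications, rather than as two separate representations.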


To sum up, Mairal et al. (2008) recognize the relevance of the lexical component in Van Valin's RRG and the study of its logical structures, thematic roles and macroroles. In addition, they position themselves in relation to Dik's Fund, his Predicate Frames, his stepwise lexical decomposition, and the notion of lexical templates. It should also be noted that the role played by the lexicon in Mairal's works is increasingly related to an ontological perspective and tends to fit into this type of format.

However, while the inclusion and development of an ontological perspective in linguistic modelling seems fruitful, the attempt to include the role of cognition in functional linguistic descriptions leads to a dead end. General human linguistic processing and linguistic elicitation are indeed influenced by the general rules of human cognition, but the actual linguistic codification and marking of meaningful items is probably determined by the basic communicative nature of language. As Nuyts (2004) explains, communication and cognition are two fundamental human abilities with different functionalities. The following quotation illustrates much of this theoretical positioning:

The functional requirements for a central information processing and storage system, which conceptualization is, are completely different from the functional requirements of a communication system such as language. So it is only natural that they have different shape and organization.

The extent to which these abilities influence each other is still an open question.

6.4.  Formal Models

Mendikoetxea (2008), in De Miguel's (2008) Panorama de la Lexicología, explains how the division of labor between lexicon and syntax has always been a major issue in the Chomskian tradition. Within this approach, the lexicon is considered a store of lexical entries, arbitrary semantic/phonological pairs together with idiosyncratic information about each entry. In addition, and more importantly, a series of formal features are translated into syntactic instructions that specify and determine the syntactic behavior of each lexical unit.


Mendikoetxea's chapter is a very thorough account of the contributions of the generative tradition to lexical studies. Based on the work of some of the most relevant experts in the generative field, such as Hale and Keyser (1993, 2002), Borer (2005a, b) and Levin and Rappaport Hovav (1995), she summarizes the problems the lexicon-syntax interface has to deal with. There are three aspects: firstly, the levels at which lexical items are inserted in all these models; secondly, the nature of lexical representations; and thirdly, the relationship between the lexical-semantic and syntactic properties of predicates.

6.5.  Cognitive Models

Although the philosophical work of Wittgenstein is crucial to understanding 20th century linguistics, and while he identified important problems in traditional Aristotelian categorization, cognitive approaches to linguistic analysis started in earnest in the mid-sixties with the influential work of psychologist Eleanor Rosch and her colleagues on human categorization. Partly based on her work, Lakoff (1980, 1987, 1999) and Langacker (1987) established the basis for an approach that focuses on how human perception, and the human body in general, affect language.

One of the most widely accepted views among cognitive linguists is the idea that there is no separation between linguistic knowledge and general thinking. In this sense they strongly oppose the influential views of other linguists, such as Chomsky and Fodor, who, linked to a truth-based approach in logic and in linguistics, see linguistic behavior as just another separate part of the general cognitive abilities which allow learning and reasoning. Formal and functional approaches to linguistics are usually linked to certain views about the acquisition of language and cognition.
For instance, generative grammar is generally associated with the idea that language acquisition forms an autonomous module or faculty, independent of other mental processes of attention, memory and reasoning, and independent of its communicative functionality. This external view of an independent linguistic module is often combined with a view of internal modularity, so that the different levels of linguistic analysis, such as phonology, syntax and semantics, form independent modules.


In Saeed's (2003) view, cognitivists identify themselves more readily with functional theories of language than with formal ones. This approach implies a different view of language altogether: cognitivism holds that the principles of language use embody more general cognitive principles. Saeed (ibidem) explains that, under the cognitive view, the difference between language and other mental processes is one of degree, not of kind, and he adds that it makes sense to look for principles shared across a range of cognitive domains. Similarly, he argues that no adequate account of grammatical rules is possible without taking the meaning of elements into account. Because of this, cognitive linguistics does not differentiate between linguistic knowledge and encyclopedic, real-world knowledge. From an extreme point of view, the explanation of grammatical patterns cannot be given in terms of abstract syntactic principles but only in terms of the speaker's intended meaning in particular contexts of language use.

The rejection of objectivist semantics, as described by Lakoff, is another defining characteristic of cognitive semantics. What Lakoff calls a 'doctrine' is the theory of truth-conditional meaning and the correspondence theory of truth, which holds that truth consists in the correspondence between symbols and states of affairs in the world. Lakoff also rejects what he, again, defines as the 'doctrine' of objective reference, which holds that there is an objectively correct and unique way to associate symbols with things in the world.

However, conceptualization can also be seen as an essential survival trait of the human species. Born totally defenseless, human offspring would have found it very difficult to survive were it not for this powerful capacity for understanding the real world.
The conceptualization that language allows has been essential for us to survive and dominate other less intellectually-endowed species. If someone hears 'Beware of snakes on the trail', it is the understanding of the concept [SNAKE] that the word snake triggers that allows the hearer to be aware of potential danger. It is this abstraction potential of concepts that helps us navigate the otherwise chaotic world which surrounds us. Because speaker and hearer share the category [SNAKE], communication between them is possible. How this so-called abstraction potential works, whether under a logical, truth-conditional perspective or under a cognitive perspective, is an entirely separate matter.

Cruse (2004: 125) explains how concepts are vital to the efficient functioning of human cognition, and he defines them as organized bundles of stored knowledge which represent an articulation of events, entities, situations, and so on, in our experience. Cruse (ibidem) argues that if we were not able to assign aspects of our experience to stable categories, the world around us would remain disorganized chaos. We would not be able to learn, because each experience would be unique. It is only because we can put similar (but not identical) elements of experience into categories that we can recognize them as having happened before and access stored knowledge about them. Shared categories can thus be seen as a prerequisite for communication.

One alternative proposal within the cognitive linguistics framework is experientialism, which maintains that words, and language in general, have meaning only because of our interaction with the world. Meaning is embodied: it does not stem from an abstract and fixed correspondence between symbols and things in the world but from the way we human beings interact with the world. We have certain recurring dynamic patterns of interaction with the outside world, through spatial orientation, manipulation of objects and motor programming, which are based on the way we are physically shaped. These patterns structure and constrain how we construct meaning. Embodiment, as proposed by Johnson (1987), Lakoff (1987) and Lakoff and Johnson (1999), constitutes a central element of the cognitive paradigm. In this sense, our conceptual and linguistic system and its categories are constrained by the way in which we, as human beings, perceive, categorize and symbolize experience. Linguistic codification is ultimately grounded in experience: bodily, physical, social and cultural.
What cognitivist approaches have contributed to linguistic analysis is a recognition of the important role that human cognition plays in linguistic codification; in other words, of how general human cognition, with its characteristic operations of comparison, deduction, synthesis, etc., affects linguistic codification at many different levels. However, these approaches sometimes tend to confuse the explanation of how meanings are understood with how they are codified. Not all meaning that is understood or expressed through language is actually linguistically codified in empirically identifiable terms, phonetically and morpho-syntactically speaking. There is a great deal of information that does not need to be linguistically codified in linguistic interactions because it can be retrieved from knowledge repositories and their ontological organization; it is already present in the human mind and continually updated by new information.

Nuyts (2007: 548) attempts to draw a line between cognitive linguistics and functional linguistics, explaining the origins of both, their overlapping areas of influence and their specific areas of interest, and emphasizing their respective main contributions. For example, Nuyts (2007: 551) stresses that while cognitive linguistics is much more interested in purely semantic dimensions, emphasizing those which have to do with the way we conceptualize and categorize the world around us, it deals less extensively with the role of communicative dimensions, such as the interactional and discursive features of language, or with the role of shared knowledge and its effects on information structuring. When cognitive linguistics focuses on aspects such as information structure or perspectivization, which usually take the form of 'construal' operations, it does so in terms of how the speaker conceptualizes a situation, not in terms of how the information in an utterance relates to its context. Because of this, linguistic models should differentiate between communication and cognition as two different human functionalities (Nuyts 1996, 2004, 2010) which, although related, are different systems and should be explained differently when included as parts of a linguistic model. The consequences of this distinction affect the design of both linguistic models and their subsequent applications.

6.6.  Conclusions

The first main conclusion to be drawn is that some basic consensus among the functional, generative and cognitive paradigms seems to be emerging.
This consensus concerns the ontological recognition of the structure of the predicate, presented under different formats, notations and emphases.


The concept of predicate frames, as "the" basic scaffolding concept capturing the basic difference between entities and relations, is the key concept in Dik (1987, 1997a, b) and, under different formats and models, it is present in most models of linguistic description. Along the lines of Dik's functionalism, Hengeveld and Mackenzie (2011) stress that relevant grammatical distinctions should be empirically identifiable in the first place. In addition, Connolly (2011) fully integrates context into his model, where context is understood in pragmatic terms: linguistic concepts such as theme-rheme positioning or emphasis, which are usually lexically and collocationally identifiable, can be pragmatically understood. With a more computational orientation, Mairal Usón (2002, 2010) and Periñán-Pascual and Mairal Usón (2010) include a functional grammar knowledge base in the construction of a computational environment.

In their analysis of functional approaches to the study of the lexicon, Mairal and Ruiz de Mendoza (2008) identify a number of problems that all linguistic theories face in dealing with the challenges posed by the analysis of the lexicon. The first is the definition of a metalanguage able to spell out what both formal and functionalist lexicologists call constants. The second is how to express the internal structure of a conceptual ontology and its relations with the lexicon. The third has to do with the identification of the factors affecting argument structure, whether lexical, external or both. And the last is the formulation of the exact mechanisms underlying polysemy. These constants support the basic entity/relation distinction.

Generativism, for its part, has always recognized this basic entity/predicate distinction: Logical Form has underlain the structure of the clause from the very beginning of the Chomskian formalizations.
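The entity/relation scaffolding that predicate frames capture can be sketched informally as follows. This is a hypothetical rendering, not Dik's own formalism: a predicate (the relation) is paired with typed argument slots (the entities), each carrying a selection restriction and a semantic function:

```python
from dataclasses import dataclass

@dataclass
class Argument:
    variable: str               # e.g. "x1"
    selection_restriction: str  # e.g. "animate"
    semantic_function: str      # e.g. "Agent", "Goal", "Recipient"

@dataclass
class PredicateFrame:
    predicate: str
    category: str               # e.g. "V"
    arguments: list

    def __str__(self):
        # Render in a Dik-like (but invented) notation.
        args = ", ".join(
            f"({a.variable}: {a.selection_restriction}){a.semantic_function}"
            for a in self.arguments
        )
        return f"{self.predicate} [{self.category}] {args}"

# An illustrative three-place frame for a ditransitive verb.
give = PredicateFrame("give", "V", [
    Argument("x1", "animate", "Agent"),
    Argument("x2", "any", "Goal"),
    Argument("x3", "animate", "Recipient"),
])

print(give)  # give [V] (x1: animate)Agent, (x2: any)Goal, (x3: animate)Recipient
```

The sketch only illustrates the shared ontological commitment discussed here: relations (predicates) and entities (argument slots) are represented as distinct kinds of object, however a given theory notates them.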
Cognitive Grammar, as in Langacker (1991: 283), recognizes how objects and interactions relate and how they are conceptually dependent. In Langacker (1987: 215), he explains that one cannot conceptualize interconnections without also conceptualizing the entities that are interconnected.

However, not all of the above models or trends account for the basic difference between communication and cognition as two separate functionalities, as in Nuyts (1996, 2004, 2010), which, although related, constitute different systems. Aside from the approaches of functionalists, generativists and cognitivists, Parikh (2006, 2010) offers one of the few approaches that treat language as an information system. In his theory of equilibrium semantics, Parikh provides a statistical and computational model of language and proposes a logico-mathematical description of its functioning, again differentiating entities and relations as basic concepts. In Parikh (2010: 43), his situation theory recognizes infons, relations holding of a number of constituents, as basic informational units.

The above-mentioned basic agreement has had profound effects on the linguistic background of present ontology developments. Even if, in computational terms, both entities and relations are frequently labelled as "objects", in the sense that, once they are computationally "translated", they are just objects subject to computational treatment, the distinction is always kept. As a result, the vast majority of linguistic theories recognize this difference as a very basic underlying ontological differentiation.

The second main conclusion concerns the increasingly relevant role that context plays in linguistic description and formalization. Therefore, general agreement on the relevance of the underlying differentiation between entities and relations and on the role given to the treatment of context are two basic common aspects that most models of linguistic description intend to account for.

7.  FURTHER RECOMMENDED READINGS

Hanks, P. 2003. Lexicography. In The Oxford Handbook of Computational Linguistics. Oxford: Oxford University Press.

Singleton, D. 2000. Introduction: The lexicon-words and more. In Language and the Lexicon. An Introduction. Arnold & Oxford University Press.

Singleton, D. 2000. Lexis and meaning. In Language and the Lexicon. An Introduction. Arnold & Oxford University Press.

Vossen, P. Ontologies. In The Oxford Handbook of Computational Linguistics, Chapter 25. Oxford: Oxford University Press.


8.  REFERENCES

Butler, C. S. and J. Martín Arista. 2008. Deconstructing Constructions. Amsterdam/Philadelphia: John Benjamins Publishing Company.

Casati, R. and A. C. Varzi. 1999. Parts and Places: The Structures of Spatial Representation. Cambridge, Mass.: MIT Press.

Coseriu, E. 1981. Principios de semántica estructural. Madrid: Gredos.

Cruse, A. 2002. Meaning in Language. An Introduction to Semantics and Pragmatics. Oxford: OUP.

Chu-Ren Huang. Ontology and the Lexicon. A Natural Language Processing Perspective. New York: Cambridge University Press.

De Miguel, E. 2008. Panorama de la Lexicología. Barcelona: Ariel.

Dik, S. 1997. The Theory of Functional Grammar. Berlin/New York: Mouton de Gruyter.

Firth, J. R. 1957. Modes of Meaning. In J. R. Firth, Papers in Linguistics 1934-1951. London: London University Press.

Fillmore, C. J. 1968. The case for case. In E. Bach and R. T. Harms (eds.), Universals in Linguistic Theory. New York: Holt, Rinehart & Winston.

Fillmore, C. J. and P. Kay. 1987. The goals of Construction Grammar. Berkeley Cognitive Science Program Technical Report no. 50. University of California at Berkeley.

Fillmore, C. J. and P. Kay. 1993. Construction Grammar Coursebook. Manuscript, University of California at Berkeley, Department of Linguistics.

Ganter, B. and R. Wille. Applied lattice theory: formal concept analysis. In G. Grätzer (ed.), General Lattice Theory. Birkhäuser.

Gartner. 2008. Press release: "Gartner Identifies Seven Grand Challenges Facing IT". Emerging Trends Symposium/ITxpo 2008, April 6-10, Las Vegas. http://www.gartner.com/newsroom/id/643117

Geeraerts, D. 2007. Lexicography. In D. Geeraerts and H. Cuyckens (eds.), The Oxford Handbook of Cognitive Linguistics. USA: OUP.

Goddard, C. and A. Wierzbicka (eds.). 2002. Meaning and Universal Grammar. Theory and Empirical Findings (2 volumes). Amsterdam/Philadelphia: John Benjamins.
Goded Rambaud, M. 1996. Influencia del tipo de syllabus en la competencia comunicativa de los alumnos. Madrid: CIDE, Centro de Investigación y Documentación Educativa. ISBN 84-369-2822-9.



Goded Rambaud, M. 2010a. Can taggers and disambiguators be theory free? The search for a unified approach in lexical representation. In R. Caballero and M. J. Pinar (eds.), Ways and Modes of Human Communication, 1091-1102. Ediciones de la Universidad de Castilla-La Mancha. ISBN 978-84-8427-759-0.

Goded Rambaud, M. 2010b. LEXVIN and the food and wine lexicons. Proceedings of the First International Workshop on Linguistic Approaches to Food and Wine Description. Madrid: UNED University Press.

Goldberg, Adele. 1995. Constructions. A Construction Grammar Approach to Argument Structure. Chicago: University of Chicago Press.

Guarino, N. 1998. Formal Ontology and Information Systems. Proceedings of FOIS'98, Trento, Italy, 6-8 June 1998, 3-15. Amsterdam: IOS Press.

Hale, K. and J. Keyser. 1993. On argument structure and the lexical expression of syntactic relations. In K. Hale and J. Keyser (eds.), The View from Building 20: Essays in Honor of Sylvain Bromberger. Cambridge, Mass.: The MIT Press.

Hale, K. and J. Keyser. 2002. Prolegomenon to a Theory of Argument Structure. Cambridge, Mass.: The MIT Press.

Hanks, P. 2003. Lexicography. In The Oxford Handbook of Computational Linguistics. OUP.

Harris, Z. 1951. Methods in Structural Linguistics (1963 edition). Chicago: Chicago University Press.

Halliday, M. A. K. 1966. Lexis as a linguistic level. In C. E. Bazell, J. C. Catford, M. A. K. Halliday and R. H. Robins (eds.), In Memory of J. R. Firth. London: Longman.

Halliday, M. A. K. 1994. An Introduction to Functional Grammar. London: Edward Arnold.

Halliday, M. A. K. and Christian Matthiessen. 1999/2013. Introduction to Functional Grammar. Routledge.

Hengeveld, K. and J. L. Mackenzie. 2008. Functional Discourse Grammar: A Typologically-Based Theory of Language Structure. Oxford/New York: Oxford University Press.

Hengeveld, K. and J. L. Mackenzie. 2011. La Gramática Discursivo-Funcional. Moenia. Revista Lucense de Lingüística y Literatura 17, 5-45.

Katz, J. and J. Fodor. 1963. The Structure of a Semantic Theory. Language 39(2), 170-210.

Langacker, R. 1987, 1999. The Foundations of Cognitive Grammar, Volumes I and II. Stanford: Stanford University Press.

Lakoff, G. 1987. Women, Fire and Dangerous Things. Chicago: University of Chicago Press.


Levin, B. and M. Rappaport Hovav. 1991. Wiping the Slate Clean: A Lexical Semantic Exploration. In B. Levin and S. Pinker (eds.), Special Issue on Lexical and Conceptual Semantics, Cognition 41. Reprinted in B. Levin and S. Pinker (eds.). 1992. Lexical and Conceptual Semantics, 123-151. Oxford: Blackwell.

Levin, B. and M. Rappaport Hovav. 1995. Unaccusativity at the Syntax-Lexical Semantics Interface. Cambridge, Mass.: The MIT Press.

Levinson, S. 2003. Space in Language and Cognition. Cambridge: CUP.

Levinson, S. 2012. Language, Thought and Reality (collected papers of Benjamin Lee Whorf). New edition, edited with P. Carroll & P. Lee. MIT Press.

Mairal Usón, R. and R. D. Van Valin, Jr. 2001. What Role and Reference Grammar can do for Functional Grammar. In M. J. Pérez Quintero (ed.), Challenges and Developments in Functional Grammar, Revista Canaria de Estudios Ingleses 42, 137-166. La Laguna, Tenerife: La Laguna University Press.

Mairal Usón, R. and P. Faber. 2002. Functional Grammar and lexical templates. In R. Mairal Usón and M. J. Pérez Quintero (eds.), New Perspectives on Argument Structure in Functional Grammar. Berlin/New York: Mouton de Gruyter.

Mairal Usón, Ricardo and F. Ruiz de Mendoza. 2008. Levels of description and explanation in meaning construction. In Ch. Butler & J. Martín Arista (eds.), Deconstructing Constructions, 153-198. Amsterdam/Philadelphia: John Benjamins.

Mairal Usón, Ricardo and Carlos Periñán-Pascual. 2009. Role and Reference Grammar and Ontological Engineering. In Volumen Homenaje a Enrique Alcaraz. Universidad de Alicante.

Mairal Usón, R. and C. Periñán-Pascual. 2010. Teoría lingüística y representación del conocimiento: una discusión preliminar. In Dolores García Padrón & María del Carmen Fumero Pérez (eds.), Tendencias en lingüística general y aplicada, 155-168. Berlin: Peter Lang.

Mendikoetxea Pelayo, A. 2008. Modelos formales. In E. de Miguel (ed.), Panorama de la Lexicología. Barcelona: Ariel Letras.

Mitkov, R. 2003. The Oxford Handbook of Computational Linguistics. USA: Oxford University Press.

Nirenburg, Sergei and Victor Raskin. 2004. Ontological Semantics. Cambridge, Mass./London: MIT Press.

Nuyts, J. and E. Pederson (eds.). 1997. Language and Conceptualization. Cambridge: Cambridge University Press.

Nuyts, J. 2004. Remarks on layering in a cognitive-functional language production model. In J. Lachlan Mackenzie and M. de los Ángeles Gómez-González (eds.), A New Architecture of Functional Grammar. Berlin/New York: Mouton de Gruyter.

Nuyts, J., P. Byloo and J. Diepeveen. 2010. On deontic modality, directivity, and mood: The case of Dutch mogen and moeten. Journal of Pragmatics 42(1), 16-34.

Parikh, P. 2006. Radical Semantics: A New Theory of Meaning. Journal of Philosophical Logic 35.

Parikh, P. 2010. Language and Equilibrium. The MIT Press.

Periñán-Pascual, Carlos and Ricardo Mairal Usón. 2009. Bringing Role and Reference Grammar to natural language understanding. Procesamiento del Lenguaje Natural 43, 265-273. http://www.fungramkb.com/resources/papers/007.pdf

Periñán-Pascual, Carlos and Ricardo Mairal Usón. 2010. La Gramática de COREL: un lenguaje de representación conceptual. Onomazein 21, 11-45. http://www.fungramkb.com/resources/papers/012.pdf

Periñán Pascual, Carlos and Francisco Arcas Túnez. 2011. Introducción a FunGramKB. Anglogermanica Online 2011, 2-15. http://anglogermanica.uv.es:8080/Journal/Viewer.aspx?Year=2011&ID=periarcas.pdf

Periñán-Pascual, J. C. and Ricardo Mairal Usón. 2011a. The Coherent Methodology in FunGramKB. Onomazein 2010/1, 11-45.

Periñán-Pascual, Carlos and Ricardo Mairal-Usón. 2011b. La dimensión computacional de la Gramática del Papel y la Referencia: la estructura lógica conceptual y su aplicación en el procesamiento del lenguaje natural. In R. Mairal Usón, L. Guerrero and C. González (eds.), El funcionalismo en la teoría lingüística. La Gramática del Papel y la Referencia. Introducción, avances y aplicaciones. Madrid: Akal.

Prévot, L., Chu-Ren Huang, Nicoletta Calzolari, Aldo Gangemi, Alessandro Lenci and Alessandro Oltramari. 2010. Ontology and the lexicon: a multidisciplinary perspective. In Chu-Ren Huang et al. (eds.), Ontology and the Lexicon. A Natural Language Processing Perspective. New York: Cambridge University Press.

Pustejovsky, J. 1991. The Syntax of Event Structure. Cognition 41(1-3), 47-81.

Saussure, F. de. 1916. Cours de linguistique générale. Paris: Payot.

Singleton, D. 2000. Introduction: the lexicon, words and more. In Language and the Lexicon. An Introduction. Arnold / Oxford University Press.


Schalley, Andrea and Dietmar Zaefferer (eds.). 2007. Ontolinguistics. How Ontological Status Shapes the Linguistic Coding of Concepts. Berlin: Mouton de Gruyter.

Sowa, John F. 2000. Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove, CA: Brooks Cole Publishing Co.

Van Valin, Robert D., Jr. and Randy LaPolla. 1997. Syntax: Structure, Meaning and Function. Cambridge: Cambridge University Press.

Van Valin, Robert D., Jr. 2005. Exploring the Syntax-Semantics Interface. Cambridge: Cambridge University Press.

Villar Díaz, M. B. 2008. Modelos estructurales. In Elena de Miguel (ed.), Panorama de la Lexicología. Barcelona: Ariel.

9.  KEYS TO EXERCISES

Exercise 1:

Suggested answer:

Likes and dislikes:

Like [find.pleasurable' (x, y)]
Love [find.pleasurable & have.+.feelings (x, y)]
Dislike [NOT.find.pleasurable' (x, y)]
Hate [NOT.find.pleasurable & have.-.feelings (x, y)]

Note that, for instance, for Hate, another notation could use NOT to negate the + of positive feelings, giving something like NOT.find.pleasurable&NOT.have.+.feelings. As can be seen, notations in a metalanguage are conventionalized, just as linguistic signs are.
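This style of feature decomposition can be mimicked with a small data structure. The sketch below is our own illustration, not part of the book's formalism: the feature names follow the notation above, and `antonym_of` is a hypothetical helper that finds the entry whose every feature has the opposite polarity.

```python
# Toy lexicon encoding each verb as signed semantic primitives,
# mirroring the metalanguage notation in the suggested answer above.
LEXICON = {
    "like":    {"find.pleasurable": True},
    "love":    {"find.pleasurable": True,  "have.feelings": "+"},
    "dislike": {"find.pleasurable": False},
    "hate":    {"find.pleasurable": False, "have.feelings": "-"},
}

def antonym_of(word):
    """Return the entry whose features are all the polar opposite (hypothetical helper)."""
    flip = {True: False, False: True, "+": "-", "-": "+"}
    target = {feat: flip[val] for feat, val in LEXICON[word].items()}
    for other, feats in LEXICON.items():
        if feats == target:
            return other
    return None

print(antonym_of("love"))  # hate
print(antonym_of("like"))  # dislike
```

Because the notation is conventionalized, such computations only work within one agreed-upon set of primitives; a different metalanguage would need its own flip table.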

Exercise 2: Suggested answer:


The first step is to make a query in the section "FrameNet Index for Lexical Units": if you search for say there, you obtain something like the following image:


Figure 1. Results for say in the FrameNet index for lexical units.

From there, according to what you want to find, you will obtain the different frameworks in which say can appear. You can get the concept of say in the framework of statements by clicking on Statement:

Definition: This frame contains verbs and nouns that communicate the act of a Speaker to address a Message to some Addressee using language. A number of the words can be used performatively, such as declare and insist.

"I now declare you members of this Society."

You can also access this information quickly by searching for say on the FrameNet homepage:


Figure 2. Exhaustive results for say in FrameNet.

All of this information, given in the form of a conceptual net, is useful, for instance, to obtain a list of terms with which the word say tends to occur. For conceptual purposes, it can help you create an ontology of all the concepts that are related to it. It is also useful to obtain precise meanings of certain lexical items, and to see the tendency of a word to mean one thing or another depending on the context in which it is used. It can be useful, then, to write a dictionary of specific terms, to create an ontology, or to create a bank of terms.
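The idea of extracting a term bank from frame listings can be sketched in a few lines. The snippet below uses toy data (the frame memberships are invented for illustration, apart from the Statement units quoted above), and `related_terms` is a hypothetical helper: for a given word it collects the other lexical units of every frame the word evokes, i.e. the terms it is conceptually related to.

```python
# Toy frame-to-lexical-unit table, imitating the shape of FrameNet listings.
# Frame names and memberships here are illustrative, not actual FrameNet data.
FRAMES = {
    "Statement": ["say.v", "declare.v", "insist.v", "statement.n"],
    "Text_creation": ["say.v", "write.v", "compose.v"],
}

def related_terms(word):
    """Lexical units sharing a frame with `word` (hypothetical helper)."""
    related = set()
    for units in FRAMES.values():
        if any(u.startswith(word + ".") for u in units):
            related.update(u for u in units if not u.startswith(word + "."))
    return sorted(related)

print(related_terms("say"))
# ['compose.v', 'declare.v', 'insist.v', 'statement.n', 'write.v']
```

A real implementation would query the FrameNet data release itself rather than a hand-written dictionary, but the term-bank logic would be the same.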


Words and word boundaries

1. Introduction
2. Interaction between semantics and morphology
3. Setting the grounds: review of basic concepts
4. Words and word boundaries: lexeme, stem, lemma
4.1. Lemma versus lexeme: on-going discussions
4.2. Prandi's (2004) view of lexical meanings: lexemes, the lexicon and terminology
5. The levels of language analysis and their units
6. Further recommended readings
7. Some lexicological sources
8. References
9. Keys to exercises

1. INTRODUCTION

As dictionaries consist of words, we need to understand how words are formed. In this chapter some basic notions about the different levels of linguistic analysis will be outlined. Special attention will be paid to morphology and its interaction with semantics. In addition, we will tentatively see how this interconnection can be applied to various kinds of linguistic studies. For example, the morphological analysis of languages is involved in the creation and development of corpora, thesauri, and lexical and conceptual databases. Some basic morphological notions will also be introduced, as a key to understanding word-formation phenomena.

2. INTERACTION BETWEEN SEMANTICS AND MORPHOLOGY

The semantic analysis of derivation processes addresses the issue of the interaction between semantics and morphological studies. Precisely because a change in meaning is often achieved by a general manipulation of the morphological resources available in each particular language, this connection is worth studying. As shown in chapter 4, the exploitation of morphological analysis for the development of parsers in corpora is a well-known example of this connection.

Morphology is interconnected with all the other disciplines of linguistics. In this respect, Dieter Kastovsky (1940-2012), who received his Dr. Phil. in 1967 for his thesis Old English deverbal substantives derived by means of a zero morpheme, was a pioneer of the idea that morphology plays an important role in language analysis and studies. In the 1970s, thanks to him and others, such as his mentor Hans Marchand, considered one of the fathers of modern morphological theory, morphology broke out of its marginal role and started to occupy a more important position in linguistics. Stekauer (2000: 29) mentions the relevance of Hans Marchand for linguistics with regard to morphology:


Hans Marchand is considered, especially in Europe, to be the 'father' of the modern word-formation theory. In 1960, he published The Categories and Types of Present-Day English Word-formation, a "truly monumental work" (Zandvoort 1961: 120), which "…represents a milestone in the development of the methods of approach to the vast subject of word-formation; it is the decisive step from the traditional diachronic-comparative to a synchronic-descriptive treatment…" (Pennanen 1971: 9). The Categories has become a watershed in the history of research into word-formation not only in English, especially thanks to two facts. First, it delimited word-formation as a linguistic discipline of its own, and secondly, it provided an account of fundamental problems of word-formation in English in a systematic and comprehensive way. Based on the knowledge of word-formation at that time, and drawing on the structuralist traditions, Marchand elaborated his own, original theory making use of an immense empirical material. The analysis of individual word-formation processes is very profound, and supported by the history of individual types, including numerous examples. This book (as well as its 1969 revised edition) exerted considerable influence on many other linguists, chiefly Marchand's Tübingen students, including D. Kastovsky, H. Brekle, K. Hansen, L. Lipka, G. Stein, etc., who developed Marchand's theory and enriched it with a number of valuable ideas. (…) Two basic methods are applied in the science of language: the synchronic and the diachronic methods. Marchand's method is both synchronic and diachronic. The chief purpose is synchronic, the description of present-day English word-formative types. At the same time, Marchand describes the growth of these types in the past states of the language (…).

The fact that word-formation studies are connected to diachronic methods of language analysis is obvious: word-formation phenomena happen over time. They are a direct consequence of language change. However, we can study them in the synchrony (that is, in the here and now, analyzing present-day English) because in the present language we have both formal and concrete proof of the effects of word-formation processes in the language². Sometimes it is easy to observe language change, namely whenever word-formation processes are overtly realized, such as in drive > driver, driving. This is what we call transparent word-formation phenomena. However, there are also what we call opaque word-formation phenomena. These are cases in which we have a terminal word with no overtly realized derivative morphemes, as in the case of the verb and the noun saw. Which one came first, the noun or the verb? Marchand called this process zero derivation, as described in chapter 3. In such a case, how do we know which one is the base and which is the derived form?

In order to establish the derivative chain of non-transparent derivation phenomena, we have two types of analysis: diachronic analysis and synchronic analysis. A diachronic analysis will give us the answer by looking at older stages of the language and by seeing which of these words appeared first in texts and/or in lexicographical compilations. A synchronic analysis will make use of other devices, normally principles deriving from other areas of language, such as semantics. In this case, we can refer to some of Marchand's (1966: 12, 14) content-dependent criteria: these are the principle of semantic dependency and the principle of semantic range, which read as follows:

(1) a. Principle of semantic dependency: The word that is dependent for its analysis on the content of the other pair member is automatically the derivative.

b. Principle of semantic range: Of two homophonous words exhibiting similar sets of semantic features, the one with the smaller field of reference is the derivative. That is, the more specific word is the derivative.

o-o-o-o-o-o-o

Exercise 1: Explain why the verb saw must be derived from the noun saw according to the two principles in (1). Give another example of zero derivation and explain which one is the base word and which is the derived word (the derivative, in Marchand's terms), referring to the same criteria.

² As a Buddhist principle says: if you want to know about the consequences of your past actions (the so-called karma), observe yourself in the present moment, because effects manifest themselves in the here and now. If you want to know what will happen to you in the future, observe the present, because the results of your present actions will manifest themselves in the future. This principle can be applied to language: the present-day language can be observed in order to obtain information about the past language and its evolution over time.
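Marchand's two content-dependent criteria can be pictured as a small decision procedure. The sketch below is our own illustration, not Marchand's formalism: the reference fields are hypothetical one-sense glosses, and `derivative` is a hypothetical helper that first applies semantic dependency and falls back on semantic range.

```python
# Toy representation of a homophonous pair: (word, part-of-speech) keys.
# The glosses are illustrative; 'cut with a saw' is defined via the noun,
# which is what the principle of semantic dependency detects.
REFERENCE_FIELD = {
    ("saw", "n"): {"toothed cutting tool"},
    ("saw", "v"): {"cut with a saw"},
}

# Principle of semantic dependency: the verb's analysis depends on the noun.
DEPENDS_ON = {
    ("saw", "v"): ("saw", "n"),
}

def derivative(pair_a, pair_b):
    """Pick the derived member of a homophonous pair (hypothetical helper)."""
    # 1. Semantic dependency: the dependent word is automatically the derivative.
    if DEPENDS_ON.get(pair_a) == pair_b:
        return pair_a
    if DEPENDS_ON.get(pair_b) == pair_a:
        return pair_b
    # 2. Semantic range: the word with the smaller field of reference is derived.
    return min((pair_a, pair_b), key=lambda p: len(REFERENCE_FIELD[p]))

print(derivative(("saw", "n"), ("saw", "v")))  # ('saw', 'v')
```

The encoding of dependency is, of course, the analyst's judgment made explicit; the code only makes the reasoning order of the two principles visible.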


o-o-o-o-o-o-o

Marchand (1966) also developed other criteria that linked morphology to other levels of language, such as phonology, which he called form-related criteria. They are outlined in (2) below:

(2) a. Principle of phonetic shape: A certain phonetic shape may put a word into a definite word class.

b. Principle of morphological type: Morphological type is indicative of the primary or derived character of composite words.

c. Stress criterion: With composites, stress is sometimes indicative of a directional relationship between a substantive and a verb.

Following Stekauer (2000: 35), typical examples of criterion (2.a) would be typical noun endings, such as -ation, -action, -ension, or -ment. Thus, verbs ending in -ment can be considered derived from nouns in -ment, because this suffix is prototypically a nominal suffix, unless other criteria clash with this one. Examples would be to torment or to ferment. As for criterion (2.b), it refers to the distinction between single or simple forms on the one hand and derived and compound forms on the other. Examples of this criterion are the compound nouns blacklist (adj/noun), snowball (noun/noun), and screwdriver (noun/noun), whose base forms are list, ball, and driver, respectively. These bases are nouns, as are the compound forms that they form. As for criterion (2.c), Stekauer puts forward the case of compound verbs with locative particles: they have the basic stress pattern "middle stress/heavy stress", as in oùtlíve, ùnderéstimate, ìnteráct, whereas compound nouns carry the heavy stress on the first element, as in oúthoùse, únderwèar, or úndercùrrent. Verbs carrying this pattern will be desubstantival verbs (that is, derived from nouns).

o-o-o-o-o-o-o

Exercise 2: Analyze the two compounds outlaw (verb) and outlaw (noun) and provide the derivative chain of their word-formation process. Use the criteria mentioned in (2) to justify which one is derived from which.
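A rough empirical aid for exercises like this one: in a corpus, base forms tend to occur more often than their derivatives, so ranking a word family by frequency suggests a derivative chain. The sketch below is a toy illustration; the counts are invented, and in practice they would come from a corpus or from web search hit counts.

```python
# Invented occurrence counts for the drive family (for illustration only;
# real figures would come from a corpus query).
TOY_COUNTS = {
    "drive": 5_000_000,
    "driving": 2_000_000,
    "driver": 1_500_000,
}

def order_by_derivation(words):
    """Sort forms from base to most derived: fewer occurrences = more derived."""
    return sorted(words, key=lambda w: TOY_COUNTS[w], reverse=True)

print(order_by_derivation(["driver", "drive", "driving"]))
# ['drive', 'driving', 'driver']
```

This is only a heuristic: frequency can be skewed by homography (saw the noun and saw the verb share a spelling) and by register, so it should be weighed against the semantic and form-related criteria above rather than replace them.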


Example: outgoing. By the principle of phonetic shape, -ing is a typical ending of adjectives. Thus, this word is an adjective, and therefore out has probably been added later to going. The derivative chain would then be as follows:

(3) go (vb) > going (vb) > outgoing (adj)

As for going, according to the stress criterion, compound verbs with locative particles have the stress pattern "middle stress/heavy stress", which is what we have here. Thus, outgoing comes from a verb (base go) and a locative particle. In this case the principle of morphological type, which says that most compound forms come from nouns, does not apply, since the base word going is only a verb and not a noun. Another possible derivative chain would be as follows:

(4) go > outgo > outgoing

In this case, first the prefix was added to the base, and then the suffix. A good way to see which came first is by referring to corpora or banks of terms (using Google as a giant one) and checking the number of occurrences of an item. The fewer the occurrences, the more highly derived the word is.

o-o-o-o-o-o-o

Additionally, Dieter Kastovsky, who, as mentioned before, was one of the main figures of word-formation studies, also noticed the interconnectivity of morphology and other areas of language. He began his 1977 article "Word-formation or: at the crossroad of morphology, syntax, semantics and the lexicon" as follows:


Figure 1. Beginning of Kastovsky's (1977) article on word-formation interdisciplinarity.

However, even if morphology is the foundation for many fields of study (due to its interconnection with phonological as well as syntactic and semantic processes), nowadays we still see that a model of description for morphology in certain theories of grammar is under construction. In chapter 3 we review some recent proposals to integrate morphology into a model of grammar.


3. SETTING THE GROUNDS: REVIEW OF BASIC CONCEPTS

Leaving theoretical discussions aside, the analysis of morphology and its connection to semantics is reviewed here from a functional and applied point of view.

The object of study in semantics raises one question: which is the basic unit of meaning in language, the word or the sentence? Well, in the first place, there are intermediate levels between the word and the sentence that seem to be significant. These are phrasal expressions such as out of the blue. Here we are not going to discuss which approach is best or what comes first (words or sentences). We will instead focus on the meaning of words and on words' internal structure and composition with regard to meaning and function, that is, on lexical semantics. Lexical semantics, dealing with word meaning, interplays with morphology, which deals with word grammar.

The kind of meaning we analyze in words and in word formation is related to the interconnection of form and meaning: it is the type of linguistic analysis that connects word structure and function. With regard to the process of meaning, the word unit was accurately outlined by the founder of modern linguistics and semiotics, Ferdinand de Saussure (1916), with his famous distinction between signifier and signified, and later on, when dealing with semiotics, by Ogden & Richards (1923), who represented the process of meaning with their famous triangle:

(5)

            Concept
           /       \
Word ----------- Referent (object)

It is important to note that the meaning of a word is not the object, but the concept that we have of that word in our mind. Let us define these three elements in order to have a better understanding of this triangle and its connection to morphology:

Concept: we can define a concept as a 'visual image'. This definition is easily applied in the case of concrete nouns, such as house, dog, or tree, but for


abstract nouns we have to define it as an 'abstract image', or as a 'mental abstraction'. A concept is the mental image we have of a word, based on our daily experience.

Referent: the external, physical (or mental, if it is abstract) object that the word represents. For instance, for the word smile we have a mental image of a smiling face, the concept 'smile', and this mental image is connected to the external object or referent itself. In the case of abstract concepts such as 'love' or 'hate', we do not have a physical referent, but we may relate them to our thoughts and/or feelings. In both cases, referents are tied to our experience, and language is thus a conceptualization of referents, of our external reality. This is why different languages express different things, according to what the speakers of these languages experience. As an example, let us take the Spanish word merienda, referring to the snack or light meal that is customary in Spain between 17:00 and 18:00, consisting of a small sandwich, a tapa, biscuits, etc. The WordReference Spanish-English dictionary confusingly defines it as 'afternoon snack'. Notice, in the first place, that the concept of 'merienda' is not lexicalized in other languages such as English. Besides, defining it as an 'afternoon snack' can be confusing for an English recipient, for whom the word afternoon refers to a span of time between 12:00 and 16:00. Thus, the concept attached to afternoon differs from the concept attached to its Spanish equivalent mediodía, due to cultural reasons.

Finally, the word is usually identified as a lexical item. We have to distinguish the lexical item from the lexeme. In section 4 we see in more detail what a word is and what its boundaries are.

4. WORDS AND WORD BOUNDARIES: LEXEME, STEM, LEMMA

As just mentioned, a word is a lexical item: a lexical unit. In written language, words are separated by spaces or punctuation signs. Both a word and a lexeme are composed of morphemes. A word is a concrete unit of morphological analysis, as opposed to a lexeme, which is a basic lexical unit of language, consisting of one or several words, considered as an abstract unit, and applied to a family of words related by form or meaning. A lexeme is an abstract unit of morphological analysis in linguistics, which roughly corresponds to the set of forms taken by a single word. For example, in English, run, runs, ran and running are forms of the same lexeme, conventionally written as RUN. Roughly, the lexeme is the set of inflected forms taken by a single word: the lexeme RUN includes as members run (lemma), running (inflected form), and ran, and excludes runner (derived term). Likewise, the forms go, went, gone and going are all members of the English lexeme GO, and as such they are stored in the English lexicon. Thus, word families in their inflectional system are all part of the same lexeme. Lexeme, then, refers to the set of all the forms that have the same base meaning.

A lemma is also an important concept within morphology (as well as in lexicology, terminology, terminography and lexicography): a lemma (plural lemmas or lemmata) is the canonical form, dictionary form, or citation form of a set of words, i.e., of a lexeme. In English, for example, run, runs, ran and running are forms of the same lexeme RUN, with run as the lemma. The lemma is the particular form that is chosen by convention to represent the lexeme. Lemmas and lexemes have special significance in highly inflected languages such as Turkish and Czech, in order to observe word-formation and inflectional processes. A lemma is also very important in the process of dictionary writing, and in the compilation of any kind of lexicological or terminological work. The process of determining the lemma for a given lexeme is called lemmatisation.

o-o-o-o-o-o-o

Exercise 3: Lexemes are marked in capital letters as a kind of lexicological abstraction indicating the complete set of words that share a core meaning and differ only in inflection and cliticization. Give an example of a lexeme and the inflectional word family it comprises. Indicate the inflectional features of this family of words and of the lemma. Also indicate a word derived from that lemma and explain why it cannot be considered part of the lexeme.
o-o-o-o-o-o-o
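Lemmatisation, as just defined, amounts to mapping each member of a lexeme's inflectional family onto its citation form. The following is a minimal sketch with a toy two-lexeme table; real lemmatisers (such as those shipped with NLP toolkits) use large lexicons and morphological rules instead, and `lemmatise` here is our own hypothetical helper.

```python
# Toy lexemes: each family lists the lemma (citation form) first,
# followed by the remaining inflected forms.
LEXEMES = {
    "RUN": ["run", "runs", "ran", "running"],
    "GO":  ["go", "goes", "went", "gone", "going"],
}

# Build a form -> lemma lookup table from the families above.
FORM_TO_LEMMA = {
    form: family[0] for family in LEXEMES.values() for form in family
}

def lemmatise(token):
    """Return the lemma of a known inflected form; unknown tokens pass through."""
    return FORM_TO_LEMMA.get(token, token)

print([lemmatise(t) for t in ["ran", "went", "running", "runner"]])
# ['run', 'go', 'run', 'runner']
```

Note that runner falls through unchanged: as a derived term it is deliberately not a member of the lexeme RUN, which is exactly the lexeme/derivative boundary Exercise 3 asks about.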


4.1. Lemma versus lexeme: on-going discussions

The distinction between lemma and lexeme is the concern of several areas of linguistics. By way of example, within the field of psycholinguistics, Roelofs et al. (1998), who deal with the concepts of lemma and lexeme as a way to study the retrieval of lexical entries by aphasic patients, state that the distinction between lemma and lexeme, known in the literature as the lemma/lexeme distinction and introduced by Kempen and colleagues (Kempen and Huijbers, 1983; Kempen and Hoenkamp, 1987), is as necessary as it is controversial. In their theory, the lemma of a lexical entry specifies its semantic-syntactic properties, and the lexeme specifies its morphophonological properties. The distinction between a syntactic and a morphophonological level of formulation was first proposed by Garrett (1975) to account for certain properties of speech errors. Garrett's theory did not yet include the lemma/lexeme distinction, but it paved the way for the distinction made by Kempen et al.

According to Roelofs et al. (1998), the lemma/lexeme distinction plays a prominent role in Levelt's (1989) theory of speaking, as well as in other models of speaking such as Roelofs (1997). According to these models, there are two steps involved in the retrieval of lexical items: during the first step, often called lemma retrieval, the syntactic properties of the word and, in some views, its meaning, are retrieved from memory. During the second step, often called lexeme retrieval, information about the morphophonological form of the word is recovered. However, authors such as Caramazza and Miozzo (1997) state that the lemma/lexeme distinction is unnecessary, because there is only one lexical level in the meaning of a word, and not two. According to them, a single lexical node is linked to the meaning and syntax as well as to the phonological properties of a word.
For them, semantically and syntactically specified representations (lexemes) of concepts directly connect to their words, whereas Roelofs et al. (1998) hold the view that the mapping of concepts onto segments is mediated by a second lexical level, namely that of lemmas. In the quotation below (Roelofs et al. 1998: 221-222), the discussion of this topic, as well as its effects on the study of speech models, is reflected:

Thus, the disagreement is whether words are represented as semantically and syntactically specified entities independently of their forms. CM [Caramazza and Miozzo 1997, Miozzo and Caramazza 1997] put forward three main arguments for their view. First, grammatical class deficits in


Words and word boundaries

anomic patients may be restricted to the spoken or to the written modality. Second, semantic substitution errors made by patients and normal speakers may occur in only one modality of output, and different spoken and written semantic errors may be made in response to the same object. Based on these two classes of observations CM suggest that modality-specific lexical representations are accessed in speaking and writing, which pleads against the existence of modality-neutral lemmas. Third, in tip-of-the-tongue states (TOTs), grammatical and phonological form information appear to be available independently of each other. This, at first sight at least, argues against a model proposing that speakers first access a lemma and only after successful lemma access embark on the retrieval of the corresponding lexeme. Below, we argue, first, that though our model concerns speaking and has never taken position on writing, extensions to writing are possible that are compatible with the evidence from aphasia and speech errors. Second, we demonstrate that our model does not predict a dependency of gender and form retrieval in TOTs. Our final point is that, on our view, Caramazza and Miozzo’s proposal fails to account for important parts of the evidence motivating the lemma/lexeme distinction.

Within the field of lexicology, Kaszubski (1999) also discusses the status of lemma versus the status of lexeme, this time within the framework of the building of a corpus database developed by the Dutch Center of Lexical Information (Max Planck Institute), called the CELEX corpus3. Kaszubski (1999) arrives at the conclusion that the lemma is the headword of all lexical units included in a lexical entry, and it contains —as described below— several types of linguistic information, rather than just semantic information, as would occur in dictionary compilation. Going further, sometimes in this database the distinction between lemma and lexeme does not hold, as lexical entries are named headword, an umbrella term covering both.

CELEX is a lexical database of English, German and Dutch4. Focusing on English, it is an example of what type of linguistic information a lexicological compilation may include. When beginning to use the CELEX English database, the user first has to choose among three so-called "lexicon types":

(6)
1. A lemma lexicon
2. A wordform lexicon
3. A corpus lexicon

Each lexicon type uses a specific kind of entry. The lemma lexicon is most similar to an ordinary dictionary, as every entry in this lexicon represents a set of related inflected words. In a lexicon, a lemma can be represented by using a headword (as in traditional dictionary entries) such as, for example, call or cat. The wordform lexicon yields all possible inflected words: each entry in the lexicon is an inflectional variant of the related headword or stem. Consequently, a wordform lexicon contains words like call, calls, calling, called, cat, cats and so on. A corpus type lexicon, on the other hand, simply gives you an ordered list of all alphanumeric strings found in the corpus with raw string counts, undisambiguated for relations to either lemmas or wordforms.

The lexical data that can be selected for each entry in the different English lexicon types can be divided into five categories: orthography, phonology, morphology, syntax and frequency. The following data are provided for each of these categories:

(7)
—— Orthography (spelling)
• with or without diacritics
• with or without word division positions
• alternative spellings
• number of letters/syllables

—— Phonology (pronunciation)
• phonetic transcriptions (using SAMPA notation or Computer Phonetic Alphabet (CPA) notation) with:
• syllable boundaries
• primary and secondary stress markers

3 For Kaszubski's discussion about the distinction between lemma and lexeme in some languages and specific cases, check http://torvald.aksis.uib.no/corpora/1999-4/0038.html.
4 For more information on the CELEX database and other lexical databases and corpora, visit Jen Smith's Lexical databases and corpora: http://www.unc.edu/~jlsmith/lex-corp.html.



• consonant-vowel patterns
• number of phonemes/syllables
• alternative pronunciations

—— Morphology (word structure)
• Derivational/compositional: division into stems and affixes; flat or hierarchical representations
• Inflectional: stems and their inflections

—— Syntax (grammar)
• word class
• subcategorisations per word class

—— Frequency
• COBUILD frequency

Regarding frequency data, they are based on the COBUILD corpus (18 million words) built up by the University of Birmingham, Great Britain. In order to further illustrate how CELEX works, the following examples show the results that are obtained by certain queries. Example 1 shows the results obtained after making a lemma lexicon query of items such as celebrant or cell (listed first and third in the headword column below):

Example 1. Result after making a small English lemma lexicon query.




The Headword column shows the list of all the words, beginning with cel-, alphabetically arranged. As outlined in (7), this column contains the canonical form of the word in terms of orthography. The column Pronunciation represents the phonetic transcription, with @ being used to represent the traditional schwa /ə/. It also establishes syllable boundaries, separated by hyphens, and stress markers, represented by capital letters, the number of phonemes, and alternative pronunciation forms (marked by an asterisk, as in cellar: sE-l@r*, indicating that the r can be pronounced or not). The third column contains morphological information about the derivative chain of the word. For example, for celebrant we have ((celebrate)(ant)): this means that the base (source) item is celebrate, to which a suffix, -ant, is added. This morphological information is widened with morphological data in column 4, where information about the syntactic and morphological category of the source item is given. In the case of celebrate, the Vx shows that it is a verb. As for the fourth column, this provides information about the terminal word (that is, the word resulting from the derivative process, after adding -ant to the verb celebrate): N, a noun. Finally, the last column specifies numerical data about the frequency of occurrence of these words in the COBUILD corpus. The lemma lexicon query, therefore, provides exhaustive morpho-phonological information about lexical items. Example 2 below shows the result obtained after making a wordform lexicon query of a lexical item, such as celebrant:


Example 2. Selection from a small English wordform lexicon, showing the inflectional variants of the headwords given in the previous example.




Notice that in the wordform lexicon the column Morphology: structure is omitted. In turn, another column, Word division, is present, which gives information about how a word is separated into syllables. The main difference between this type of query and a lemma lexicon query is that it gives all the forms a word has. In the case of celebrant, for example, it also includes the plural form, listed second in example 2. Therefore, this query may be useful in order to carry out studies related to plural-singular forms, the frequency of use of one form or another, etc. This database can help us realize the importance of morphology and other areas of study in the compilation of lexical databases. Additionally, there are many corpora and dictionaries where lemmatization is fundamental for the compilation of the data they include. A comprehensive list of corpora is available in the W3-Corpora Project: http://www.essex.ac.uk/linguistics/external/clmt/w3c/corpus_ling/content/corpora/list/index2.html. Also, for an exhaustive account of corpus-based multilingual resources, see Manuel Barbera's page: http://www.bmanuel.org/clr/index.html.

o-o-o-o-o-o-o

Exercise 4

Look up a word in two different lexicological and/or lexicographical sources: a corpus such as the British National Corpus —BNC— (http://www.natcorp.ox.ac.uk/), or the Corpus of Contemporary American English —COCA— (http://corpus.byu.edu/coca/); a lexical database such as Wordnet (http://wordnet.princeton.edu/), Framenet (https://framenet.icsi.berkeley.edu/fndrupal/) or Nerthus (http://www.nerthusproject.com/); a bilingual dictionary; a monolingual dictionary; etc. Identify the lemma of that word. Specify all the linguistic information that is given for that word. State whether it is adequate for the type of source you consulted, and give your opinion about it: which source is better for you? Which source has more useful information? etc.

o-o-o-o-o-o-o
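The contrast between a lemma lexicon and a wordform lexicon discussed above can be made concrete with a small sketch. The entries and frequencies below are invented, and the record layout is only loosely inspired by CELEX; it is not the database's real format.

```python
# A miniature lemma lexicon: one entry per lexeme (headword), listing its
# inflectional variants with invented corpus frequencies per form.
LEMMA_LEXICON = {
    "celebrant": {"celebrant": 12, "celebrants": 3},
    "cell": {"cell": 410, "cells": 380},
}

# A wordform lexicon lists every inflected form as an entry of its own,
# pointing back to its headword (lemma).
wordform_lexicon = {
    form: {"lemma": lemma, "frequency": freq}
    for lemma, forms in LEMMA_LEXICON.items()
    for form, freq in forms.items()
}

def lemma_frequency(lemma: str) -> int:
    """Lemma-level frequency: the sum over all the lemma's wordforms."""
    return sum(LEMMA_LEXICON[lemma].values())

print(wordform_lexicon["celebrants"])  # {'lemma': 'celebrant', 'frequency': 3}
print(lemma_frequency("cell"))         # 790
```

The two structures hold the same information; they differ only in what counts as an entry, which is exactly the choice the CELEX "lexicon type" menu asks the user to make.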



4.2. Prandi's (2004) view of lexical meanings: lexemes, the lexicon and terminology

As Prandi (2004)5 explains in his approach to lexical semantics, developed in his proposal of Philosophical Grammars, lexical meanings have two-fold roots, both formal and cognitive. In particular, he notes that it is relevant to define, for each atomic concept, to what extent its structure depends on language-specific formal lexical structures, and to what extent it depends on an independent cognitive categorization shared across different linguistic communities.

Prandi (2004) also contrasts the lexicon of natural languages, containing concepts whose identity is critically dependent on specific lexical structures and which he defines as endocentric concepts, with those concepts firmly rooted in an independent categorization of shared things and experiences, easily transferable across languages, which can be called exocentric concepts. Prandi takes the notion of endocentricity/exocentricity from constituency grammars, which define constructions on the basis of the presence vs. absence of a head constituent. With regard to concepts, an endocentric concept would be related in form to a specific language's lexical structures, and an exocentric concept would be more cross-linguistic, precisely due to its higher independence from specific linguistic categorizations. An example of the latter would be the concept of 'mother', expressed in different languages as mum, mamá, mother, mère, moeder, etc. The lexical items related to the concept of 'mother' are similar across languages, as their structure is independent of any specific language and is more related to universal categorizations. Additionally, in the first stages of children's speech, the so-called babbling, the sounds children utter are phonetically related to this lexicon.
Lexemes that carry exocentric concepts circumscribe within language-specific lexical structures a layer of natural terminology, which finds its natural complement in specialized terminological repertories connected with special purposes. Exocentric concepts, as they lend themselves more readily to specialization, provide a useful bridge between natural language and specialized lexicon, and between lexical and terminological research, which encourages the sharing of methodological tools. Terms (specialized lexicon) connect a signifier and a signified like any other sign, and Prandi (2004) claims that their specificity lies

5 For an exhaustive account of Prandi's Philosophical Grammar see Prandi (2004), where a systematic exposition of the style of analysis called Philosophical Grammar, of its theoretical presuppositions, semiotic implications and empirical perspectives is offered, and Prandi's webpage: http://www.micheleprandi.net/.



in the way they are shared: not by a whole linguistic community, but by groups containing specialized sectors of different linguistic communities. More discussion on terms and terminological applications is provided in chapter 6.

5. THE LEVELS OF LANGUAGE ANALYSIS AND THEIR UNITS

There are different linguistic levels of analysis, or fields, which go from more abstract to more concrete, and from more universal to more idiosyncratic. Furthermore, these levels are arranged in such a way that each one comprises the level below it. This can be represented as a set of concentric circles, as illustrated in figure (10). This organization of linguistic levels is reflected in the internal structure of the clause (Van Valin 1993, Dik 1997, Van Valin 2005), and also in the internal structure of the word (Selkirk 1982, Harley and Noyer 1999, Everett 2002, Martín Arista 2007, Ibáñez Moreno 2012, etc.).

According to the functional paradigm (Functional Grammar, Functional Discourse Grammar, Role and Reference Grammar, Lexical Functional Grammar, etc.), then, pragmatics, the outermost level of linguistic analysis, is seen as the framework that embraces semantics and syntax. In turn, semantics is one instrument of pragmatics, and syntax is one instrument of semantics. That is, we use syntax to convey meanings (semantics), and we use meanings to communicate ideas in different —more or less subtle— ways (pragmatics). The main idea is that units at all linguistic levels serve the same general enterprise: to communicate meaning. This also means that meaning is a product of all linguistic levels. Let us see how.

At the level of phonology, if we change one phoneme for another we get a different meaning: cat-rat. Thus, phonemes are functionally distinct elements; they carry some type of meaning. This system is also attached to meaning in the sense of prosody: when we utter a sentence, we can transmit different meanings depending on how we utter it.
For instance: (8) a.  PETER stole the bike b.  Peter STOLE the bike c.  Peter stole THE BIKE



Supposing the words in capitals in (8) are the ones that have been given more emphasis when they were being uttered, we can see that the meaning the speaker is giving to the utterance changes: in (8.a) the speaker may mean that it was Peter who stole the bike, and not John or Sarah. In (8.b), the fact that stole is given more emphasis means that Peter stole the bike, instead of repairing it or buying it. That is, it emphasizes the action presented by the verb. In (8.c) the emphasis is given to the object, the bike, which means that it is a bike that Peter stole, and not a car or a truck.

At the level of morphology, both inflectional and derivational morphemes provide words with specific meaning. For instance, the word drivers is composed of three morphemes: driv(e)- + -er + -s: the root driv(e)- carries two bound morphemes, the derivational agentive suffix -er and the inflectional plural suffix -s. The meaning that -er gives when added to the root of the verb meaning 'to drive' is 'person who does the action referred to by driv(e)-'; that is, -er gives the meaning of agency at a semantic level. At a grammatical level, -er also implies that the resulting derived word will be a noun. Thus, a derivational suffix interacts with semantics and with grammar, since the derived word will belong to a specific or different grammatical class depending on the derivational suffix. As for the inflectional suffix -s, in line with all inflectional affixes, it does not modify the grammatical class of lexical items, but it changes the grammatical information and the semantic information of the item, in such a way that the structure of the clause where such an item is located will also change.
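The decomposition of drivers can be sketched as a naive affix-stripping routine. This is a deliberately simplified illustration: the tiny affix tables and the function name segment are invented for the example, and real morphological analysers are far more elaborate.

```python
# Minimal, hand-made affix inventories for this illustration only.
INFLECTIONAL_SUFFIXES = {"s": "plural"}
DERIVATIONAL_SUFFIXES = {"er": "agentive (verb -> noun)"}

def segment(word: str) -> list[tuple[str, str]]:
    """Strip inflectional, then derivational suffixes from the right edge."""
    morphemes = []
    for table, kind in ((INFLECTIONAL_SUFFIXES, "inflectional"),
                        (DERIVATIONAL_SUFFIXES, "derivational")):
        for suffix, meaning in table.items():
            if word.endswith(suffix) and len(word) > len(suffix):
                morphemes.insert(0, ("-" + suffix, f"{kind}: {meaning}"))
                word = word[: -len(suffix)]
    morphemes.insert(0, (word, "root (from 'drive')"))
    return morphemes

for morpheme in segment("drivers"):
    print(morpheme)
# ('driv', "root (from 'drive')")
# ('-er', 'derivational: agentive (verb -> noun)')
# ('-s', 'inflectional: plural')
```

Note that the inflectional suffix sits outside the derivational one, which is why it is stripped first: the order of stripping mirrors the order of attachment in reverse.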
Inflectional affixes provide the word with relevant syntactic information: if instead of one driver we have two or more drivers, the subject becomes plural and the verb will also have to be modified (although in English this is not always overtly realized), in order to follow certain concordance rules. Semantically it is also different to have one agent or to have two or more agents.

At the level of syntax, two different structures can produce slightly different meanings:

(9)
a. John broke the window
b. The window was broken by John

In (9) we see that the idea is the same, but in each sentence the attention is directed to a different part: in (9.a) it may be emphasized that it was the window that John broke, and in (9.b) it may be emphasized that it was John who broke the window. Furthermore, the passive voice allows for the omission of the agent, which also has meaning implications: The window was broken. Thus, the way elements are organized and expressed in the sentence also carries meaning. Syntax is connected to morphology and to semantics, and it is less uniformly realized across languages than phonology or morphology. For example, we have SOV languages (Subject-Object-Verb), such as Basque, and SVO languages, such as Spanish and the rest of the Romance languages.

Semantics, as a linguistic discipline, studies the codification of the meaning that is transmitted among speakers of a language. Because of this, it can be approached from a sort of external logical and philosophical perspective or from an internal perspective. In the first approach, semantics can be seen as the study of concepts and of referents and their relations with the surrounding world, and it also has to do with the idea that language is a conceptual phenomenon. It is more idiosyncratic, that is, more dependent on culture and on other aspects that relate language to the world, than syntax. Semantics is the study of concepts and of referents, and these are directly tied both to the external world and to speakers. From this comes the idea that language is a conceptual phenomenon, as defended by Cognitive Semantics (Lakoff 1987, Lakoff and Turner 1989), which developed into Blending Theory (Fauconnier and Turner 2002, Delbeque and Maldonado 2011) and Frame Semantics (Coulson 2001), and other conceptually based analyses of language (Jackendoff's 1992 Conceptual Semantics, for instance). Thus, the interpretation of reality is systematized through the discipline of semantics.
In the case of lexical semantics (as opposed to truth-conditional semantics) the study of meaning is connected to words and the internal structure of conceptual phenomena as represented by such words, and in this sense, semantics looks into morphological and syntactic phenomena, and it attempts to reach higher levels of abstraction, that is, universality, as opposed to the idiosyncrasy of the external world. In this view, then, semantics is closely related to pragmatics, and some linguists consider pragmatics as including semantics and others view the relation between both disciplines in the opposite direction. At the level of pragmatics: Pragmatics is the discipline that studies the connection of speakers to the world; it is the study of the speaker’s/hearer’s interpretation of language. Thus, it is meaning described in relation to speakers and hearers, in context.



Language as a conceptual system: this expression is connected to what is described in point 4.2 above, in the sense that it supports the idea that language structures and cognition are related. It means that different languages sometimes express the same things differently, depending on the real world that surrounds them. Lakoff (1987) exemplifies this with the fact that in some languages there are only three words for colors: red, black and white, where people obviously perceive other colors such as the green that grass shows. In Australia, some aboriginal languages contain words that include referential orientations up, down, or around Ayers Rock, their sacred mountain.

Around all this there is still an open debate in linguistics, closely linked to the question of how knowledge is acquired, which can be traced back to the Greeks, 2,500 years ago: is there an overall conceptual system for all languages, or does each language have its own? There are two main positions on this: universalists and relativists. Universalists (Universal Grammar, 1960s) are closely linked to the Platonic philosophical tradition, holding that there are pure ideas out of which we form concepts, and also emphasizing the idea that linguistic universality is based on our common biological endowment. On the other hand, relativists, following the Sapir-Whorf hypothesis developed in the 1930s, stand in the Aristotelian tradition, which holds that we acquire knowledge through experience. This tradition received an important contribution from the English empiricists in the 17th and 18th centuries, with Locke and his Essay Concerning Human Understanding. It is important to emphasize the fact that the foundations of modern science and the present scientific and technological discoveries are based on this idea of the value of what can empirically be demonstrated against what can only be believed.
What is universally shared by all human beings is our 'hardware': our innate ability for language, to see the underlying structure of a string of symbols and to extract a meaning from it, and, going further, to relate that meaning to our external world and to our own experiences as human beings. We have evolved more quickly than all other species thanks to language.



This organization of language levels provides an important principle in functional theories of language (Van Valin and LaPolla 1997, Dik 1997a, b): The more semantically motivated a phenomenon is, the less cross-linguistic variation we find. That is, those lexically and grammatically codified concepts are more widespread because they are based on our common human cognitive equipment. On the other hand, those meanings based on contextual aspects are more language dependent. As a result, the more pragmatically motivated a concept is the more cross-linguistic variation we find. Such non-universal phenomena are extra-clausal. This takes us to the distinction made in functional grammars of linguistic levels. The following representation of language levels, made by Dik (1997), shows how this influence works: (10)















Pragmatics
  Semantics
    Syntax
      Phonology

The more outside the circle we go, the more variety we find across languages; that is, the less universal languages are. For instance, all languages have letters or characters that represent sounds (phonemes), which group together to form words. The system of phonology is quite similar in all languages, and most rules that exist in this system are universal, such as the fact that there are voiced and voiceless sounds, or the fact that there are occlusive or fricative sounds. Phonology is directly linked to our physical abilities, which are similar for all humans, so we can utter the same sounds. Thus, phonology is a universal linguistic phenomenon. However, this does not imply that all languages use all the sounds we are able to utter. Some of them are realized in one language, others in another. For instance, in English we find more sounds than in Spanish: there are sounds that do not exist in Spanish, such as /æ/.

The inner circle shows that phonology deals with the smallest elements of language and that it depends on or is directly influenced by syntax (at the level of the sentence), which is, at the same time, influenced by semantics, which, in turn, is influenced by pragmatics. Pragmatics is the discipline that deals with the meaning of linguistic units in context. That is, it goes beyond language itself and relates it to the world.













However, in order to complete this circle we should include morphology, which would be at the level of the word; thus it should be included between the level of phonology and the level of grammar, the complete circle being as follows:

(11)

Pragmatics
  Semantics
    Syntax = sentence grammar
      Morphology = word grammar
        Phonology




Each subsystem possesses a number of units of analysis, which are studied by the different sub-disciplines of linguistics.

o-o-o-o-o-o-o

Exercise 5

For the following sentence, state which parts would be analysed by each of the linguistic sub-disciplines: It is hot in here.

o-o-o-o-o-o-o

To finish this chapter, the chart below outlines the main objects of analysis for each sub-discipline:

SUBSYSTEM | UNIT OF ANALYSIS

Graphics: systematic representation of language in writing | Graphemes

Phonology: study of the sounds of language (phonetics, phonemics) | Phones, Phonemes

Morphology: studies the arrangement and relationships of the smallest meaningful units in language (inflectional morphology, derivational morphology) | Morphemes (free/bound); Base morphemes/Affixes

Syntax: arrangement of words into phrases, clauses and sentences, as well as word order in language | Morphemes [added by the author, AIM]; Phrases, Clauses, Sentences, Texts

Semantics: meanings expressed by sentences, clauses, morphemes and lexemes in the language; explores the relationship between language and the real world (grammatical, lexical and textual meaning) | Morphemes; Lexemes, Phrases, Clauses, Texts

Illustration 3. Adapted from Conde Silvestre and Sánchez Pérez (1996: 2)



Notice that in the chart above morphemes are the units of analysis of both morphology and semantics, although the view held here is that they should also be units of analysis of syntax, especially as regards inflectional morphemes, which carry grammatical information.

All languages are systems, or, more precisely, series of interrelated systems governed by rules. This makes them "learnable". Languages are highly structured. In what follows, a brief outline of the units of analysis of each of these subsystems is given:

GRAPHICS: refers to the systematic representation of language in writing. A single unit in the system is called a grapheme.

PHONOLOGY: deals with the sounds of a language and the study of these sounds. The study of the sounds of speech taken simply as sounds, and not necessarily as members of a system, is called phonetics. The study of the sounds of a given language as significantly contrastive members of a system is called phonemics, and the members of the system are called phonemes. It is important to notice that phonetics is not the same as phonemics. For example, the p in pan is accompanied by a strong puff of air called aspiration, whereas the p in span has no such strong aspiration. The two kinds of p are different phones, but not different phonemes, because the two varieties of p never contrast: the strong aspiration occurs only when p is at the beginning of the syllable and not when p follows s. The two varieties of p are not used to distinguish two different words. On the other hand, the initial sounds in the words pan and tan serve to distinguish these two words; the p and the t contrast significantly and are classified as separate phonemes.

Let us look at another example: we have a grapheme, p, which is represented phonemically as /p/. This grapheme is pronounced differently in different contexts: pin [pʰin] vs. spin [spin]. Each sound is a different phone, but both are variants of the same phoneme. Each phone is represented with brackets, [pʰ] ~ [p], and the two do not contrast: they are not used to distinguish different words, as phonemes do: /pan/ ~ /ban/.

Prosodic features are also studied by phonology: pitch (¿? vs. ¡!), stress (ˈconduct vs. conˈduct), tempo ("He died happily" vs. "He died, happily"). They can even indicate grammatical meaning.

MORPHOLOGY: the arrangement and relationships of the smallest meaningful units in a language. These minimum units of meaning are

82

Words and word boundaries

called morphemes. In greyish, then, we have the base morpheme grey plus the suffix morpheme –ish. –ish is meaningful because it adds some meaning to grey: it changes it into an adjective, so it means something like ‘the quality of’. We have, then, two types of morphemes: free and bound. Grey is a free morpheme and –ish is a bound morpheme, since grey can appear independently, not as –ish, which cannot. Bound morphemes only form words when attached to other morphemes. They are called affixes, and we have four types: prefixes, suffixes, infixes and circumfixes (these two are less frequent, and are not used in English). Some examples of prefixes are re- (reappear), dis- (disappoint), un(unbelievable), etc.; and of suffixes: -ing, -ful, etc. Affixes are basic units in morphology, and they can be inflectional or derivational. An inflectional affix indicates a grammatical feature such as number or tense. E.g., the –s used to form plurals and the –ed used to indicate past tense. Derivational affixes usually change the category of the word they are attached to, and they may also change its meaning: e.g. the derivational suffix –ive in generative changes the verb generate to an adjective; the suffix –ness in coolness changes the adjective cool to a noun, etc. In joyless, the suffix –less not only changes the noun to an adjective, it also changes the meaning of the resulting word to the opposite of the original meaning. All this is seen in more detail in chapter 3. Morphology is also a cross-linguistic phenomenon, in the sense that all languages have lexical items. What is different is how the lexicon (that is, words) of a language interacts with each other and within each other, since this brings about different language types of language. This system is interrelated to syntax, in the sense that the more inflectional a language is (that is, the richer it is in inflectional morphology) the less need it has to apply syntactic rules. 
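The notions of stem, prefix and suffix just introduced can be sketched computationally. The segmenter below is a naive illustration assuming tiny hand-made affix inventories (the PREFIXES and SUFFIXES lists are invented for the example, not drawn from this book):

```python
# Naive morpheme segmenter: a sketch only, assuming tiny hand-made
# affix inventories (not an exhaustive list of English affixes).
PREFIXES = ["dis", "un", "re"]  # bound morphemes attached before the stem
SUFFIXES = ["ment", "ish", "ness", "less", "ing", "ed", "er"]  # attached after it

def segment(word):
    """Split a word into (prefix, stem, suffix); empty string = no affix."""
    prefix = next((p for p in PREFIXES if word.startswith(p)), "")
    rest = word[len(prefix):]
    suffix = next((s for s in SUFFIXES if rest.endswith(s)), "")
    stem = rest[:len(rest) - len(suffix)] if suffix else rest
    return prefix, stem, suffix

print(segment("greyish"))       # ('', 'grey', 'ish')
print(segment("disagreement"))  # ('dis', 'agree', 'ment')
```

A real morphological analyzer would also check candidate stems against a dictionary: as written, the sketch would happily "find" the prefix re- in red.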
In turn, a language with few morphological processes tends to give more weight to the role of syntax in arranging and constructing meaningful sentences. This is also further described in chapter 3.

SYNTAX: Syntax deals with word order and grammatical functions. Word order is a grammatical device of natural languages, even if some languages, like English, depend on it more heavily than others do. Example: He died happily vs. He happily died. Grammatical functions are related to the functions of the different parts of speech (nouns, adverbs, adjectives, verbs) and to how they can be organized. The syntax of the world's languages usually reflects basic logical semantic underpinnings, such as entities and relations, and their combinations.

SEMANTICS: It is the study of the meanings expressed by a language. It also studies the codification of the relationship between language and the external world.

All these systems interact in highly complex ways within a given language. Changes within one subsystem produce a chain reaction of changes among other systems. For example, in the history of English, the loss of final unstressed syllables drastically affected the morphology of English by eliminating most English endings. This change in the morphology meant that the relationships among words in a sentence could no longer be made clear by inflectional endings alone. Hence, word order, or syntax, became much more crucial in distinguishing meaning, and also much more rigid. At the same time, prepositions became more important in clarifying relationships among the parts of a sentence.

As regards the development of a model of morphology, all language models agree that it is an essential part of language when it comes to describing the components of a grammar, which are two: a rule component, and a lexicon which contains the words and morphemes that appear in the structures generated by those rules. For example, this system is represented as follows in Role and Reference Grammar (Van Valin and LaPolla 1997):

(12)
     GRAMMAR
       RULES --[generate]--> STRUCTURES
       LEXICON (WORDS, MORPHEMES) --[appear in]--> STRUCTURES
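Diagram (12) can be made concrete with a toy sketch (an illustration invented here, not part of RRG itself): a lexicon lists words with their categories, and a single rewrite rule generates the structures in which those lexical items appear.

```python
from itertools import product

# Toy illustration of diagram (12): RULES generate STRUCTURES, and the
# LEXICON supplies the WORDS and MORPHEMES that appear in them.
# Purely illustrative; not an implementation of Role and Reference Grammar.
LEXICON = {
    "the":   "Det",
    "dog":   "N",
    "cat":   "N",
    "barks": "V",
}

RULES = {"S": ["Det", "N", "V"]}  # one rewrite rule: S -> Det N V

def generate(symbol="S"):
    """List every structure whose slots are filled by items of the right category."""
    slots = [[w for w, cat in LEXICON.items() if cat == c] for c in RULES[symbol]]
    return [" ".join(combo) for combo in product(*slots)]

print(generate())  # ['the dog barks', 'the cat barks']
```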


In certain linguistic models, such as Role and Reference Grammar (RRG), it is argued that grammatical structures are stored in constructional templates, which can be combined with other templates and which contain morphosyntactic, semantic and pragmatic properties. These constructional templates draw on a set of syntactic templates that represent all the possible syntactic structures of languages and are stored in the syntactic inventory of each specific language. The model also includes a lexicon, formed by lexical items and stored in the lexical inventory. The English language in particular has five basic syntactic templates, which can combine to form more complex structures (Van Valin and LaPolla 1997, Van Valin 2005). While Generative Grammar has a long tradition of studying morphology and its interactions with phonology (within the field of diachrony) and with syntax (so-called word grammar), there is not yet a well-established and exhaustive model of morphological analysis for functional theories. In this sense, Everett (2002), building on the generativist tradition of word grammar (the Lexicalist Hypothesis and Distributed Morphology), and Martín Arista (2004, 2005, 2007) have recently proposed models of morphology for Role and Reference Grammar. Within the generative framework, which has greatly influenced functional theories in general, as stated in chapter one, and also as regards the proposal of a model of morphology, the discussion about where morphological processes should stand is still ongoing, with some authors defending the Lexicalist Hypothesis and others opposing it by defending the Distributed Morphology model. A discussion of morphology as word grammar and its implications for morphological analysis is carried out in chapter 3.

6.  FURTHER RECOMMENDED READINGS

De Caluwe, Johan and Taeldeman, Johan (2003): "Morphology in Dictionaries". In A Practical Guide to Lexicography, edited by Piet van Sterkenburg. John Benjamins Publishing Company. Pages 114-126.

o-o-o-o-o-o-o


Exercise 6

Take a monolingual English dictionary (such as the Collins Cobuild Concise English Dictionary) and analyze the morphological information given there for one lexical entry of your choice. Outline such information briefly and state why it is important for non-specialized users and which kind of information would be useful for a linguist specialized in lexicology or terminology.

o-o-o-o-o-o-o

7.  SOME LEXICOLOGICAL SOURCES

British National Corpus (BNC): http://www.natcorp.ox.ac.uk/
CELEX (Dutch Center of Lexical Information): http://www.ldc.upenn.edu/Catalog/readme_files/celex.readme.html#databases
Corpus of Contemporary American English (COCA): http://corpus.byu.edu/coca/
Nerthus: a lexicological database of Old English: http://www.nerthusproject.com/publications
Wordnet: http://wordnet.princeton.edu/

8.  REFERENCES

Caluwe, Johan de and Taeldeman, Johan (2003): "Morphology in Dictionaries". In A Practical Guide to Lexicography, edited by Piet van Sterkenburg. John Benjamins Publishing Company. Pages 114-126.
Caramazza, A. and Miozzo, M. 1997. The relation between syntactic and phonological knowledge in lexical access: evidence from the 'tip-of-the-tongue' phenomenon. Cognition 64, 309-343.
Coulson, Seana. 2001. Semantic Leaps: Frame-shifting and Conceptual Blending in Meaning Construction. New York and Cambridge: Cambridge University Press.
Delbecque, Nicole and Maldonado, Ricardo. 2011. Spanish ya: A conceptual pragmatic anchor. Journal of Pragmatics 43, 73-98.
Dik, S. C. 1997a. The Theory of Functional Grammar I. Edited by Kees Hengeveld. Berlin and New York: Mouton de Gruyter.
Dik, S. C. 1997b. The Theory of Functional Grammar II. Edited by Kees Hengeveld. Berlin and New York: Mouton de Gruyter.
Everett, D. 2002. Towards an RRG theory of morphology. Lecture delivered at the 2002 International Conference on Role and Reference Grammar, held at the University of La Rioja.
Fauconnier, Gilles and Turner, Mark. 2002. The Way We Think: Conceptual Blending and the Mind's Hidden Complexities. New York: Perseus Books Group.
Garrett, M. F. 1975. The analysis of sentence production. In: Bower, G. H. (ed.), The Psychology of Learning and Motivation, Vol. 9. New York: Academic Press. 133-177.
Harley, Heidi and Noyer, Rolf. 1999. Distributed Morphology. State-of-the-art article. Glot International 4 (4). Available at: http://babel.ucsc.edu/~hank/mrg.readings/Harley+Noyer1999.pdf
Ibáñez Moreno, A. and González Torres, E. 2005. Qué es el inglés antiguo: La base lexicográfica de un estudio de derivación léxica. Interlingüística 16, 1-8.
Ibáñez Moreno, Ana. 2012. A Functional Approach to Derivational Morphology: The Case of Verbal Suffixes in Old English. Germany: Lambert Publishing.
Jackendoff, Ray. 1992. Semantic Structures. Cambridge, MA: MIT Press.
Kastovsky, Dieter. 1977. Word-formation, or: at the crossroads of morphology, syntax, semantics and the lexicon. Folia Linguistica X (1/2).
Kaszubski. 1999. Discussion on the lemma/lexeme distinction on corpora: http://torvald.aksis.uib.no/corpora/1999-4/0038.html
Kempen, G. and Huijbers, P. 1983. The lexicalization process in sentence production and naming: Indirect election of words. Cognition 14, 185-209.
Kempen, G. and Hoenkamp, E. 1987. An incremental procedural grammar for sentence formulation. Cognitive Science 11, 201-258.
Lakoff, George. 1987. Women, fire, and dangerous things: what categories reveal about the mind. Chicago: Chicago University Press.
Lakoff, George and Turner, Mark. 1989. More than cool reason: A field guide to poetic metaphor. Chicago: Chicago University Press.
Levelt, W. J. M. 1989. Speaking: From Intention to Articulation. Cambridge, MA: MIT Press.
Marchand, H. 1966. The categories and types of present-day English word-formation: a synchronic-diachronic approach. Munich: C. H. Beck'sche Verlagsbuchhandlung.
Martín Arista, F. J. 2007. Morphological constructions and the functional definition of lexical categories. Paper delivered at the 2007 Conference of the Societas Linguistica Europaea, held at the University of Joensuu.
Miozzo, M. and Caramazza, A. 1997. On knowing the auxiliary of a verb that cannot be named: evidence for the independence of grammatical and phonological aspects of lexical knowledge. Journal of Cognitive Neuropsychology 9, 160-166.
Ogden, C. K. and Richards, I. A. 1923. The Meaning of Meaning: A Study of the Influence of Language upon Thought and of the Science of Symbolism. Magdalene College: University of Cambridge.
Prandi, M. 2004. The Building Blocks of Meaning. Amsterdam/Philadelphia: John Benjamins.
Roelofs, A. 1997. The WEAVER model of word-form encoding in speech production. Cognition 64, 249-284.
Roelofs, A. et al. 1998. A case for the lemma/lexeme distinction in models of speaking: comment on Caramazza and Miozzo. Cognition 69, 219-230.
Saussure, Ferdinand de. 1916. Course in General Linguistics. 1959 edition, edited by Perry Meisel and Haun Saussy. New York: Columbia University Press.
Selkirk, E. 1982. The Syntax of Words. Cambridge, MA: MIT Press.
Stekauer, Pavol. 2000. English Word Formation: A History of Research (1960-1995). Tübingen: Gunter Narr Verlag.
Van Valin, R. D. Jr. and Wilkins, D. 1993. Predicting syntactic structure from semantic representations: Remember in English and its equivalents in Mparntwe Arrernte. In R. D. Van Valin Jr. (ed.), Advances in Role and Reference Grammar. Amsterdam/Philadelphia: John Benjamins. 499-534.
Van Valin, R. D. Jr. and LaPolla, R. 1997. Syntax: Structure, Meaning and Function. Cambridge: Cambridge University Press.
Van Valin, R. D. Jr. 2005. Exploring the Syntax-Semantics Interface. Cambridge: Cambridge University Press.
Williams, E. 2007. Dumping lexicalism. In Gillian Ramchand and Charles Reiss (eds.), The Oxford Handbook of Linguistic Interfaces. Oxford: Oxford University Press. 353-382.

9.  KEYS TO EXERCISES

Exercise 1

Suggested answer:

According to these criteria, the verb saw must be derived from the noun saw: 'any of various hand tools for cutting wood, metal, etcetera, having a blade with teeth along one edge' (Collins Cobuild), since in order to analyze the meaning of the verb saw we have to make use of the noun: 'to cut with a saw' (WordReference English-Spanish Dictionary). This means that the verb abides by the principle of semantic dependency; besides that, this verb is more specific semantically than the noun: its semantic scope is narrower. The verb refers to a specific way of cutting, while the noun is defined as 'a cutting instrument'. There are many types of saws, but the action of sawing is the same in all cases. This obeys the principle of semantic range. Other examples of zero derivation could be: paint (verb), bus (verb: 'I bus the children to school').

Exercise 2

Suggested answer:

Derivative chain: law (n) > outlaw (n) > outlaw (vb)

Both of them are derived from joining the adverb out and the noun or the verb law. The Principle of Phonetic Shape does not apply here, since both compounds have the same phonetic shape. The Principle of Morphological Type tells us that these two words are compounds: there are no affixes that indicate that they are derived, so they must be compounds. This principle tells us that most compounds come from nouns, in this case law. Finally, the stress criterion suggests that óutlàw as a noun derives into òutláw as a verb, given the position of the primary stress in each of them. As Stekauer suggests, compound verbs with locative particles have the basic stress pattern 'middle stress/heavy stress', as is the case here.

Exercise 3

Suggested answer:

An example is the verb drive: *driver, driving, drove, drives...

Lexeme: DRIVE [Inflected forms: drives, drove, driving, driven]

Inflectional features: it is an irregular verb, so we have a different form for the past tense than for the past participle, and instead of adding -ed to the lemma the verb goes through another phonetic process, historically based on the strong/weak verb conjugation system of Old English verbs.

Lemma: drive

Not part of the lexeme: driver. Driver is a noun derived from drive; it is not an inflectional form. The suffix -er is a derivational suffix, not an inflectional suffix, which means that the lemma drive does not only change syntactically but also lexically and semantically when this suffix is added to it (from being a verb to being a noun).

Exercise 4

Suggested answer:

We look up the word studies in Wordnet and in the BNC. As for Wordnet: it is a very useful database for obtaining lexical and semantic relations among words. In the search menu a list of options is provided through a drop-down menu, so that we can adjust our search for information about a given lexical item, as shown below:

Figure 1.  Search options in Wordnet


In order to know what kind of information is given by each option, we select "Select all" and we obtain, when we look up studies, the results below:



















Figure 2.  Information displayed for the word study


Figure 3.  Further information displayed for the word study



As we can see, the database directly displays the lemma/lexeme of the lexical item in question, study. Note also that, on the upper part of the screen, the following indications are given in order to interpret such information:

(13) Key: "S:" = Show Synset (semantic) relations, "W:" = Show Word (lexical) relations
     Display options for sense: (frequency) {offset} [lexical file number] (gloss) "an example sentence"
     Display options for word: word#sense number (sense key)

The only drawback found is that there is no W key within the results displayed, whereas there is an S key that, when clicked, provides useful semantic information about the item, as can be seen above in the first entry for study, as well as below:

Figure 4.  Results obtained after clicking the key S

We can conclude that this database is very useful, not only to identify the lemma of any word, but also to obtain all its possible meanings, as well as its lexical categories.

As for the COCA corpus (you have to register first!): first of all, we select the option KWIC, on the left-hand side panel of the screen:





















Figure 5.  Selecting a KWIC search in the COCA corpus

This allows for a Key Word In Context (KWIC) search, which means that the information displayed will be as follows:

Figure 6.  Results displayed in the COCA corpus for a KWIC search of studies

The colors are used to highlight the frequency with which this word appears next to other words, and these words are annotated (they show lexical information, such as their lexical category). We can also search for the word in terms of a frequency list, and we obtain the following result:

Figure 7.  The word studies as displayed in COCA in the list-search mode
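Both query modes just illustrated (a KWIC display like Figure 6 and a frequency list like Figure 7) reduce to a few lines of code over a tokenized text. The sketch below is a generic illustration running on a toy sentence; it is not the actual COCA query engine.

```python
from collections import Counter

# Generic sketches of the two corpus query modes shown above;
# toy data only, not the actual COCA backend.
def kwic(text, keyword, window=3):
    """Return one 'Key Word In Context' line per hit, window words each side."""
    words = text.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            lines.append(f"{left} [{w}] {right}")
    return lines

text = ("Recent studies show that corpus studies "
        "complement dictionary studies of meaning")

for line in kwic(text, "studies"):
    print(line)
# recent [studies] show that corpus
# show that corpus [studies] complement dictionary studies
# studies complement dictionary [studies] of meaning

# Frequency-list mode: just count word forms.
freq = Counter(text.lower().split())
print(freq["studies"])  # 3
```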

To conclude, we can state that a corpus is useful to analyze the frequency of use of a given word, the context(s) in which it is used, and the frequency with which it is used as one lexical category or another, but definitely not to obtain exhaustive semantic, morphological and lexical information about a lexical item.

Exercise 5

Suggested answer:

Pragmatics: the Speaker's intention is to get the Addressee to open a window, or to justify why he/she takes off some clothes, for example.

Semantics: The State of Affairs is a situation or state: [[be-hot'](here)].

Syntax: Affirmative sentence, present simple tense of the verb to be, third person singular, impersonal it...

Morphology: Sentence composed of monosyllabic words: the pronoun it, a form of the lexeme BE, the locative preposition in and the adverbial here.


Phonology: the phonetic transcription would be something like [ət əz ˈhɒt ɪn ˈhɪər], with the main stress falling on the word hot. The secondary stress may optionally fall on the adverb.

Exercise 6

Suggested answer:















If you are a specialized user, you had better look your word up in a different dictionary from the one proposed! Check out what it says for the lexical item semantics:

Figure 8.  Collins Cobuild lexical entry for semantics

As can be seen above, there is very basic information. A very lousy definition and the phonetic transcription are given, besides some morphological information, such as the fact that it is a noun and that it is uncountable. The definition is, nonetheless, quite acceptable: it is one provided through a broader term that refers to an even broader term, in this case Linguistics: it is a branch of Linguistics. The hypernym classifies the lexical item into a field. Then, some distinguishing features are added: it is specialized in the meaning of words and sentences. We can say that all this information is basic and useful for the non-specialized user, and that no more information is indeed necessary for that purpose.




1. Introduction
2. What is the scope of morphology? How to break down a word
3. Lexical change and word formation processes
3.1. Introduction
3.2. Word formation processes
3.2.1. Compounding
3.2.2. Inflection
3.2.2.1. Typological classification of languages
3.2.2.2. The index of synthesis of a language
3.2.3. Derivation
3.2.4. Other word-formation phenomena
3.2.5. Transparent and opaque word formation phenomena
4. On word grammar: approaches to the study of morphology
4.1. Introduction
4.2. Paradigmatic and syntagmatic relationships
4.3. Some recent proposals for a model of morphology in grammar
5. Further recommended readings
6. References
7. Keys to exercises

1. INTRODUCTION

In this chapter we will review some basic notions about morphology and its units of analysis. In line with chapter 2, we will see how this interconnection can be applied to linguistic studies of different kinds, particularly as regards classifying languages as more or less productive, or as analytic or synthetic, or when building up lexicological databases and/or corpora. In this sense, a deep analysis of the interconnection between morphology and semantics (known as semantic morphology) is needed to establish the position of morphology in the fields of computational linguistics and computational semantics. Both disciplines are necessary when it comes to building up lexicological and terminological databases. An example of such a database, which deals with Old English, is Nerthus: a lexicological database of Old English.

What is morphology? According to Aronoff and Fudeman (2011), the term morphology is attributed to the German poet, novelist, playwright, and philosopher Johann Wolfgang von Goethe (1749-1832), who coined it in the early 19th century in a biological context. The term comes from the Greek root morph-, which means 'form'. Thus, morphology is the study of forms. In biology it refers to the study of form and structure in living organisms, and in linguistics it is the branch of the discipline that studies the components of words: their internal structure, and how they are formed. This is the reason why some studies within morphology are focused on what has been called word grammar.

2.  WHAT IS THE SCOPE OF MORPHOLOGY? HOW TO BREAK DOWN A WORD

Morphology is the study of the internal structure of words (word grammar) and of words' behavior (word formation phenomena). Words are composed of morphemes, which are defined by Aronoff and Fudeman (2011) as the smallest linguistic units with a grammatical function. Another definition of the morpheme is as a pairing between sound and meaning. In this sense, a morpheme is the shortest combination of articulatory subgestures (sounds) that has a meaning. These two definitions already locate morphology as being interconnected with semantics and with syntax. However, Aronoff and Fudeman (2011) state that this latter definition can lead to confusion, since there are cases in which morphemes do not have a concrete or continuous form, and some of them do not have meaning in the conventional sense of the term. From a certain perspective, a morpheme always carries some kind of meaning, be it grammatical or semantic, and grammatical functions are meaningful by themselves. Thus, it seems that a complete definition should comprise both: a morpheme is the smallest linguistic unit with a grammatical or semantic function.

o-o-o-o-o-o-o

Exercise 1

As with other levels of linguistic analysis, morphology is defined differently depending on the theoretical model used to approach this area of language. Thus, generativists would define morphology as being more related to syntax, while functionalists would tend to identify morphology with functions and conceptual ideas. Try to find definitions that concur more with one school than another. Justify your answers.

o-o-o-o-o-o-o

A morpheme can consist of one word, such as way, car, ball, or a meaningful part of a word, such as the -ed of played, which means that the verb play refers to the past, or that it is being used in the past tense. None of these morphemes can be divided into smaller meaningful parts. Morphemes can be free or bound. Free morphemes are those occurring independently, such as ask or drive. Bound morphemes are those that have to be attached to other morphemes in order to exist. They are also called affixes. For example, in the noun disagreement both dis- and -ment are affixes, attached to the stem agree. Those affixes that appear to the left of the word, that is, before the stem, are called prefixes. Those that appear after the root or the stem are called suffixes. In English, the most common suffixes are given in (1) below (source: http://www.easyenglishvocabulary.com/suffixes.html):


(1) a) English Suffixes, Group 1: "someone who x" (-er, -or, -ant, -ent, -ard, -ian, -ess)
    b) English Suffixes, Group 2: "someone who does x" (-ist)
    c) English Suffixes, Group 3: "action or process" (-ade, -age, -ism, -ment, -ure)
    d) English Suffixes, Group 4: "material" (-ing)
    e) English Suffixes, Group 5: "a place for" (-arium, -orium, -ary, -ory)
    f) English Suffixes, Group 6: "a state, or quality of" (-ance, -ence, -ation, -dom, etc.)

o-o-o-o-o-o-o

Exercise 2

Provide four examples of words ending in any of these suffixes. Give a definition for them according to the meaning of the suffix. Then, look the words up in a monolingual dictionary and compare the definition given to your definition. Are they similar or different? Is the semantic information given by the suffix included there? For some guidance, consult the example provided below.

o-o-o-o-o-o-o

For instance: pianist is someone who "does" piano, or rather, someone who plays the piano. The definition given in www.wordreference.com (obtained from the Collins Concise English Dictionary) is 'a person who plays the piano', as seen below:

Figure 1.  Pianist as seen in www.wordreference.com
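The suffix groups in (1) can be turned into a small gloss-generating lookup table. The sketch below covers only a handful of suffixes (the suffix-to-gloss pairings paraphrase the groups above and are chosen for illustration, not taken from a dictionary):

```python
# Rough gloss generator based on the suffix groups in (1).
# Small illustrative sample only, not the full suffix inventory.
SUFFIX_GLOSS = {
    "ist":  "someone who does",       # group 2
    "er":   "someone who",            # group 1
    "ment": "action or process of",   # group 3
    "dom":  "a state or quality of",  # group 6
}

def gloss(word):
    """Paraphrase a word from its suffix, or return None if no suffix matches."""
    for suffix, meaning in SUFFIX_GLOSS.items():
        if word.endswith(suffix):
            return f"{meaning} {word[:-len(suffix)]}"
    return None

print(gloss("guitarist"))  # someone who does guitar
print(gloss("freedom"))    # a state or quality of free
```

Note that this glosses only the written form: as the pianist example shows, stems may surface as allomorphs (piano becomes pian-), which a simple string split cannot repair.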


Thus, we can see how important morphology is for the codification of meaning when it comes to the building of lexicographical compilations. Likewise, we also see this with prefixes. The most common prefixes of present-day English are given below (source: http://www.englishclub.com/vocabulary/prefixes.htm):

Prefix | Meaning | Examples
a- (also an-) | not, without | atheist, anaemic
a- | to, towards | aside, aback
a- | in the process of, in a particular state | ahunting, aglow
a- | of | anew
a- | completely | abashed
ab- (also abs-) | away, from | abdicate, abstract
ad- (also a-, ac-, af-, ag-, al-, an-, ap-, ar-, as-, at-) | movement to, change into, addition or increase | advance, adulterate, adjunct, ascend, affiliate, affirm, aggravate, alleviate, annotate, apprehend, arrive, assemble, attend
ante- | before, preceding | antecedent, ante-room
anti- (also ant-) | opposing, against, the opposite | anti-aircraft, antibiotic, anticlimax, Antarctic
be- | all over, all around | bespatter, beset
be- | completely | bewitch, bemuse
be- | having, covered with | bejeweled
be- | affect with (added to nouns) | befog
be- | cause to be (added to adjectives) | becalm
com- (also co-, col-, con-, cor-) | with, jointly, completely | combat, codriver, collude, confide, corrode
contra- | against, opposite | contraceptive
counter- | opposition, opposite direction | counter-attack, counteract
de- | down, away | descend, despair, depend, deduct
de- | completely | denude, denigrate
de- | removal, reversal | de-ice, decamp
dia- (also di-) | through, across | diagonal
dis- (also di-) | negation, removal, expulsion | disadvantage, dismount, disbud, disbar
en- (also em-) | put into or on | engulf, enmesh
en- | bring into the condition of | enlighten, embitter
en- | intensification | entangle, enrage
ex- (also e-, ef-) | out | exit, exclude, expand
ex- | upward | exalt, extol
ex- | completely | excruciate, exasperate
ex- | previous | ex-wife
extra- | outside, beyond | extracurricular
hemi- | half | hemisphere
hyper- | beyond, more than, more than normal | hypersonic, hyperactive
hypo- | under | hypodermic, hypothermia
in- (also il-, im-) | not, without | infertile, inappropriate, impossible
in- (also il-, im-, ir-) | in, into, towards, inside | influence, influx, imbibe
infra- | below | infrared, infrastructure
inter- | between, among | interact, interchange
intra- | inside, within | intramural, intravenous
non- | absence, negation | non-smoker, non-alcoholic
ob- (also oc-, of-, op-) | blocking, against, concealing | obstruct, occult, offend, oppose
out- | surpassing, exceeding | outperform
out- | external, away from | outbuilding, outboard
over- | excessively, completely | overconfident, overburdened, overjoyed
over- | upper, outer, over, above | overcoat, overcast
peri- | round, about | perimeter
post- | after in time or order | postpone
pre- | before in time, place, order or importance | pre-adolescent, prelude, precondition
pro- | favouring, in support of | pro-African
pro- | acting for | proconsul
pro- | motion forwards or away | propulsion
pro- | before in time, place or order | prologue
re- | again | repaint, reappraise, reawake
semi- | half, partly | semicircle, semi-conscious
sub- (also suc-, suf-, sug-, sup-, sur-, sus-) | at a lower position | submarine, subsoil
sub- | lower in rank | sub-lieutenant
sub- | nearly, approximately | sub-tropical
syn- (also sym-) | in union, acting together | synchronize, symmetry
trans- | across, beyond | transnational, transatlantic
trans- | into a different state | translate
ultra- | beyond | ultraviolet, ultrasonic
ultra- | extreme | ultramicroscopic
un- | not | unacceptable, unreal, unhappy, unmanned
un- | reversal or cancellation of action or state | unplug, unmask
under- | beneath, below | underarm, undercarriage
under- | lower in rank | undersecretary
under- | not enough | underdeveloped

Figure 2.  Frequent prefixes in English

An interesting discussion arises when dealing with locative particles that can be considered affixes. This is the case of under, for example. If we follow Stekauer (2000), in line with Marchand (1966) and others, under- would not be considered a prefix here, but rather a free morpheme that combines with a base to form a compound, because it can occur alone: under. Locative particles such as under can therefore be considered either free or bound morphemes, depending on the context. The limits between compounding and derivation can sometimes blur, especially in the case of locative particles. To decide whether a word is a compound or a derivate, we can apply two criteria:

1) Whenever the particle can occur independently (i.e., as a free morpheme), we have a case of compounding; when the particle cannot occur alone, we are faced with a case of derivation. In this light, underestimate is a compound, as is overestimate, but unnecessary is a derivate, because un- cannot occur as a word on its own. This criterion works in 90% of the cases.

2) However, a particle such as under or over may lose part of its meaning when bound, or undergo a change in meaning. Under- in underestimate means the same as free-standing under: in terms of logical structure, under takes arguments that it relates to one another. For example: “I place the bag


under the table”. In underestimate, we can say that it is similar to “You estimate me under him”. However, in the case of the verb overtake, over- has already lost its literal locative meaning as in over: overtake does not mean ‘to take someone over something’. Here, we can say that over- is indeed a prefix, not a free morpheme. This distinction and its internal lexical representation are accounted for by Martín Arista (2008a, b) in a study of Old English verbal derivation, which is further dealt with in point 5.

o-o-o-o-o-o-o

Exercise 3

Provide four examples of words beginning with any of these prefixes (you can select them from the table above). Then give a definition according to the meaning of the prefix. Next, look up the words in a monolingual dictionary and compare its definition with your own. Are they similar or different? Is the semantic information given by the prefix included there? Are prefixes as important to the word’s overall meaning as suffixes? Why (not)? For guidance, follow the example given below.

o-o-o-o-o-o-o

Continuing with the verb underestimate, according to the meaning of under- it would be ‘to estimate something/someone under its/their actual value’. Indeed, according to the Collins Concise English Dictionary (at www.wordreference.com), the verb underestimate has two entries: 1. to make too low an estimate of: he underestimated the cost, and 2. to think insufficiently highly of: to underestimate a person. This can be seen below:

Figure 3.  Underestimate in www.wordreference.com


Therefore, prefixes are semantically very important for the meaning of words. We will see in section 5, which deals with word grammar, how meanings in the word are compositionally organized. What is important here is to notice that the further outside the scope of the word we go, the more the meaning of the morpheme embraces the meaning of the whole word. That is to say, the basic meaning of the word is given by its center, which is in fact the nucleus, and the elements to its right and left provide meanings that cover the nucleus: the core. Additionally, there can be further layers encapsulating the meaning of the core, whose affixes modify that core. This implies that, for us, word structure (see illustration 19) has the same organization as the clause, and as language itself (given in example (11) in chapter 2): it is like a series of concentric circles. Here is an example to illustrate this idea:

(2) The derivate unpremeditatedly: [3. un-[2. pre-[1. meditat(e)]2. -ed]3. -ly]

Step 1. Nucleus: meditate: ‘to think about something deeply’ (first entry of the Concise Oxford Dictionary).

Step 2. Core: pre-…-ed. The meaning of these two affixes, the prefix pre- and the suffix -ed, affects and modifies the nucleus, meditate: a deverbal adjective denoting an action that has previously been meditated, that is, planned or considered beforehand.

Now, let us see an example from the Corpus of Contemporary American English (http://corpus2.byu.edu/coca/), known as the COCA corpus:


Figure 4.  First sentence obtained from the KWIC search in the COCA corpus for the word premeditated

Step 3. Periphery: un-…-ly. Here, again, the semantic and grammatical meaning of the affixes (grammatical only in the case of the suffix) exerts an influence on the meaning of the core, premeditated. The negation prefix un- negates the meaning referred to by the core, and the adverbial suffix -ly changes the grammatical class of the core from adjective or participle to adverb, and also its semantics, since -ly indicates manner. As a result, the meaning of unpremeditatedly is something like ‘to do something in such a manner (-ly) that it is without premeditation (un-)’. It is common that the more derived a word is, the less frequently it is used. As a result, words such as unpremeditatedly are not even recorded in all dictionaries, as is the case of the Concise Oxford English Dictionary. However, after a Key Word in Context (KWIC) search in the COCA corpus, we see that the word unpremeditatedly has actually been used, though it is not a common word: there are only two hits (only two texts of the corpus contain the word), as can be seen in the illustration below:


Figure 5.  The word unpremeditatedly in the COCA corpus
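The three-step, concentric derivation of unpremeditatedly described above can be simulated in a few lines of code. This is only an illustrative sketch: the LAYERS table and the silent-e spelling rule are assumptions made for this example, not machinery from the book.

```python
# Illustrative sketch of the concentric-layer derivation of "unpremeditatedly":
# nucleus (meditate) -> core (premeditated) -> periphery (unpremeditatedly).
# The LAYERS table and the final-<e> spelling rule are assumptions.

LAYERS = [
    ("core",      "pre", "ed"),   # layer 2: pre-...-ed wraps the nucleus
    ("periphery", "un",  "ly"),   # layer 3: un-...-ly wraps the core
]

def build(nucleus):
    """Wrap the nucleus in successive affix layers, reporting each stage."""
    form, bracketing = nucleus, nucleus
    for name, prefix, suffix in LAYERS:
        # orthographic adjustment: meditate + -ed -> meditated (drop final <e>)
        stem = form[:-1] if form.endswith("e") and suffix.startswith("e") else form
        form = prefix + stem + suffix
        bracketing = f"[{prefix}-{bracketing}-{suffix}]"
        print(f"{name}: {form}  {bracketing}")
    return form

build("meditate")
```

Running the sketch prints the core premeditated first and then the periphery unpremeditatedly, mirroring Steps 1 to 3.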

Word frequency is interesting because it shows the degree of derivation of a word: as already mentioned, the more frequently a word is used, the more primitive it is. It also shows which affix comes first in the derivative chain. Thus, for unpremeditatedly, what was added first, un- or -ly? The rule says that the prefix comes first, but it may not necessarily be so. If we want to make sure of the derivative process of a word, we can refer to corpora. We can search for the words in the COCA corpus: the more results we obtain for a word, the more frequently used that word is, and thus the more primitive (that is, earlier or older) it will be. As an illustration of this, let us see how many hits meditate, premeditate, meditated, premeditated, premeditatedly, unpremeditated, and unpremeditatedly have in COCA:

(3)
1. Primitive or base word > meditate: 608 results
2. First step derivation through suffixation > meditated: 151 results
3. First step derivation through prefixation > premeditate: 20 results
4. First step derivation through prefixation and suffixation > premeditated: 496 results
5. Second step derivation through prefixation > unpremeditated: 33 results
6. Second step derivation through suffixation > premeditatedly: 9 results
7. Second step derivation through prefixation and suffixation > unpremeditatedly: 2 results


As can be seen in (3), meditate, the base word, has the highest frequency of occurrences, with 608 results. The second most frequent word is premeditated. Thus, the derived word has become more frequent than the base word (also called stem), which is a less common situation. In the second process of derivation, known as recursion or recursivity (when predicates are further derived by means of the attachment of another prefield or postfield predicate), premeditated derives into unpremeditated, with 33 results. It is therefore the most frequent word of all those undergoing recursion. The word premeditatedly gives 9 results, and the rare terminal word or terminal predicate (the last word in the derivational chain) unpremeditatedly has just 2 hits. Summarizing: with the exception of premeditated, the more recursion, the less frequency of use. These results are illustrated in the figure below, which shows all the images of the results obtained:

Figure 6.  Frequency of occurrences shown in COCA corpus for meditate and its derivates
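The frequency comparison in (3) can be reproduced in miniature on any plain text. COCA itself is queried through its web interface, so the short corpus below is an invented stand-in used only to show the counting logic.

```python
# Sketch: counting a base word and its derivates in a plain-text corpus,
# mimicking the COCA frequency comparison in (3). The toy corpus is invented.
import re
from collections import Counter

def frequencies(text, words):
    """Count how often each word in `words` occurs in `text`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return {w: counts[w] for w in words}

corpus = """She meditated at dawn. The attack was premeditated, the judge said,
and nothing about it was unpremeditated. To meditate daily is a habit."""

chain = ["meditate", "meditated", "premeditated", "unpremeditated"]
print(frequencies(corpus, chain))
```

On a corpus of realistic size, the counts for such a chain would fall off with each step of recursion, as the COCA figures in (3) show.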

o-o-o-o-o-o-o


Exercise 4

Choose a derived word with at least two levels of recursion (that is, with two derivational affixes) and find it, together with its base and stem, in the COCA corpus. Check the frequency of occurrence of the word(s) and try to find an explanation for it.

o-o-o-o-o-o-o

There are other, rarer types of affixes (nonexistent in English) called infixes and circumfixes, which according to Aronoff and Fudeman (2011) challenge the classical notion of the morpheme. Infixes are affixes that are not bound to the right or the left of a word, but occur in the middle. When separating a word into morphemes, or for lemmatization, most affixes are separated with a hyphen, but infixes are set off with angle brackets. An example given by McCarthy and Prince (1993: 101-105) is the Tagalog infix -um-, which creates an agent from a verb stem and appears before the first vowel of the word:

(4) Root ‘write’ + -um- → ‘one who wrote’

Circumfixes are affixes that come in two parts: one attached to the left of the base and the other to the right. They are controversial because they can be considered the simultaneous joint attachment of a prefix and a suffix. One example is the Indonesian ke-…-an in the form kebesaran, where besar means ‘big’ and kebesaran means ‘bigness, greatness’ (Beard 1998: 62). According to Aronoff and Fudeman (2011), the existence of circumfixes challenges the traditional notion of the morpheme because it implies discontinuity. In English there are no cases of circumfixes. In Spanish we could consider cases such as engrandecer, from grande, as circumfixation, since neither *engrandar, nor *engrande, nor *grandecer exists. In English, nonetheless, we have seen that there are cases of prefix-…-suffix combination that have become more frequent than the prefixed or suffixed forms on their own, as observed in the word analyzed above, premeditated, which is more frequent than meditated or premeditate.


Another important term in morphology is morph or allomorph, which refers to the specific phonological realization of a morpheme. For example, the plural morpheme -s is realized as [z] after the voiced final of play, and as [s] after the voiceless /t/ of cuts. The realization of one allomorph or the other depends on the voicing of the final phoneme of the stem.

In this regard, stem is also a basic concept. It is the basic unit to which another morphological piece is attached. Aronoff and Fudeman (2011) state that it has to be distinguished from the root, which is like a stem in that it constitutes the core of the word to which other pieces attach, but refers only to morphologically simple units. Let us see an example:

(5) dis-agree-ment: the root is agree, the morphologically simple core, while the stem to which -ment attaches is disagree.

Compounds, in turn, combine two free morphemes: (screw + driver) > screwdriver. A question arises with compounding: if two of the morphemes are free, and thus potentially likely to be the word base, which of them is the head (base) of the word? Prototypically the head of the word is the base that provides the grammatical category for the compound word, that is, the free morpheme located on the right. In this case there is no doubt, because both screw and driver are nouns. Another criterion is the semantic one: the head is the word that provides most of the semantic content related to the definition of the output. Thus, in screwdriver, screw refers to an object (theme), and driver refers to an entity that would play the role of agent if it were animate, and instrument if it were inanimate. Since screwdriver also refers to an instrument, the head of this compound is driver, and screw is the adjunct that modifies the meaning transmitted by driver, indicating what type of tool it is: an instrument used for screws (object, theme). Further examples of compounds are greenhouse, ashtray, and iron lady. An anecdotal case is daisy: it comes from day’s eye, being thus a metaphoric compound which has lost its original form and figurative sense.
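The choice between the plural allomorphs [s] and [z] described at the start of this section can be stated as a one-line voicing rule. A hedged sketch: the set of voiceless finals below is deliberately partial, and the third allomorph that appears after sibilants (as in glasses) is left out, since it is not discussed here.

```python
# Sketch: choosing the plural allomorph [s] or [z] by the voicing of the
# stem-final sound. The VOICELESS set is deliberately partial (an assumption),
# and the extra allomorph after sibilants is ignored.

VOICELESS = {"p", "t", "k", "f", "θ"}

def plural_allomorph(final_sound):
    """Return the realization of plural -s after the given final sound."""
    return "[s]" if final_sound in VOICELESS else "[z]"

print(plural_allomorph("t"))    # cut + -s -> cuts, voiceless /t/
print(plural_allomorph("eɪ"))   # play + -s -> plays, voiced vowel
```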
Inflected and derived words, as opposed to compound words, are composed of one base morpheme (i.e. free) and one or more bound morphemes. In the sections that follow we will deal separately with the two processes of word formation: inflection and derivation.

3.2.2. Inflection

Inflectional morphemes (which, in the case of English, are always suffixes) indicate grammatical features such as number or tense. They are directly linked to the syntactic structure of the clause: if a subject has an added suffix -s, it means that the subject is a plural noun (phrase), so the verb has to agree with this plural form. Likewise, if a verb in English ends in -(e)s, the subject is singular and refers to a third person, so a third person singular form has to be selected. In Spanish we also distinguish between feminine and masculine features in nouns and adjectives by means of the inflectional suffixes -a/-o. In German the inflectional system is yet more complex, as there are three genders expressed through inflection, as well as grammatical cases: accusative, dative, genitive, etc. The inflectional system of a language is very important for language typology, which classifies languages into different types.

Of the two ways that exist to classify languages (the genetic and the typological), the genetic (or genealogical) classification is the one based on the idea of ‘language families’, that is to say, on the assumption that some languages are closely related because they derive from a common ancestor. Italian, Spanish, French, Portuguese, and Catalan are all members of the same family of Romance languages: they all come from Latin. Classical Greek, Sanskrit, Latin, Gothic, and Old Irish all belong to the wider family of Indo-European languages. In the case of Romance languages, there are written records of both Latin and all the languages that have derived from it, so by comparing those written records it is easy to see that there are systematic similarities between those languages and their common ancestor. However, in the case of the second group of languages mentioned, we have no written evidence of their common ancestor; all we can see is the systematic similarities among them. In these cases, linguists reconstruct the common ancestor using the comparative method.
This method consists of comparing the structure and sounds of a set of languages, looking for systematic similarities or differences. The second type of classification, the typological, is based on a comparison of the formal similarities that exist between languages. It is an attempt to group languages according to their structural characteristics, regardless of whether they have a historical relationship. These formal similarities are mainly based on the inflectional system of the language.


3.2.2.1.  Typological classification of languages

Pyles and Algeo (1982) established four kinds of languages, according to their inflectional system, which are given below:

a.  Isolating, analytic, or root languages

An isolating language is one in which words tend to be one syllable long and invariable in form. Each word has one function. A tone language, where the tone can also help distinguish the function of words in the sentence, is usually an isolating language. Words take no inflections or other suffixes. The function of words in a sentence is shown primarily by word order, and also by tone. An example is Chinese:

(8) Ni men ti hua wo pu tu tung
“I don’t entirely understand your language”
Ni  men    ti        hua      wo pu  tu  tung
you plural possessor language I  not all understand

b.  Agglutinating or agglutinative languages

An agglutinative language is one in which words tend to be made up of several syllables, and each syllable expresses one grammatical concept. Typically each word has a base or stem and a number of affixes. The affixes are quite regular: they undergo very little change regardless of what base they are added to. An example of such a language is Turkish:

(9) Babam kardesime bir mektup yazdirdi
“My father had my brother write a letter”
Baba-m kardes-im-e bir mektup yaz-dir-di
father-my brother-my-dative a letter write-cause.to-past

c.  Inflective or fusional languages


Inflective languages are like agglutinative ones in that each word tends to have a number of suffixes. However, in an inflective language the suffixes often show great irregularity, varying their shape according to the word base to which they are added. Also, a single suffix tends to express a number of different grammatical concepts. An example is Latin:

(10) Arma virumque cano
“I sing about weapons and a man”
Arm-a vir-um -que can-o
weapon-neut.acc.pl man-masc.acc.sing and sing-1.sing.pres.ind.

d.  Polysynthetic or incorporating languages

An incorporating language is one in which the verb and the subject or object of a sentence may be included in a single word. What we would think of as the main elements of a sentence are joined in one word and have no independent existence. For example, Eskimo:

(11) Qasuiirsarvigssarsingitluinarnarpuq
“Someone did not at all find a suitable resting place”
Qasu-iir-sar-vig-ssar-si-ngit-luinar-nar-puq
tired-not-cause.to.be-place.for-suitable-find-not-completely-someone-3rd.sing.indic.

3.2.2.2.  The index of synthesis of a language

These four language types are mere abstractions: no real language belongs solely to any one type. For example, the English language makes use of monosyllables like to, for, when, not, must, the, and, or, etc., and it relies on word order to signal grammatical meaning, which is characteristic of isolating languages. However, the existence of paradigms like ox-oxen, show-shows-showing-shown, good-better-best is typical of inflective languages. Also, words like activist, which are built by adding suffixes to a stem one by one (act, active, activist), are characteristic of agglutinative languages. And finally, forms like baby-sitting or horseback-riding are the hallmark of an incorporating language. So, what type does English belong to? It is more realistic to speak of tendencies: to say that a given language is more analytic than synthetic, or that it tends to be highly incorporating or highly agglutinative, for instance. In any case, it can be said that English was originally closer to a synthetic language (the inflective and incorporating types) but has gradually evolved towards analyticity (the root and agglutinative types), so nowadays English is mostly analytic but retains some features of synthetic languages, such as the genitive case or verb inflections. The problem is how to measure these tendencies. Greenberg (1960) provided a measurement tool, which he called “the index of synthesis”:

Take a passage written in the language you wish to analyze; the longer the passage, the more accurate the index will be. Count the number of words in the passage (W). Count the number of morphemes, that is, the smallest meaningful elements, in the passage (M). Divide the number of morphemes by the number of words (M/W). The result will be the index of synthesis.

For example, the Chinese sentence we examined earlier has 8 words and 8 morphemes; its index of synthesis is thus 8/8, or 1. The Turkish sentence: 10/5 = 2. The Eskimo sentence: 1 word, 10 morphemes; index: 10/1 = 10. A low index of synthesis, up to about 1.50, tells us that the language is analytic (Chinese). A higher index of synthesis, somewhere between 1.50 and 2.50, characterizes the language as synthetic; agglutinating and inflective languages are both synthetic. A very high index of synthesis, around 3 or above, identifies the language as polysynthetic or incorporating8.

8  This single index is very useful for comparing the inflectional system of languages, although it has received some criticism, as can be read here: http://balshanut.wordpress.com/2009/01/10/greenbergj-h-“a-quantitative-approach-to-the-morphological-typology-of-language”-international-journal-ofamerican-linguistics-26-1960-178-94/
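Greenberg's procedure is easy to automate once a passage has been segmented by hand. In the sketch below, the hyphen-as-morpheme-boundary convention is our own encoding assumption; the segmentation itself must still be supplied by a linguist.

```python
# Greenberg's index of synthesis: morphemes per word (M/W) for a passage in
# which morpheme boundaries have been marked by hand with hyphens.
# The hyphen convention is an encoding choice made for this sketch.

def index_of_synthesis(segmented):
    """Return M/W for a passage with '-' separating morphemes inside words."""
    words = segmented.split()
    morphemes = sum(len(word.split("-")) for word in words)
    return morphemes / len(words)

# The Turkish example (9): 5 words, 10 morphemes, so the index is 10/5 = 2.0
turkish = "Baba-m kardes-im-e bir mektup yaz-dir-di"
print(index_of_synthesis(turkish))  # 2.0
```

The Chinese sentence in (8), with one morpheme per word, yields an index of exactly 1, and the one-word Eskimo sentence in (11) yields 10, matching the hand calculations above.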


Exercise 5

Take the Latin sentence used to illustrate an inflective language (Arma virumque cano) and compute the index of synthesis for it.

o-o-o-o-o-o-o

Exercise 6

Write a paragraph in English and then the same paragraph in Spanish or in another language you know. Apply this index and compare the result for each language. Identify which language has a richer inflectional morphology and what the implications of this are for the other levels of language.

o-o-o-o-o-o-o

Exercise 7

Discussion: English is in many ways the odd one out among the modern Germanic languages. As the last exercise suggests, it has arguably undergone more dramatic changes in grammar than its relatives, and there is no doubt at all that it has experienced vastly greater changes in its vocabulary than any other Germanic language. Can you think of any reason why English should have changed more rapidly than its relatives?

o-o-o-o-o-o-o

3.2.3. Derivation

We have already seen how it works: derivational suffixes change the category of the word they are attached to, and they may also result in a change in meaning. Ex.: -ive > generative; -less > joyless, colorless.

Types of derivation

Derivation occurs in different ways: we add affixes (addition through prefixation, suffixation or infixation), we remove a morpheme (reduction or subtraction), we change the vocalic structure of a syllable (mutation), or we add nothing at all (conversion or zero derivation). More than one of these processes can take place at the same time, such as mutation and addition. Examples are given below:


(12) a. English repetition > repeat + -tion (mutation and suffixation)
b. Spanish previsión: ver > visión (derived from visus, the past participle of Latin videre) + pre- (mutation and prefixation)

As prefixes are on the left, they do not modify the word class of a word: prefixes only affect the semantic meaning of the word, not the grammatical meaning, as suffixes can do. Derivational suffixes, by contrast, provide semantic meaning to the lexeme but also add grammatical meaning (they provide the information about the category of the word). Obviously, if the suffix derives a word of the same grammatical category as the base, the grammatical category stays the same. Thus, we can say that derivational suffixes can be classified into different types according to the category or word class they attach to and the grammatical word class they give way to. Let us see some examples:

(13) Prefix: view (noun) > (pre)view (noun)
Noun suffix: view (noun) > view(er) (noun) [-er = nominal suffix]
Adverbial suffix: real (adjective) > real(ly) (adverb) [-ly = adverbial suffix]

It is important to note that derivational suffixes can have different grammatical functions (and therefore meanings). Thus, what looks like a single morpheme can in fact be two morphemes. An example is -ly: if we attach -ly to an adjective the resulting word is an adverb (quick > quickly), but if we attach -ly to a noun the result is an adjective (love > lovely). Thus, there are two suffixes -ly, one that produces adverbs and another that produces adjectives.
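The point that -ly is really two suffixes can be made explicit by keying derivation rules on the pair (suffix, input category). The tiny rule table below is a toy assumption covering only the examples just given.

```python
# Sketch: derivational suffixes as mappings from (suffix, input category) to
# an output category. The rule table covers only the examples in the text
# and is an assumption of this illustration.

SUFFIX_RULES = {
    ("-ly", "adjective"): "adverb",     # quick -> quickly
    ("-ly", "noun"):      "adjective",  # love -> lovely
    ("-er", "verb"):      "noun",       # view -> viewer
}

def output_category(suffix, base_category):
    """Look up what word class the suffix derives from the given base class."""
    return SUFFIX_RULES[(suffix, base_category)]

print(output_category("-ly", "adjective"))  # adverb
print(output_category("-ly", "noun"))       # adjective
```

Because the key is the pair and not the suffix alone, the two homophonous -ly morphemes fall out as two distinct rules.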

3.2.4.  Other word-formation phenomena

Mutation: According to Kreidler (2003), the words proud and pride are both semantically and formally related, but it is impossible to say that one is formed because something was added to the other. Rather, derivation is accomplished here by a change of vowel; in other word pairs the change may be in consonants, as in believe and belief; or in both vowel and consonant, as with choose and choice; or by a change of stress, as in the verbs extráct, insúlt, progréss, in contrast to the nouns éxtract, ínsult, and prógress.

Conversion, zero change or zero-derivation: This is a simple change of a word from one class to a word of another class with no formal alteration. Marchand (1969: 359) defines it as follows:

By derivation by a zero-morpheme I understand the use of a word as a determinant in a syntagma whose determinatum is not expressed in phonic form but understood to be present in content, thanks to an association with other syntagmas where the element of content has its counterpart on the plane of phonic expression.

For him, it is similar to transposition, in line with Bally (1965: 116), who speaks of “transposition fonctionnelle ou hypostatique” (‘functional or hypostatic transposition’). Another term for this phenomenon is functional change. With regard to the productivity of zero-derivation, Marchand (1969: 364) notes that English is more productive in derivation than other languages such as Latin, French, Spanish and German, especially in denominal verbs. Along the same lines, Kastovsky (1968) states that the norm in English is zero-derivation. Ibáñez Moreno (2012) shows that Old English (OE) was also highly productive in transparent word-formation phenomena. Examples of zero-derivation in English are clean, dry and equal, which are adjectives and also verbs; the relation of the adjective clean to the verb clean is the same as that of the adjective long to the verb lengthen. Fan, grasp and hammer are verbs and also nouns; capital, initial and periodical are nouns and adjectives.

Abstraction or reduction: According to Kreidler (2003), reduction is a word-formation process by which new lexemes are formed by removing parts of certain lexemes. One kind of modern shortening is the acronym; another is known as clipping. An acronym is a word derived from the written form of a construction; a construction is a sequence of words that together have a meaning. Some acronyms are pronounced as a sequence of letters: UK for United Kingdom, USA for United States of America. In other acronyms the letters combine to produce something pronounceable: AIDS for Acquired Immune Deficiency Syndrome, UNESCO for United Nations Educational, Scientific and Cultural Organization, and so on. The acronym is typically, but not always, formed from the first letter of each written


word. The acronym may be formed from parts of a single word: ID for identification, TB for tuberculosis, TV for television; or it may include more than initial letters: Nabisco (National Biscuit Company), Sunoco (Sun Oil Company). With a few exceptions, acronyms are essentially names (Kreidler 1979).

As for clipping, it is the use of a part of a word to stand for the whole word. Laboratory is abbreviated to lab, telephone to phone, and refrigerator to fridge. Sometimes a vowel is added when other material is cut away, as in Chevy for Chevrolet, divvy for dividend, ammo for ammunition. In these examples and many others we only see new, shorter ways of designating what was previously designated by a longer term. However, clipped forms sometimes come to have meanings that are distinct from their original sources. The part of speech may change (a kind of conversion): divvy, just cited, is used as a verb whereas dividend is a noun. Without a change in part of speech, the clipped form may have a connotation different from the source word: hankie, undies, and nightie are ‘cuter’ than handkerchief, underwear, and nightgown, respectively. Another modern form of reduction is blending, as in brunch (breakfast + lunch), workaholic (work + alcoholic), and, very recently, jeggings (jeans + leggings).

In other stages of the English language the phenomena of word reduction took place at the level of phonology (i-mutation, umlaut). There are even cases of internal sound loss, within the stem. This loss may either be related to vowel length, that is, a long vowel becomes short, as in styfician and stifician (from sti:f), or to the change of a diphthong to a short vowel, as in elcian and ylcian (from ieldan), where there is also consonantal loss of /d/. In both cases we have a reduction in vowel quantity. In the latter, this may be explained through i-mutation, since there is a raising of /ie/ to /e/ or /y/.

Another base undergoing mutation is sty:fecian, whose source predicate is sti:f. In this case, /i:/ is fronted due to the presence of the suffix -cian, with the alveopalatal affricate /č/ and the alveopalatal semivowel /j/. This same process takes place in fiðercian, deriving from feðer, where /e/ is raised to /i/. Another type of vowel reduction was the contraction of diphthongs to vowels, as happens in the variant form of tealtrian: taltrian. In PDE we also have many examples, such as move > movement, where the vowel /u:/ becomes short /u/.
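Two of the modern reduction processes above, clipping and blending, can be mimicked mechanically. In this sketch the cut points are supplied by hand per example, since, as the discussion makes clear, reduction is not rule-governed.

```python
# Sketch of two reduction processes: clipping (keep an initial fragment of a
# word) and blending (front of one word + back of another). The cut points
# are chosen per example; they are an assumption, not a productive rule.

def clip(word, keep):
    """Clipping: keep the first `keep` letters (laboratory -> lab)."""
    return word[:keep]

def blend(first, second, head, tail):
    """Blending: first `head` letters of one word + last `tail` of another."""
    return first[:head] + second[-tail:]

print(clip("laboratory", 3))              # lab
print(blend("breakfast", "lunch", 2, 4))  # brunch
```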


3.2.5.  Transparent and opaque word formation phenomena When one word is formed by adding a suffix to another, such as paint-er, or by subtracting one or more elements from another, such as in the case of prep from preparation, the direction of derivation is clear. But the direction is unclear when the process is mutation or conversion. These are instances of opaque word formation phenomena. In these cases there may be a problem in deciding what is derived from what: which comes first, the noun hammer or the verb to hammer, the noun kiss or the verb to kiss? We look for ways to decide such questions consistently. As we have seen in chapter 2, one general principle that Kreidler (2003) also supports is to go from concrete (more primitive) to more abstract meanings (more derived). The noun hammer denotes a concrete object, the verb hammer means the use of that object; we can say, then, that the verb is derived from the noun. On the other hand, the verb kiss names a physical, observable action, while the noun kiss is the result of the action; we say that the noun is derived from the verb. In numerous instances, however, we may have to be arbitrary in saying which the primary or basic word is and which is the derived word. Other principles were given in chapter 2. Opaque phenomena are zero-derivation phenomena, and transparent phenomena are the rest. OE was very rich in word formation processes of all kinds, both transparent and opaque. Because of this, it has been identified by Kastovsky (1992: 294) as one of the “large morphologically related word-families”. This is also a feature of Indo European and of present-day German, but not of present day English. The different character of Old English and present day English has been noted by Lass (1994), who identifies the former as a variable stem or stem-based system, and the latter as an invariable-base or word-based morphology. 
The shift from one system to the other constitutes a point of inflection in English morphology, as we saw when dealing with the word-formation processes above.

Lexical change takes place when new words appear, or when words that existed before are no longer used. We need the loss or gain of words to speak of lexical change. Notice that these changes are different from semantic changes, in that in semantic changes the same word undergoes a change in its meaning. Lexical changes are more idiosyncratic (that is, they are more tied to culture), so here we will find less regularity.

o-o-o-o-o-o-o


Exercise 8

Give examples in English of at least three of the word formation phenomena we have just seen.

o-o-o-o-o-o-o

4. ON WORD GRAMMAR: APPROACHES TO THE STUDY OF MORPHOLOGY (ADAPTED FROM GEERT BOOIJ 2005)

4.1. Introduction

Morphology deals with the internal structure of words: with their form and meaning. We speak of word grammar, as opposed to the notion of sentence grammar. That is, it is a set of correspondence rules between forms and meanings of words, and thus word families are sets of related words (remember the degree of synthesis of languages). The notion of “systematic” is essential for the description of morphological processes. The morphological system has the following sources:

a. Complex words (word formation)
b. Borrowing
c. Phrases becoming words (univerbation)
d. Word creation (word manufacturing): different from word formation in that in the latter the meaning of the new word is recoverable from that of its constituents; word creation is typically an intentional form of language use (Booij 2005: 22)

4.2.  Paradigmatic and syntagmatic relationships

A paradigm is a set of linguistic elements with a common property. From the paradigmatic perspective, morphology is seen as the systematic study of form-meaning correspondences between the words of a language. From the syntagmatic perspective, morphology is seen as the study of the internal constituent structure of words. An example of a syntagmatic relationship is the relationship that holds between the and book in the book. An example of a paradigmatic relationship is that between a and the when added to book: we can say either the book or a book, but not *the a book.


In this light, there are two main approaches to the study of word grammar, and they determine the types of theoretical models proposed, which in turn determine the type of study that will be carried out on words.

The first approach is the syntagmatic approach to morphology. The main features of this approach are:

1. Morpheme-based morphology: the focus is on the analysis of words into their constituent morphemes. Thus, morphology is the syntax of morphemes, i.e. the set of principles to combine morphemes into words.
2. Affixes are represented as bound lexical morphemes, with their own lexical entry.
3. Morphological use of the notion of head (see illustration in Booij 2005: 53).
4. Morphemes: minimal linguistic units with a lexical or grammatical meaning. For example, eater in (14):

(14) eat: free or lexical morpheme of the category verb; -er: bound morpheme of the category noun, subcategorized as an affix: [[eat]v [er]n-aff]n > concatenation (combination of morphemes into a linear sequence)

The challenge to this approach is non-concatenative morphology. This refers to the existence of morphological operations that do not consist of the concatenation of morphemes. An example is the past tense of irregular verbs in English, which is not formed by adding -ed to the base of the verb, but through alternations. This pattern can only be expressed in paradigmatic terms, as a difference in form correlating with the semantic distinction between past and present.

The second approach is the paradigmatic approach to morphology. The justification for this approach is the existence of paradigmatic cases of word formation, where by replacing one constituent with another a new word is formed. For instance, the feminine counterpart of policeman is policewoman (analogy). The main features of this approach are:

a)  Lexeme-based morphology: lexemes are the starting point of morphological processes.


b)  The creation of new words is primarily the extension of a systematic pattern of form-meaning relationships to new cases. Once we discover the abstract systematic pattern behind the words, we can extend the pattern to others. For example: from eater to swimmer (see Booij 2005: 10). That is, words and the relationships between them are the point of departure, and morphemes have a secondary status in that they figure as units of morphological analysis.

c)  Bound morphemes like -er do not have lexical entries of their own, but exist as part of complex words and of abstract morphological patterns such as [x]v: [x-er]n “one who Vs”, as in swim: swimm-er. Patterns are useful for predicting correct combinations of affixes and stems.

d)  Language acquisition: this perspective on complex words is the starting point of morphological analysis. In mother tongue acquisition, one has to discover the existence of morphological patterns.

e)  Morphemes: a pattern is a morphological rule for the attachment of bound morphemes to words (paradigmatic relationships can be projected onto the syntagmatic axis of language structure). For example, the pattern in (14) above would be as follows: morphological rule: [x]v → [[x]v er]n “one who Vs”. Morphological rules must be lexeme-based because this allows poly-morphemic lexemes to function as the bases of word formation, since such lexemes have idiosyncratic properties that will be inherited by their derived words.

f)  The assumption of this affix-specific rule implies that bound morphemes do not exist as lexical items on their own, but as part of morphological rules.

g)  We can also express these patterns in the form of templates: [[x]v er]n “one who Vs”, which are abstract schemes that specify the common properties of a set of words and are also used as a ‘recipe’ to create new words.

h)  Semantics: the meaning of a complex word is a compositional function of the meaning of the base word and its morphological structure.
The meaning of the affix may be vague (Booij 2005: 57), mainly formal (syntactic and phonological). The category determining the nature of an affix is expressed in the template. Templates state the predictable properties of established complex words; that is, they reflect the way in which language users acquire the morphological system of a language.

Both approaches may lead to the same analysis of word structure. The main difference is that in the lexeme-based approach the bound morpheme -er has no lexical category label of its own, as it is not a lexical entry.
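A pattern such as [[x]v er]n “one who Vs” behaves like a small function from base lexemes to derived words. The sketch below reads the template as a ‘recipe’ in that sense; the function name and the deliberately naive consonant-doubling spelling rule are our own illustrative assumptions, not part of any of the models discussed:

```python
def er_template(verb: str) -> tuple[str, str]:
    """Toy reading of the pattern [[x]V er]N 'one who Vs'.

    Returns the derived noun and its compositional gloss. The spelling
    rule (double a final consonant after a single vowel letter, as in
    swim -> swimmer) is a deliberate simplification.
    """
    vowels = "aeiou"
    stem = verb
    if (len(verb) >= 3 and verb[-1] not in vowels
            and verb[-2] in vowels and verb[-3] not in vowels):
        stem += verb[-1]                 # swim -> swimm-
    return stem + "er", f"one who {verb}s"

print(er_template("swim"))  # ('swimmer', 'one who swims')
print(er_template("eat"))   # ('eater', 'one who eats')
```

Note that the rule is lexeme-based in the sense of features (c) and (f) above: its input is a whole verb lexeme, and -er exists only inside the rule, not as an entry of its own.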


4.3.  Some recent proposals for a model of morphology in grammar

Note that in this section we will be using examples obtained from Old English for one main reason: OE was very rich in word formation phenomena. It therefore constitutes a very good source of samples for the analysis of morphology.

The theoretical assets for establishing a model of morphology within a descriptive and explanatory model of language are not completely settled. Here we will review the proposals made by different schools of linguistics (Generativists and Functionalists). Although we advocate a functional theory of morphology, the reader is free to follow this view or to stand by the generativist principles of the lexicalist hypothesis or of distributed morphology. Nonetheless, given the practical orientation of this book, a basic view of the state of the art is enough for our purposes here, and it is indeed necessary in order to understand the philosophy of language that lies behind any lexical compilation. We have adopted the functional framework, more specifically that of Role and Reference Grammar (RRG, Van Valin and LaPolla 1997, Van Valin 2005), and recent proposals made by Everett (2002), Martín Arista (2008a, 2009, 2011) and Ibáñez Moreno (2012).

The general aim of this section is to account for the field of morphology, which needs deeper analysis within the functional models of grammar. With regard to description, there is not yet a complete work that accounts for all the predicates and processes implied as a whole. With regard to explanation, up until now most explanatory works have been generativist. Within the functional framework there is some recent work by followers of the Lexical-Constructional Model (Cortés Rodríguez 2006a, b) that attempts to analyze Old English derivation.
However, it focuses on prefixes or consists of theoretical proposals for the analysis of words, paying attention to lexical decomposition and to the semantic logical structure of predicates. The extensive research on the matter carried out by Martín Arista (2009, 2010, 2011) and his team (Nerthus Project: www.nerthusproject.com) is among the most important nowadays, and the data handled in this section, as well as its theoretical assets, stem from this project.


The idea of word grammar was first introduced to talk about morphological processes within the generativist school, under the influence of American Structuralism. This is remarked by Williams (2007), who states that the idea has been around for years: “I have already said that there is no argument over the point of whether words have syntactic structure or not, in the general sense. Everybody says so, and has said so for 25 years”. Within this school, however, there are two different positions: the lexicalist hypothesis and distributed morphology.

Distributed Morphology is a model within the generative tradition opposed to the lexicalist hypothesis in some theoretical aspects dealing with morphology. For instance, it holds that every morpheme occupies a slot in the functional structure of the clause (a morpheme-based approach), whereas the lexicalist hypothesis holds that complex derived forms are inserted into the syntactic structure (a lexeme-based approach) and that morphology does not reduce to morphosyntax. What separates them from one another is the answer to the following question: do words and phrases have the same syntactic structure?

Within distributed morphology, Harley and Noyer (1999: 3) have this idea in mind: “elements within syntax and within morphology enter into the same types of constituent structures (such as can be diagrammed through binary branching trees)”. The structure of a sentence is a structure that has morphemes, instead of words, as its terminals. In distributed morphology the only unit of “insertion” is thus the morpheme. Idiomatic or non-compositional meaning is consequently handled in a different way.
As Marantz (1997, section 1) says, “The Encyclopedia lists the special meanings of particular roots, relative to the syntactic context of the roots, within local domains”, and Harley and Noyer (1999: 4) give the following example: “the encyclopedia entry for kick, for example, will specify that in the environment of the direct object ‘the bucket’ ‘kick’ may be interpreted as ‘die’”.

On the other hand we have the lexicalist hypothesis, with advocates such as Williams (2007), for whom both words and phrases have syntax, which is not the same as clause syntax; that is, they have parts, and there are rules and principles for putting the parts together and for determining the properties of the resulting constructs. To narrowly use the term “syntax” as the name of the rules of phrasal composition may be useful in some contexts, such as when the discussion is about phrasal syntax; but it is simply a source of confusion to use it that way in a discussion of the lexical/phrasal interface. The example given by Williams (2007: 17), How completeness?, justifies the fact that syntax cannot look into morphology and vice versa: he says that we can know that for how we need an adjective, not a noun, so it is the external property of the word, the category, which gives us this information. That is, outer morphological properties of words are related to syntax, and inner properties are related to morphology. This has an important implication: the modular conception of language. It gives way to a principle called the hypothesis of lexical integrity, which reads as follows:

The constituents of a complex word cannot be operated upon by syntactic rules. Put differently: words behave as atoms with respect to syntactic rules, which cannot look inside the word and see its internal morphological structure (Booij 2005: 22)

In our view, this does not mean that the principles that apply to each level are different, but that we are paying attention to different levels of language, to pieces of language that pertain to different levels. The fact that the same principles of clause syntax can be applied to word syntax is one thing, and the fact that we try to see the underlying mechanisms of words when we are at the level of the clause/phrase is another. In functional grammars, however, the same principles are applied: they use the same system of analysis and representation, by which word syntax works in the same way as phrase/clause syntax. This does not mean that, when we are paying attention to the properties of a phrase such as how completeness, the inner properties of the adjective cannot play a role in the phrase. The fact that the outer property of the word, that is, its category, is important for the clause shows that there is interaction. This communication and interaction among levels is also acknowledged in the lexicalist hypothesis, but the rest of its ideas are rejected.

In Ibáñez Moreno’s (2012) view, generative theories and functional theories do not seem to agree, but this is just because they focus on different aspects. The lexicalist hypothesis does not pay attention to whether the system of syntactic phrase analysis can be applied to smaller linguistic units; it only focuses on the analysis of inflectional and derivational affixes and especially on the role they play in the clause. Their focus of attention is not, therefore, the same.


In her view, the phrase, the clause and the word work with similar principles (that is why the same metalanguage and mechanisms of analysis can be applied), but they correspond to different levels of language, and thus each level calls for its own specific analysis. We have to apply the same principles at different levels without mixing them, with the exception of those outer properties that act as linkers between levels: in the case of morphology and syntax, gender and category are the morphological properties that are realized syntactically. For example, Spanish el alto is a nominalized adjective, where alto becomes a noun in this specific syntactic environment; it undergoes recategorization by means of conversion. Going to a “lower” (or more central) level, in the case of phonology and morphology, ablaut and other phonological processes are realized in morphology through vowel alternations that give way to different allomorphs: contar > cuento. Thus, there is a clear interaction of all levels of grammar. This is not rejected by the lexicalist hypothesis. Also, the idea of word syntax is originally generative, as taken from American Structuralism.

The difference between functional models such as RRG and the Unification and Separation Model (USM, Martín Arista 2008), on the one hand, and generative models such as the lexicalist hypothesis, on the other, is that generative models do not apply the same principles of analysis to the word and to the clause, since they defend that, even though there may be some interaction among them, the different modules of language are independent, with their own internal mechanisms. Furthermore, functional theories are monostratal, while generative theories are multistratal.
There are, nonetheless, some ideas present in the lexicalist hypothesis (Williams 2007) that may contribute to the development of a functional model, such as the notion of scope: in the lexicalist hypothesis, as well as in standard generative grammars in general, this notion is essential for the analysis of the clause. Let us see an example in (15):

(15) John remade the cake for Mary.

In this clause, the scope of the prefix re- is the object the cake. Williams (2007) assigns this scope to the level of the clause. That is, he analyzes how


an affix has a certain scope over the whole clause. For Williams (2007) such affixes may not be derivational when they take part in the functional structure of the clause, as happens with re- in remake. Therefore, in Williams (2007) those prefixes are all analyzed in terms of scope in the structure of the clause. This view of the scope of action of derivational morphology is interesting, since it already shows some kind of interaction between word formation and word grammar, and between word grammar and clause grammar. Thus, such forms may be analyzed as lexically derived phonological words, as Williams (2007) calls them, as opposed to syntactically derived phonological words such as kicked, but paying attention to the way they affect the functional structure of the clause, in terms of scope: rekick. In standard theory, inflectional morphology is that which is relevant to syntax. Nonetheless, derivations such as instruction are also relevant to syntax9.

Within the functional perspective, the Unification and Separation Model (USM) departs from Everett’s (2002) proposal for the development of a morphological theory for RRG, although with some modifications. Let us review Everett’s (2002) ideas and how they are adapted in the USM. In the first place, words are stems plus features, and they are the maximum unit of morphology. This is in line with the transformationalists Halle (1973) and Halle and Marantz (1993), followers of the so-called Distributed Morphology model, who also consider, like Everett (2002), that stems are the base of morphological processes. Secondly, there are no derivations understood as a combination of filters, in the line of Halle (1973), as the establishment of levels, in the line of Siegel (1979), or as strata, in the line of Kiparsky (1982, 1985).
Thus, this conception of derivation for Everett (2002) must not be understood in this light, but as a combination of morphemes into a linear sequence —as a syntagmatic or morpheme-based approach—, in line with Stump (2005) and others, as can be seen in the illustration below:

(16) [[friht]n [ere]n-aff]n

The Old English noun frihtere ‘diviner’ comes from the basic noun friht ‘divination’ as a result of the addition of the bound morpheme -ere, which is a nominal affix. This is a concatenation of morphemes. However, the challenge to this approach is non-concatenative morphology (that is, a paradigmatic or lexeme-based approach), which departs from the existence of morphological operations that do not consist of just an addition of morphemes. For example, most Old English nouns ending in -a come from strong verbs and are agent nominalizations, and at the same time they take the masculine gender. Thus, this suffix is derivative (it builds agent nouns from verbs) and inflectional (it builds masculine nouns). Examples of this are given in (17):

(17) a.  bo:nda ‘householder’ < [[bindan]str.v [a]n-aff + masc-gender]n
     b.  bora < [[beran]str.v [a]n-aff + masc-gender]n

As can be seen in (17), where str.v means ‘strong verb’, masc-gender ‘masculine gender’, and n-aff ‘nominal affix’, the suffix -a is difficult to represent in such a syntagmatic approach to morphology. Everett (2002) does not pay attention to this detail, which means he locates inflection at the periphery. Such alternations contradict this organization of words, so in the USM this morpheme-based approach is adapted to the analysis of words by means of trees and templates. That is, the USM adopts this morpheme-based approach, in the sense of considering bound morphemes as lexical items with their own independent entries, but at the same time it is lexeme-based, in the sense that paradigmatic patterns are established. This means that lexemes are analyzed on the paradigmatic axis as well as on the syntagmatic axis, so we can express patterns in terms of templates that account for all grammatical aspects: semantic, syntactic and phonological.

9  Explanatory note from the author: The scope of a prefix refers to the area of influence of that prefix. For Williams, prefixes are derivational or not depending on whether their scope of influence affects the word/phrase/lexical item they are attached to, or the whole sentence in a functional/syntactic way, as opposed to prefixes that affect the item they are attached to just semantically. Thus, according to him, repaint is ‘to paint a wall twice’, and this means that the action is carried out twice, affecting the whole predication (from being an accomplishment to being an iteration or semelfactive, in this case). The prefix affects the whole clause, that is, the whole action expressed by the verb (to make a cake twice), so these types of prefixes are not derivational, although they are derivational in the traditional sense. What the text says is that what is considered by traditional grammar to be derivational is not so for him. For him, only those prefixes that change the meaning of the item they are attached to are derivational. The other prefixes have a higher scope, the whole clause, and would therefore be, according to him, closer to inflectional morphology.
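The double status of OE -a in (17) —derivational and inflectional at once— can be made concrete with a small data structure. In this sketch (all names are our own hypothetical choices, not the USM’s actual notation), a single suffix entry carries both a category change and a gender feature:

```python
from dataclasses import dataclass

@dataclass
class Suffix:
    form: str
    makes: str    # derivational side: output category
    gender: str   # inflectional side: feature imposed on the output

# Hypothetical entry for OE -a: agent nouns from strong verbs, masculine.
A_SUFFIX = Suffix(form="a", makes="N", gender="masculine")

def agent_nominalization(strong_verb: str, output_form: str) -> dict:
    """Derive an agent noun in the spirit of (17); the output form is
    supplied directly, since stem changes (beran -> bora) are not modelled."""
    return {"form": output_form, "base": strong_verb,
            "category": A_SUFFIX.makes, "gender": A_SUFFIX.gender}

bora = agent_nominalization("beran", "bora")
print(bora["category"], bora["gender"])  # N masculine
```

Because one object carries both kinds of information, a purely concatenative, morpheme-by-morpheme analysis has no natural place for the gender feature, which is exactly the representational difficulty the text points out.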
Another strand of Everett’s (2002) proposal within the functional perspective is that word structure can be manipulated by syntax, semantics and pragmatics. This is also followed in the USM, and fits the RRG organization of grammar. Everett (2002) unifies inflection and derivation on the functional view of language, since both processes add meaning to the word, and separates them on the structural view, since inflection is a combination of stems and features —the addition of arguments to a nucleus— while derivation is a combination of stems —a combination of nuclei—. In the USM, the inflectional and the derivational dimensions of words are described in a continuum by means of the operator and the constituent projections, thus solving the problematic issue of the dividing line between the two dimensions.

Another important part of this proposal is the generalization of the layered structure of the clause to the layered structure of the word. As Martín Arista (2008a) remarks, this is in accordance with the step taken in RRG (Van Valin and LaPolla 1997) of generalizing the layered structure of the clause to the phrase, and it accounts for the interaction of syntax and morphology. In our view, this conception of morphological structure is the main asset and starting point of the theory, and what really constitutes its innovative distinctive feature.

As for Functional Grammar (Dik 1997a, b) and Functional Discourse Grammar (Hengeveld and Mackenzie) —FG-FDG—, what is essential for this school, and what influences other functional schools, is the conception of primitives as entities that in themselves restrict the possible configurations at the representational level. They are lexemes, frames and operators, devices that have later been applied by subsequent models. The interaction of the different subfields of grammar is directly related to the conception of language as a system of communicative social action. One general functional principle is also that syntax is semantically motivated. Morphology has not been analyzed in functional schools as much as pragmatics, syntax or semantics, although it now seems to be getting increasing attention.
With regard to the assets on which functional grammars rest (the aim of concreteness, the interrelation of linguistic levels, and the adaptation of the theory to real language and language use), they are also present in the USM. The communicative-cognitive perspective on linguistic phenomena is common to all these schools, in the sense that language is conceived as a dynamic mechanism of communication and of mental processing. All these theories reach an explanatory level; description is secondary, and they all deal with cognitive issues. However, the cognitive dimension of these models may be further elaborated. This has been applied in the Lexical Constructional Model (a recent version of the Functional-Lexematic Model and the Lexical-Grammatical Model, with authors such as Mairal Usón, Ruiz de Mendoza, Periñán Pascual, and Cortés Rodríguez), with more emphasis given to the lexicon and its decomposition and richness than in the others, but there may still be some work to be carried out on these cognitive grounds.

Among the functional schools presented, RRG constitutes the main point of departure for the development of other models (the LGM and the USM), which are aimed more specifically at concrete levels of the language or at specific areas of grammar. RRG provides us with useful devices, such as the description of a sentence formulated in terms of its logical (semantic) structure and communicative functions, as well as the grammatical procedures available for the expression of these meanings. Thus, one essential feature in RRG is concreteness. For this, RRG is a monostratal theory, that is, it only posits one level of syntactic representation: the actual form of the sentence. This is common in the functional framework. There is a rejection of the existence of independent modules and of complicated interface mechanisms; what we have is interaction through templates, trees and constructions. What the USM and the other functional models do is to present and describe such mechanisms. Also, the use of lexical decomposition is important, although it is further developed in the LCM.

The distinction between arguments and predicates in RRG and between referential and non-referential elements in FG, and the functional-syntactic notions of core, periphery, nucleus, etc., are adopted in the USM. The layered structure of the clause, with its universal and non-universal features, is applied in the USM to the word. The concept of valence refers to the number of arguments a verb takes in FG-FDG and in RRG; in the USM it is related to the number of arguments a word admits. With respect to the LCM, what is important in this model is the unit of analysis, which is the micro discourse: lexemes.
This is also what characterizes the USM. The main difference between them is, nonetheless, that the main interest of the LCM is lexical decomposition, whereas in the USM the main point of attention is the morphological analysis of word structure. It is this model that is used for the development of a lexical database that takes a semasiological approach, Nerthus, while the LCM is the basis for the development of a database that takes an onomasiological approach, FunGramKb.

Concerning differences, each functional school may apply a different metalanguage for the representation of internal structures, and in this case the model selected may depend on the investigation carried out. The USM


uses the same metalanguage as RRG, and the LCM uses COREL, an example of which can be found in chapter 1.

Finally, some remarks need to be made on the similarities and differences between the generative (or transformational) and the functional frameworks. As we have remarked throughout this section, we do not think that the differences are as wide as the representatives of each framework themselves defend. In the USM, Martín Arista (2008a) talks about parts of morphology that are exclusive to it, as Williams (1981, 2007) and Halle and Marantz (1993) do. In fact, Halle and Marantz (1993) are highly concerned with the study of those aspects of morphology that interact with syntax. The differences among them lie mainly in the degree of interaction they admit and, above all, in the descriptive and explanatory model used to account for this interaction. Thus, Williams (2007) focuses on how morphology affects the functional structure (that is, the syntactic structure) of the clause. In this sense, Marantz (1997) argues that every morpheme has a slot in functional structure, so morphology is reduced to morphosyntax. Within this generative framework, Stump (2005) defends, as Martín Arista (2008a) does within the functional one, that inflection and derivation form a continuum that is not always separable, and that inflectional morphology interacts with the phonological and grammatical properties of words.

What is different is that in functional grammars the same principles are applied to the analysis of words and of phrases and clauses, and that the two frameworks focus on different issues. Generative theories focus on the analysis of inflectional and derivational affixes and on the role they play in the clause. Functional grammars (the USM in particular) focus on the interaction of these affixes with the other word constituents and also with the clause. Their attention is not, then, focused on the same topics.
Ibáñez Moreno (2012), in a study of the derivational morphology of Old English verbs, applies the RRG systems of syntactic and semantic representation of the clause to the internal structure of the word. In this sense, all the central concepts of the theory of RRG deal with the clause and its structure, as in Functional Grammar. This author follows Everett’s (2002) and Martín Arista’s (2006, 2007, 2008a, b) proposals for adapting such structures to the analysis of the word, with the starting hypothesis that the internal structure of the word is parallel, and therefore comparable, to the internal structure of the clause.

The USM, Martín Arista’s Model of Unification and Separation, aims to answer the question: where, then, is the place of morphology? This debate is a


central tenet of transformationalist morphology, under the hypothesis of lexical integrity, which is the counterpart of the principle of the autonomy of syntax, a principle that preserves syntax as an untouched system. In early transformationalism (Chomsky 1975) there was no morphology, since syntactic transformations also accounted for units smaller than phrases. Thus, Di Sciullo and Williams (1986) defended that there is no interaction between morphology and syntax. The revised version (Chomsky 1992) still states that there is no interaction between the formation of words and phrases, because the lexicon produces full words that enter the syntactic derivation complete with inflectional and derivational morphology. Zwicky (1986) allows for some degree of interaction, and this position is followed by functional theories.

The USM model, as well as Faber and Mairal (2002) in the LGM (Lexical Grammar Model) and Everett (2002), states that morphology is the product of the interaction between the lexicon (Dik 1997a) —defined as the inventory of predicates and rules that derive non-basic predicates from basic predicates— and categories. Categories are functional labels of two types: lexical categories, which combine with other lexical categories (relations of complementation), and grammatical categories. The major lexical categories are the Noun (referential predicates), the Verb (state or process predicates), and the Adjective (quality or relation properties). These distribute across one another and across other lexical categories to form paradigms. The lexicon is productive (Zwicky 1992: 331) and externally motivated.

In the USM the entries of the lexicon are lexemes of two kinds: free (major lexical categories) and bound (affixes that take part in derivational processes). They are fully specified, and therefore predicates, with the following information: categorial and combinatorial properties, the different forms, the features that motivate such forms, the stems and the phonological representations.
In this model, morphemes have distributional properties, are available for insertion as operators into phrases or clauses, and their final form is stated by constructive templates; derivational affixes are lexemes, in line with Lieber (1992), Dik (1997a: 54, "all lexical items in a language are analyzed as predicates"), Beard (1995) and Mairal and Cortés (2001). With regard to layered structures, templates and constructions in functional morphology, the USM is based on the structural-functional tradition of linguistics (Dik 1997, Van Valin and LaPolla 1997), the layered structure of the clause (Foley and Van Valin 1984, Hengeveld 1989, Rijkhoff 2002), the tradition of word syntax (Marchand 1969, Selkirk 1982, Sproat 1985, Baker


1988, 2003, Lieber 1992, 2004), and functional morphology (Dik 1997, Mairal and Cortés 2000, 2001, Everett 2002, Cortés 2006). It accounts for the combination of three descriptive-explanatory resources: layered structures, templates and constructions. The morphological processes accounted for are compounding, affixation, inflection, zero derivation, and category extension. The semantic domains of the word are, as for the clause in RRG, Nucleus, Core, Word and Complex Word. The outer layers include the inner ones, and each layer has its own operators. Words also have, as in RRG, an operator projection, which accounts for inflection (relational morphology) and for the derivation not accounted for by the constituent projection (non-motivated non-relational morphology). The explanation for this is diachronic (free lexemes become bound lexemes and these in turn become inflectional: grammaticalization) and typological (some meanings are inflectional in certain languages and derivational in others), and the operator projection explains this continuity and stresses the properties of derivative bases. All this is illustrated in (18) below, where operators are realized as grammatical arguments, distinct from the lexical operators that are realized in the constituent projection (Martín Arista 2008a: 128):

(18)  Constituent projection:

      WORD
      └─ CORE
         └─ NUCLEUS
            └─ PREDICATE

      Operator projection: NUC operators attach to the NUCLEUS, CORE operators to the CORE, and WORD operators to the WORD.
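The layered organization in (18) can be made concrete with a small nested structure. This is our own sketch for illustration only, not the USM's formal notation:

```python
# Our own sketch of the layered structure of the word in (18): each
# layer nests the next inner layer and carries its own operators.
word = {
    "layer": "WORD", "operators": [],           # word operators
    "inner": {
        "layer": "CORE", "operators": [],       # core operators
        "inner": {
            "layer": "NUCLEUS", "operators": [],    # nuclear operators
            "inner": {"layer": "PREDICATE", "inner": None},
        },
    },
}

def layers(node):
    """List layer names from the outermost to the innermost."""
    out = []
    while node:
        out.append(node["layer"])
        node = node.get("inner")
    return out

print(" > ".join(layers(word)))  # → WORD > CORE > NUCLEUS > PREDICATE
```

The nesting captures the statement above that the outer layers include the inner ones, while the per-layer operator lists mirror the operator projection.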


The USM includes a functional definition of categories. Thus, many affixes are mere recategorisers. These lexical operators can be realized by different categories and they can perform different functions: recategorisers or conveyors of more complex meanings, such as transitivisers (NUC operators) or lexical negation (CORE operators). Lexical arguments take up functional positions in the constituent projection of the word. They perform semantically-motivated syntactic functions: Argument, Argument-Adjunct, and Periphery (which in RRG, when dealing with the Layered Structure of the Clause, is called Adjunct). Below we provide a few examples of trees for complex words, so that the internal structure of words can be observed (as in Martín Arista 2008a: 129-132):

(19) a.  First and Second Argument in a Complex Word: bellringestre 'bell ringer'

      COMPLEX WORD (N)
      └─ CORE (N)
         ├─ ARG: WORD > CORE > NUC: bell
         ├─ ARG: WORD > CORE > NUC: ring
         └─ NUC (N): -estre


b.  Argument-Adjunct in Complex Word Core: upastigan 'go up'

      COMPLEX WORD (V)
      └─ CORE (V)
         ├─ AAJ: WORD > CORE > NUC: up
         └─ NUC (V): astigan

c.  Periphery and Core of Complex Word: inwritere 'inner secretary'

      COMPLEX WORD (N)
      ├─ PERIPHERY: WORD > CORE > NUC: in
      └─ CORE (N)
         ├─ NUC: writ
         └─ ARG (N): NUC (N): -ere


d.  Cumulation of two semantic elements represented in a single form: toga 'leader'

      COMPLEX WORD (N)
      └─ CORE (N)
         ├─ NUC: tog
         └─ ARG (N): WORD (N) > CORE (N) > NUC (N): -a

      Operator projection: NUC > CORE > COMPLEX WORD > COMPLEX WORD, with the operators Number, Gender and Case attaching at the COMPLEX WORD level (the label is duplicated because Gender and Case operate at the same level).
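Constituent projections like those in (19) are ordinary labelled trees, so they can be represented and traversed programmatically. A minimal sketch of our own for bellringestre in (19.a); the nested-tuple encoding is an illustration, not part of the USM:

```python
# Nested (label, children) tuples for the constituent projection of
# OE bellringestre 'bell ringer' (19.a); leaves are morphs.
tree = ("COMPLEX-WORD-N",
        [("ARG", [("WORD", [("CORE", [("NUC", ["bell"])])])]),
         ("ARG", [("WORD", [("CORE", [("NUC", ["ring"])])])]),
         ("NUC-N", ["estre"])])

def leaves(node):
    """Collect the morphs at the leaves, left to right."""
    label, children = node
    out = []
    for child in children:
        out.extend(leaves(child) if isinstance(child, tuple) else [child])
    return out

print("".join(leaves(tree)))  # → bellringestre
```

Concatenating the leaves recovers the surface form of the complex word, which is one quick sanity check on a proposed constituent analysis.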

Before continuing, some comments on these trees need to be made. Firstly, the structure in (19.a), with two arguments for a complex word, does not exist in OE derived verbs. For this reason, the example used is taken from Martín Arista (2008a), and it shows how a word is represented when the nucleus has two lexical arguments. This implies that both are equally essential in the derivative process for the logical semantic structure of the word. Now, compare this word to (19.c), inwritere. In this case, the nucleus writ has two lexical elements, in- and -ere, but they do not have the same role or status in the LS of the word. While -ere is an argument, that is, essential for the outcome of the word, the peripheral element in- is an adverbial element that modifies the noun writere by adding some additional semantic information to it. With regard to (19.b), it shows a verb derived through a prefixal element, up, which functions as an AAJ of the verb. Finally, (19.d)


shows the representation of the semantic accumulation of three grammatical elements that are represented by a single form. Note that when two operators work at the same level, as is the case here with gender and case, the labels are duplicated. This system also accounts for polysemous constructions in an adequate way, as Martín Arista (2008a: 134) shows with the example of forcuman. This OE verb can be both a compound, meaning 'come before', and a derivative, meaning 'destroy'. In each case, a different tree represents its internal structure:

(20) a.  forcuman 'come before'

      COMPLEX WORD (V)
      └─ CORE (V)
         ├─ AAJ: WORD > CORE > NUC: for
         └─ NUC (V): cuman

     b.  forcuman 'destroy'

      WORD (V)
      └─ CORE (V)
         └─ NUC (V): cuman

      Operator projection: for (transitivity) attaches as a NUC operator (NUC > CORE > WORD).


In (20.a), we have the structure of a complex word, in which for is a word that acts as an AAJ of the more complex word that dominates it, forcuman. On the other hand, forcuman in (20.b) is a simplex word, with one single word node that has lexical operators (for in this case). For (20.a), only the constituent projection is needed. In (20.b) the operator projection is also needed in order to account for the function of the bound lexical morpheme for as a transitivizer. Going deeper into the operator projection, in this model it is devised so that it accounts for inflection, known as relational morphology, and for the derivation not accounted for in the constituent projection, called non-motivated non-relational morphology in the USM. On these grounds, the operator projection accounts for the continuity between inflection and derivation, an issue that is controversial and still today the center of lively discussion. Thus, as can be seen in (19.d), the inflectional categories of number, gender and case are accounted for, and, as can be seen in (20.b), the derivative prefix for-, which works as a transitivizer, is also outlined. That is, the operator projection accounts for those derivational morphemes that behave as operators, as is the case with for in forcuman 'destroy'. In traditional terms, this kind of derivation would be equated with derivation proper, as opposed to compounding, which is the traditional term for the derivation that is also represented in the constituent projection, as is the case of forcuman 'come before'. In the USM, this organization of morphological phenomena is explained in two ways. Diachronically, there are free lexemes that become bound lexemes and afterwards become inflectional through a process of grammaticalization; an example of this is the verbal suffix -ian. Typologically, some meanings are inflectional in some languages and derivational in others (Bybee 1985).
Thus, this arrangement of phenomena accounts for the continuity between inflection and derivation. It also permits further justification of the interaction of semantic, lexical and syntactic phenomena, since it basically accounts for the tendency of languages to grammaticalize semantic features. In this line, it fits the functional principle followed by the functional schools (FG, RRG, the LCM) that semantics motivates syntax. To conclude, operators are functional labels, which may or may not be realized (in our own view) in the constituent projection as lexical arguments. The categories that can realize them are adjectives, nouns, adverbs, and verbs. Thus, word syntax is related to the operator projection, more specifically to inflection and recategorization. The conclusions of the model are that morphology is driven by syntax and semantics but is independent of the characteristics of bases and adjuncts. This takes us to the functional basis of the theory of separation and unification, where compounding and derivation in the structuralist sense are avoided and some languages, such as OE, are shown not to conform to the principle of lexical integrity.

o-o-o-o-o-o-o

Exercise 9

Look for a word in present-day English that is highly derived (as a derivation or as a compound) and analyze it in terms of the USM: provide its constituent and operator projection.

o-o-o-o-o-o-o

Exercise 10

Look for two words in the DOE and in Nerthus (see note 10), and provide the following information: what is their word class? What are their inflectional morphological features (e.g. weak verb class 1)? What are their meanings? Are there any derived predicates or related predicates?

10. Nerthus is an example of a database created and maintained departing from the study of morphology (following functional principles). The task here is to enter the database (www.nerthus.com) and, with the help of an Old English dictionary (e.g. the Dictionary of Old English (DOE) online: http://www.doe.utoronto.ca/pages/pub/web-corpus.html), to see the possibilities it offers.


For instance, in the fragmentary search option, if you look for dalc, the database launches the following result:

This means that dalc is also the base of a compound noun, ste:ordalc. In this case, you should try to find a tentative explanation for such a derivative. What is/are the predicate's meaning(s)? Finally, comment on all the data you find interesting.

o-o-o-o-o-o-o

The aim of this section, then, was to show that theoretical knowledge and positioning in morphology and lexical semantics are important for the application of meaning studies. This knowledge can be applied to the creation of databases, which can later provide data to be used in different ways: explanatory (building up a model of language) or descriptive (studying the degree of productivity of words in a certain language). Thus, the theoretical positioning is important for the subsequent elaboration of language models in language technologies. More on language technologies, especially as regards term extraction, is described in chapter 6.
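As an aside, a fragmentary search of the kind discussed above amounts to a substring match over headwords. A toy sketch of our own (the two entries come from the dalc example, with the length mark of ste:ordalc dropped; the code is an illustration, not Nerthus's implementation):

```python
# Toy fragmentary search over a headword list, in the spirit of the
# database search option described in the text. Entries and code are
# our own illustration (length marks omitted).
lexicon = ["dalc", "steordalc"]

def fragmentary_search(fragment, entries):
    """Return every headword containing the fragment."""
    return [e for e in entries if fragment in e]

print(fragmentary_search("dalc", lexicon))  # → ['dalc', 'steordalc']
```

Searching for a base in this way is what surfaces the derivatives and compounds built on it, which is exactly the use the exercise puts the database to.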

5.  FURTHER RECOMMENDED READINGS

Martín Arista, Javier: 2010. The Nerthus Project: Aims and Methodology. In Current Projects in Historical Lexicography. Cambridge: Cambridge Scholars Publishing.


Cortés Rodríguez, Francisco: 2006. Derivational Morphology in Role and Reference Grammar: A New Proposal. RESLA 19. 41-66. Available at http://dialnet.unirioja.es/servlet/articulo?codigo=2198590
Harley, Heidi: 2003. Pre- and suf-fix-es: Engl-ish Morph-o-log-y. In A Linguistic Introduction to English Words. Arizona. http://dingo.sbs.arizona.edu/~hharley/PDFs/WordsBook/Chapter4.pdf
Harley, Heidi: 2003. Morphological idiosyncrasies. In A Linguistic Introduction to English Words. Arizona. http://dingo.sbs.arizona.edu/~hharley/PDFs/WordsBook/Chapter5.pdf

6.  REFERENCES

Aronoff, Mark and Fudeman, K.: 2011. What is Morphology? Wiley-Blackwell.
Baker, M.: 1988. Incorporation. Chicago: University of Chicago Press.
Baker, M.: 2003. Lexical Categories. Cambridge: Cambridge University Press.
Bally, C.: 1965. Linguistique Générale et Linguistique Française. Berne: A. Francke.
Beard, R.: 1995. Lexeme-Morpheme Base Morphology. Albany, N.Y.: SUNY Press.
Beard, R. and Volpe, M.: 2005. "Lexeme-Morpheme Base Morphology". In P. Stekauer and R. Lieber (eds.). Handbook of Word-Formation. Dordrecht: Springer. 189-205.
Booij, G.: 2005. The Grammar of Words. An Introduction to Linguistic Morphology. Oxford: Oxford University Press.
Chomsky, N.: 1975. Reflections on Language. New York: Pantheon [Spanish translation, Barcelona: Ariel, 1979].
Cortés Rodríguez, F. J.: 2006a. Negative affixation within the Lexical Grammar Model. RæL 5. 27-56.
Cortés Rodríguez, F. J.: 2006b. Derivational morphology in Role and Reference Grammar: a new proposal. Revista Española de Lingüística Aplicada (RESLA) 19. 41-66.
Davies, Mark: Corpus of Contemporary American English (http://corpus2.byu.edu/coca/)
Dik, S. C.: 1997a. The Theory of Functional Grammar I. Edited by Kees Hengeveld. Berlin and New York: Mouton de Gruyter.
Dik, S. C.: 1997b. The Theory of Functional Grammar II. Edited by Kees Hengeveld. Berlin and New York: Mouton de Gruyter.
Di Sciullo, A. M. and Williams, E.: 1986. On the Definition of Word. Cambridge, MA: MIT Press.
Di Paolo Healey, A. et al.: 2003. The Dictionary of Old English. Toronto: Toronto University Press.


Everett, D.: 2002. Towards an RRG theory of morphology. Lecture delivered at the 2002 International Conference on Role and Reference Grammar, held at the University of La Rioja, Spain.
Faber, P. and Mairal Usón, R.: 2002. Functional grammar and lexical templates. In R. Mairal Usón and M. J. Pérez Quintero (eds.). New Perspectives on Predicate Argument Structure in Functional Grammar. Berlin and New York: Mouton de Gruyter. 39-94.
Foley, W. A. and Van Valin, R. D. Jr.: 1984. Functional Syntax and Universal Grammar. Cambridge: Cambridge University Press.
Greenberg, J.: 1966. Linguistic Universals. The Hague: Mouton.
Halle, M.: 1973. Prolegomena to a theory of word-formation. Linguistic Inquiry 4. 3-16.
Halle, M. and Marantz, A.: 1993. Distributed morphology and the pieces of inflection. In K. Hale and S. J. Keyser (eds.). The View from Building 20. Cambridge, MA: MIT Press. 111-176.
Harley, H. and Noyer, R.: 1999. Distributed Morphology. GLOT International 4.4. 3-9.
Harley, Heidi: 2003. A Linguistic Introduction to English Words. Available online at: http://dingo.sbs.arizona.edu/~hharley/PDFs/WordsBook/Frontmatter.pdf
Hengeveld, K.: 1989. Layers and operators in Functional Grammar. Journal of Linguistics 25. 125-157.
Hengeveld, K.: 2005. Dynamic expression in Functional Discourse Grammar. In C. de Groot and K. Hengeveld (eds.). Morphosyntactic expression in Functional Grammar. Berlin: Mouton de Gruyter. 53-86.
Hengeveld, K. and Mackenzie, J. L.: 2005. Interpersonal functions, representational categories, and syntactic templates in Functional Discourse Grammar. In J. L. Mackenzie and M. de los A. Gómez-González (eds.). Studies in Functional Grammar. Bern: Peter Lang. 9-27.
Hengeveld, K. and Mackenzie, J. L.: 2006. Functional Discourse Grammar. In K. Brown (ed.). Encyclopedia of Language and Linguistics, 2nd Edition, Volume 4. Oxford: Elsevier. 668-676.
Hengeveld, K. and Mackenzie, J. L.: 2009. Functional Discourse Grammar. In B. Heine and H. Narrog (eds.). The Oxford Handbook of Linguistic Analysis. Oxford: Oxford University Press.
Ibáñez Moreno, Ana: 2012. A Functional Approach to Derivational Morphology: The Case of Verbal Suffixes in Old English. Germany: Lambert Academic Publishing.
Kastovsky, D.: 1968. Old English Deverbal Substantives Derived by Means of a Zero Morpheme. PhD dissertation 1967, Tübingen University. Esslingen/N.: Langer.


Kastovsky, D.: 1992a. The formats change – the problems remain: Word-formation theory between 1960 and 1990. In M. Pütz (ed.). Thirty years of linguistic evolution. Amsterdam: John Benjamins.
Kastovsky, D.: 1992b. Semantics and Vocabulary. In R. M. Hogg (ed.). The Cambridge History of the English Language. Vol. 1. The Beginnings to 1066. Cambridge: Cambridge University Press. 290-407.
Kiparsky, P.: 1982. Analogy. In W. Bright (ed.). International Encyclopedia of Linguistics. New York: Oxford University Press.
Kiparsky, P.: 1985. Some consequences of Lexical Phonology. Phonology Yearbook 2. 83-136.
Lakoff, G.: 1987. Women, Fire and Dangerous Things: What Categories Reveal about the Mind. Chicago: University of Chicago Press.
Lass, R.: 1994. Old English: A Historical Linguistic Companion. Cambridge: Cambridge University Press.
Mairal Usón, R.: 2001. En torno a la interficie léxico-gramática en los modelos gramaticales. In P. Durán and G. Aguado (eds.). La investigación en las lenguas aplicadas: enfoque multidisciplinario. Madrid: Fundación Gómez Pardo. 115-151.
Mairal Usón, R. and Cortés Rodríguez, F. J.: 2000-2001. Semantic packaging and syntactic projections in word-formation processes: the case of agent nominalizations. RESLA 14. 271-294.
Mairal Usón, R. and Faber, P.: 2005. Decomposing semantic decomposition. Towards a semantic metalanguage in RRG. Lecture delivered at the 2005 Role and Reference Grammar International Conference, Taiwan. (Internet document available at http://linguistics.buffalo.edu/research/rrg/Mairal2005.pdf.)
Mairal Usón, R., Ruiz de Mendoza, F. and Periñán-Pascual, C.: 2011. Constructions within a Natural Language Processing Knowledge Base. In H. Boas and F. Gonzálvez-García (eds.). Construction Grammar goes Romance (provisional title). Amsterdam/Philadelphia: John Benjamins.
Mairal Usón, R. and Periñán-Pascual, J. C.: 2010. Role and Reference Grammar and Ontological Engineering. In J. L. Cifuentes, A. Gómez, A. Lillo, J. Mateo and F. Yus (eds.). Los caminos de la lengua. Estudios en homenaje a Enrique Alcaraz Varó. Alicante: Universidad de Alicante. 649-665.
Marantz, A.: 1997. No escape from syntax. Penn Working Papers. 555-595.
Marchand, H.: 1969. The Categories and Types of Present-Day English Word-Formation: A Synchronic-Diachronic Approach. München: C. H. Beck'sche Verlagsbuchhandlung.


Martín Arista, J.: 2006a. Derivation and compounding in a structural-functional theory of morphology. Paper delivered at the meeting of the Danish Functional Grammar Group, held at the University of Copenhagen, April 2006.
Martín Arista, J.: 2006b. Alternations, relatedness and motivation: Old English A-. In P. Guerrero Medina and E. Martínez Jurado (eds.). Where Grammar Meets Discourse: Functional and Cognitive Perspectives. Córdoba: Servicio de Publicaciones de la Universidad de Córdoba. 113-132.
Martín Arista, J.: 2006c. From dictionary to lexical database: the limits of lexical derivation. Paper delivered at Queen Mary College, University of London.
Martín Arista, J.: 2007a. Morphological constructions and the functional definition of lexical categories. Paper delivered at the 2007 Conference of the Societas Linguistica Europaea, held at the University of Joensuu.
Martín Arista, J.: 2007b. Compounding and derivation in Nerthus: an online lexical database of Old English. Paper delivered at the Centre for Medieval Studies, University of Toronto.
Martín Arista, J.: 2008a. Unification and separation in a functional theory of morphology. In R. Van Valin (ed.). Investigations of the Syntax-Semantics-Pragmatics Interface. Amsterdam: John Benjamins. 119-145.
Martín Arista, J.: 2008b. Old English ge- and the descriptive power of Nerthus. Journal of English Studies 5. 209-232.
Martín Arista, J.: 2009. A Typology of Morphological Constructions. In C. Butler and J. Martín Arista (eds.). Deconstructing Constructions. Amsterdam: John Benjamins. 85-115.
Martín Arista, J.: 2011. Projections and Constructions in Functional Morphology. The Case of Old English HRĒOW. Language and Linguistics 12/2. 393-425.
Pyles, T. and Algeo, J.: 1982. The Origins and Development of the English Language. Orlando: Harcourt Brace.
Rijkhoff, J.: 2002. The Noun Phrase. Oxford: Oxford University Press.
Selkirk, E.: 1982. The Syntax of Words. Cambridge, Mass.: MIT Press.
Siegel, D.: 1979. Topics in English Morphology. New York: Garland.
Sinclair, J.: 1991. Corpus Concordance Collocation. Oxford: Oxford University Press.
Sproat, R.: 1985. On Deriving the Lexicon. PhD dissertation. MIT.
Stekauer, Pavol: 2000. English Word-formation: A History of Research, 1960-1995. Tübingen: Gunter Narr Verlag.
Stump, G. T.: 2005. Word-formation and inflectional morphology. In P. Stekauer and R. Lieber (eds.). Handbook of Word-Formation. Dordrecht: Springer.


Van Valin, R. D. Jr. and LaPolla, R.: 1997. Syntax: Structure, Meaning and Function. Cambridge: Cambridge University Press.
Van Valin, R. D. Jr.: 2005. Exploring the Syntax-Semantics Interface. Cambridge: Cambridge University Press.
Williams, E.: 1981. On the Notions 'Lexically Related' and 'Head of a Word'. Linguistic Inquiry 12 (2). 245-274.
Williams, E.: 2007. Dumping lexicalism. In G. Ramchand and C. Reiss (eds.). The Oxford Handbook of Linguistic Interfaces. Oxford: Oxford University Press. 353-382.
Word Reference: www.wordreference.com (obtained from Collins Concise English Dictionary).
Hamawand, Zeki: 2011. Morphology in English: Word Formation in Cognitive Grammar. London: Continuum.
Zwicky, A.: 1986. The general case: basic form versus default form. In Proceedings of the 12th annual meeting of the Berkeley Linguistics Society. 305-314.
Zwicky, A.: 1992. Some Choices in the Theory of Morphology. In R. Levine (ed.). Formal Grammar: Theory and Implementation. Oxford: Oxford University Press. 327-371.

7.  KEYS TO EXERCISES

Exercise 1

Suggested answer:

The generativist Robert Beard (1995: 1) suggests, in his book Lexeme-Morpheme Base Morphology: A General Theory of Inflection and Word Formation (SUNY Press), the following:

Morphology is superficially the sum of all phonological means of expressing the relations of the constituents of words, of words in phrases, and of the phrasal constituents of sentences. The key element of morphology is the WORD, a symbol comprising mutually implied sound and meaning. The central purpose of morphology, therefore, is to map sound to meaning within the word and between words. The issues of morphology are what constitutes linguistic sound, what determines linguistic meaning, and how the two are related. Since these questions are central to the linguistic enterprise in general, morphology should be the centerpiece of language


study. Yet, instead of gravitating to the center of linguistics during the recent Generativist revolution in language studies, in the past few decades morphology has all but vanished from the agenda of linguistic inquiry.

Within the school of Systemic Functional Linguistics, Richard Nordquist, in http://grammar.about.com/od/rs/g/Systemic-Functional-Linguistics-Sfl.htm, gives the following definition of systemic functional linguistics:

The study of the relationship between language and its functions in social settings. In systemic functional linguistics (SFL), three strata make up the linguistic system: meaning (semantics), sound (phonology), and wording or lexicogrammar (syntax, morphology, and lexis). Systemic functional linguistics treats grammar as a meaning-making resource and insists on the interrelation of form and meaning.

He also provides the following definition of morphology: The branch of linguistics (and one of the major components of grammar) that studies word structures, especially in terms of morphemes. Adjective: morphological. There is a basic distinction in language studies between morphology (which is primarily concerned with the internal structures of words) and syntax (which is primarily concerned with the ways in which words are put together in sentences).

As for cognitive grammar, we have the following definition: 
Morphology is an essential subfield of linguistics. Generally, it aims to describe the structures of words and patterns of word formation in a language. Specifically, it aims to (i) pin down the principles for relating the form and meaning of morphological expressions, (ii) explain how the morphological units are integrated and the resulting formations interpreted, and (iii) show how morphological units are organized in the lexicon in terms of affinity and contrast. The study of morphology uncovers the lexical resources of language, helps speakers to acquire the skills of using them creatively, and consequently express their thoughts and emotions with eloquence (Zeki Hamawand, Morphology in English: Word Formation in Cognitive Grammar. Continuum, 2011).


If we compare these three definitions, we can say that they all agree on one thing: morphology is the branch of linguistics that focuses on the word as its unit of analysis. However, we can also observe differences in approach: the generativists focus on the relationships between sound, form and meaning, and consider morphology to be the discipline that mediates between them, standing at the center of all linguistic phenomena. The systemic functional approach focuses on the internal structure of words, as opposed to syntax, which is concerned with the internal structure of sentences, as if the two had little in common. The cognitivist approach seems to consider morphology especially with regard to its connection to the lexicon and to the lexical resources of language. Finally, as stated in chapter 2 and in this chapter, the latest proposal within the functional framework of Role and Reference Grammar (Everett 2002, Martín Arista 2008a, b) puts forward an eclectic view of morphology: morphology takes care of the analysis of the internal structure of words, but this internal structure responds to the same principles as the internal structure of sentences and is therefore deeply interconnected with syntax and semantics.

o-o-o-o-o-o-o

Exercise 2

Suggested answers:

A pianist is someone who "does" piano, that is, who plays the piano. The definition given in www.wordreference.com (obtained from Collins Concise English Dictionary) is 'a person who plays the piano', as seen in the image below:


Figure 1.  Pianist as seen in www.wordreference.com

A lover is 'someone who loves'. The entries given in http://www.macmillandictionary.com are: 1. 'someone who is in a loving or sexual relationship with another person' and 2. 'someone who likes or enjoys something very much'. This is shown in figure 2 below:



Figure 2.  Lover as seen in http://www.macmillandictionary.com


If we compare both derived words, we can see that pianist is closer in meaning to its "original meaning", that is, to the sum of the meanings of piano and -ist. In the case of lover, the central meaning in both definitions is indeed 'someone who loves', but different, more contextually specific senses have been added to this original meaning. Here "love" is a more abstract meaning than "piano", and this may be the reason why there is a higher degree of semantic change.

Exercise 3

Suggested answer:

Continuing with the verb underestimate, according to the meaning of under- it would be "to estimate something/someone under its/their actual value". In this sense, according to Collins Concise English Dictionary (in www.wordreference.com), the verb underestimate has two entries: 1. to make too low an estimate of: he underestimated the cost and 2. to think insufficiently highly of: to underestimate a person. This can be seen below:

Figure 3.  Underestimate in www.wordreference.com

Therefore, the meaning of under- is preserved in both definitions. The meaning of the prefix is basic and important for the resulting meaning of the word; we could even say that the prefix carries much of the word's semantic content.


Exercise 4

Suggested answer: see the example given in the unit for unpremeditatedly. Another example: inedible < edible < eat. For eat there are 44,529 results; for edible, 1,611; for inedible, just 202. This shows very clearly that the words involving more recursion occur less frequently. This is due in part to the higher restriction of the semantic domains in which such a derived word will be used: the more affixes a word has, the more meaning specifications are added to it.

Exercise 5

Suggested answer: count the words and the morphemes, and calculate the index. Your result should be M/W = 2.33: Arm-a vir-um-que can-o: 7 morphemes divided by 3 words = 2.33.

Exercise 6

Suggested answer: if you compare Spanish to English, the degree of synthesis in Spanish should be higher than that of English.

Exercise 7

Suggested answer: the key to exercise 7 can be found in what follows, in regard to the loss of the inflectional system in English and how this affects all the other subsystems of the language. Harley (2003: 276) explains:

This period also saw another, perhaps more important change in English: the almost total loss of the rich inflectional system that is so characteristic of most other Germanic languages. The distinct class, gender and case suffixes on nouns, adjectives and determiners disappeared almost entirely, leaving only the modern possessive inflection 's and the plural -s; the verbal suffixes showing agreement with the number and person of the subject, as well as tense and mood, were also completely lost, leaving


only the past tense marking -ed, the 3rd singular present tense -s and the progressive -ing. The 3rd singular -eth ending and the 2nd singular -est ending hung around in religious texts for a while, because of the conservativeness of ceremonial language that we've remarked on before, but by 1400, the entire complicated system had essentially disappeared. In the space of 200 years, English went from being a highly inflected language with relatively free word order to being an almost completely isolating language with quite fixed SVO word order. It's hard to say why this change was so fast, radical and complete. One major contributor to the decline was a new phonological trend of reducing vowels to /ə/ in unstressed syllables. Since the inflectional endings were all unstressed, the vowel reduction blurred acoustic clues to the different inflectional classes and made them much more difficult to distinguish. It may have also been helped along by the number of second language speakers of English during this time: both the native speakers of French in the south and the native speakers of Old Norse in the north had different systems of gender and inflection in their own languages. Given that the English inflectional markings were hard to hear because of reduction, and given that a complex inflectional system is one of the most difficult aspects of a new grammar for a second language learner to master, it may be that the second language speakers of English helped spread the use of uninflected bare root forms. Whatever the cause, by 1400, no one learning English as a first or second language had to worry about noun class, case or gender, and the complexity of the verbal inflection was also severely reduced.
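Returning to the synthesis index of Exercises 5 and 6, the calculation is easy to automate once morpheme boundaries are marked with hyphens, as in the exercise. A minimal sketch of our own (function and variable names are ours):

```python
def synthesis_index(segmented_words):
    """Greenberg-style index of synthesis: morphemes per word.
    Input: words with morpheme boundaries marked by hyphens."""
    morphemes = sum(len(w.split("-")) for w in segmented_words)
    return morphemes / len(segmented_words)

# Latin "Arma virumque cano", segmented as in Exercise 5:
words = ["Arm-a", "vir-um-que", "can-o"]
print(round(synthesis_index(words), 2))  # → 2.33
```

Running the same function over comparable segmented samples of Spanish and English would give the comparison asked for in Exercise 6, with the Spanish value expected to be higher.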

Exercise 8
Suggested answer: love > love (zero derivation), daisy […]

Meaning, knowledge and ontologies

[…] = 0) ( 0 20))))
(TIME-STAMP (VALUE (COMMON
    "lcarlson at Monday, 6/4/90 12:49:46 pm"
    "updated by lori at 14:02:32 on 03/15/95"
    … etc.
    "lori at 09:47:09 on 07/20/95")))
(COLOR (SEM (COMMON RED BLUE YELLOW ORANGE PURPLE GREEN GRAY TAN CYAN MAGENTA)))
(OWNED-BY (SEM (COMMON HUMAN)))
(MADE-OF (SEM (COMMON MATERIAL)))
(PRODUCT-TYPE-OF (SEM (COMMON ORGANIZATION)))
(PRODUCED-BY (SEM (COMMON HUMAN)))
(THEME-OF (SEM (COMMON EVENT)))
(MATERIAL-OF (SEM (COMMON *NOTHING*))))

Figure 3: Example of a concept in Mikrokosmos
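As a rough illustration of the slot, facet and filler structure in Figure 3, a frame of this kind can be modelled with nested dictionaries. The slot and filler names are taken from the figure; the dictionary encoding and the `fillers` helper are our own illustration, not Mikrokosmos's actual storage format:

```python
# Sketch: slot -> facet -> filler structure of a Mikrokosmos-style concept
# frame, using plain nested dictionaries.

concept = {
    "OWNED-BY":    {"SEM": {"COMMON": ["HUMAN"]}},
    "MADE-OF":     {"SEM": {"COMMON": ["MATERIAL"]}},
    "PRODUCED-BY": {"SEM": {"COMMON": ["HUMAN"]}},
    "THEME-OF":    {"SEM": {"COMMON": ["EVENT"]}},
    "COLOR":       {"SEM": {"COMMON": ["RED", "BLUE", "YELLOW"]}},
    # Inheritance is blocked with the filler *NOTHING* where necessary:
    "MATERIAL-OF": {"SEM": {"COMMON": ["*NOTHING*"]}},
}

def fillers(frame, slot, facet="SEM"):
    """Return the fillers of a slot's facet, or [] if absent or blocked."""
    values = frame.get(slot, {}).get(facet, {}).get("COMMON", [])
    return [] if values == ["*NOTHING*"] else values

print(fillers(concept, "MADE-OF"))      # ['MATERIAL']
print(fillers(concept, "MATERIAL-OF"))  # []
```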

The Mikrokosmos website puts the current size of the ontology at about 4,500 concepts.

o-o-o-o-o-o-o-o-o

Exercise 1
Search the web for Mikrokosmos and download its free version. Then look for a concept of your choice and describe your results. If your computer operating system is not compatible, check The Ontology Page (Ontology Projects Worldwide, http://groups.csail.mit.edu/medg/people/doyle/top/projects.html) for an ontology such as the Ontology Lookup Service (OLS), whose software is freely available for use by all users, academic or commercial, under the terms of the Apache License, Version 2.0.

o-o-o-o-o-o-o-o-o

Exercise 2
Go to the aforementioned website (The Ontology Page. Ontology Projects Worldwide, http://groups.csail.mit.edu/medg/people/doyle/top/projects.html) and go through the different ontologies available. Describe some of the projects presented there.

o-o-o-o-o-o-o-o-o

6.  ONTOLOGY APPLICATIONS

Vossen (2004: 473-478) and Grishman (2004: 456-558) focus on applications such as information extraction by name identification and classification, and event extraction. This kind of information extraction can be achieved using commercial programs for these purposes. One among many


others is Mike Scott's WordSmith. This, once again, shows how what are called ontological applications frequently make use of linguistic data. Nirenburg and Raskin (2004: 159) also list a series of similar ontological-semantic applications: machine translation (MT), information extraction (IE), question answering (QA), text summarization, general human-computer dialog systems, and various combinations of them all. The difference is that they developed a program. These are, basically, the same types of applications that computational dictionaries offer.

These applications have frequently been attempted without dealing with ontological semantics, or without even considering meaning. However, when these applications are based on an ontological approach to meaning, any kind of input to the system (a text for MT, a query for a question answering system, a text for information extraction, etc.) has to undergo various stages of analysis, such as tokenization, parsing, etc. If successful, these stages generate the meaning of a text, or a "text meaning representation", as these authors claim. The text meaning representation (TMR) then serves as input to the specialized processing relevant for the particular application. For example, in machine translation, the TMR needs to be rendered in a natural language different from the one in which the input was supplied; the program that does this task is usually called a text generator. In information extraction, TMRs are used by special rules as sources for the fillers of users' queries. A question answering (QA) processor must first understand exactly what the user wants the system to do, and then find the necessary information (most of the time either in the background world-knowledge resources or in a fact repository, but sometimes in the ontology or the lexicons) to be able to generate a well-formed answer.
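The processing chain just described (analysis stages producing a TMR, which then feeds an application-specific processor such as a text generator) can be caricatured as follows. Every function here is a toy stand-in of our own; none of this is Nirenburg and Raskin's actual implementation:

```python
# Sketch of the chain: input text -> analysis stages -> text meaning
# representation (TMR) -> application-specific processing (here, a toy
# "text generator" for word-for-word machine translation).

def tokenize(text):
    return text.lower().split()

def analyze(tokens):
    # Stand-in "parse": pair each token with a dummy category.
    return [(token, "WORD") for token in tokens]

def build_tmr(parsed):
    # A real TMR is a structured meaning representation; here, a dict.
    return {"content": [token for token, _ in parsed]}

def generate(tmr, lexicon):
    # Stand-in text generator: word-for-word lexicon lookup.
    return " ".join(lexicon.get(word, word) for word in tmr["content"])

lexicon = {"the": "la", "house": "casa"}  # toy English-Spanish lexicon
tmr = build_tmr(analyze(tokenize("The house")))
print(generate(tmr, lexicon))  # la casa
```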

7.  ONTOLOGIES AND ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) can be defined as the branch of computer science concerned with the automation of intelligent behavior, disregarding whether this effort is based on the abstract concept of rationality or involves mimicking the human mind. The majority of AI frameworks include some kind of more or less powerful knowledge base, and an increasing number also provide some facility specifically dedicated to concept knowledge. In contrast with other areas of computer science, in AI the focus is on computational reasoning with ontological knowledge, on the computationally intelligent acquisition and revision of new knowledge, and on the computational use of ontological knowledge for decision-making.

8.  KNOWLEDGE BASES

A knowledge base is a collection of facts and rules, along with inference procedures to make use of these rules. This area of research originally focused on separating declarative, domain-specific knowledge from procedural knowledge, allowing a software system to behave more intelligently and less mechanically by dynamically combining small chunks of knowledge to reach an answer. Knowledge base systems are also called expert systems, in part because this work was primarily undertaken to provide a software-based expert in some field. Whereas a conventional software system specifies a series of operations to be performed in a certain order, a knowledge base system has a generic inference process that can apply a range of declaratively specified knowledge in order to reach different answers to different queries. A characteristic of open environments in knowledge base systems is that knowledge domains are dynamic and can often be modeled only with some uncertainty. Because of this, probabilistic ontologies provide the possibility of describing concepts and instances with variable degrees of belief, denoting uncertainty of description logic's terminological axioms (as opposed to vagueness in fuzzy logic).

Although knowledge bases and ontologies are close, the former refers to a collection of facts and rules, along with inference procedures to make use


of these rules, whereas the latter is a much broader term, sometimes also including a database. However, whatever the computational product and whatever the name it is given, if a computational program lists a series of well-described objects and a series of rules that link them, we have an ontology-like product or a database-like product. How to label it very much depends on how broad or specific the included entities are.

The following list is a limited selection of web references which offer interesting material about language tools:

(6)
— http://www.linguistik.uni-erlangen.de/~rrh/papers/ontologies/dublin.html
— http://ksi.cpsc.ucalgary.ca/KAW/KAW96/guarino/guarino.html (a good paper explaining some of the problems with ontologies)
— http://www.jfsowa.com/ontology/guided.htm (a guided tour of ontologies)

o-o-o-o-o-o-o-o-o

Exercise 3
Study the Nerthus methodology (document available at http://www.nerthusproject.com/) as an example of how an ontology and a database merge into a lexical product. Think of the possibilities that this methodology suggests.

o-o-o-o-o-o-o-o-o
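A knowledge base in the sense described above (facts plus rules plus an inference procedure) can be sketched with a naive forward-chaining loop. The facts reuse the Flipper and Dumbo examples from the keys to this chapter's exercises; the rule and the code are our own minimal illustration:

```python
# Sketch: a knowledge base as "facts + rules + inference procedure".
# Facts are triples; the single rule says: if X is an instance of class C
# and C is a subclass of D, then X is also an instance of D.

facts = {("instance", "Flipper", "dolphin"),
         ("subclass", "dolphin", "cetacean")}

def forward_chain(facts):
    """Naive forward chaining: apply the rule until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (rel1, x, c) in facts:
            for (rel2, c2, d) in facts:
                if rel1 == "instance" and rel2 == "subclass" and c == c2:
                    new.add(("instance", x, d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

derived = forward_chain(facts)
print(("instance", "Flipper", "cetacean") in derived)  # True
```

Different queries then simply inspect the derived fact set, rather than following a fixed sequence of operations, which is exactly the contrast with conventional software drawn above.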

9.  FURTHER RECOMMENDED READINGS

A recommended journal is Applied Ontology, the first journal with an explicit and exclusive focus on ontological analysis and conceptual modeling from an interdisciplinary point of view. It is the journal of the International Association for Ontology and its Applications (http://iaoa.org/jao/jao.html) and publishes articles in several research areas.


10. REFERENCES

Aguado, G., Álvarez de Mon, I. and Pareja, A. 2009. "Una versión interdisciplinar de la anotación semántica." In Terminología y sociedad del conocimiento. Bern: Peter Lang.
Buendía, M. and Ureña, J. M. 2010. "Towards a Methodology for Semantic Annotation: The case of meteorology." In Traducción y modernidad: textos científicos, jurídicos, económicos y audiovisuales. Córdoba: Universidad de Córdoba.
Butler, Chris. 2012. Antological Approach…
Chu-Ren Huang, N. Calzolari, A. Gangemi, A. Lenci, A. Oltramari and L. Prévot. 2010. Ontology and the Lexicon. Cambridge/New York: Cambridge University Press.
De Groote, Carine. 2013. Introduction to Terminology and Terminography. Course taught within the International Postgraduate Course on Dutch and Translation, University of Ghent, Belgium.
Fellbaum, C. (ed.). 1998. WordNet: An Electronic Lexical Database. Cambridge, Massachusetts: The MIT Press.
Fellbaum, C. 2002. "Parallel Hierarchies in the Verb Lexicon." In Proceedings of the OntoLex Workshop, LREC, Las Palmas, Spain. http://www.cogsci.princeton.edu/~wn/
García, A., Pareja-Lora, A. and Pradana, D. 2008. "Reutilización de tesauros: el documentalista frente al reto de la web semántica." El Profesional de la Información 17(1): 8-21.
Gómez-Pérez, A., Fernández-López, M. and Corcho, Ó. 2003. Ontological Engineering: with examples from the areas of knowledge management, e-commerce and the Semantic Web. London: Springer-Verlag London Ltd.
Heflin, J. and Zhengxiang Pan. 2004. "A Model Theoretic Semantics for Ontology Versioning." In S. A. McIlraith et al. (eds.), ISWC 2004, LNCS 3298, pp. 62-76. Berlin/Heidelberg: Springer-Verlag.
Metzinger, T. and Vittorio Gallese. 2007. "The Emergence of a Shared Action Ontology: Building blocks of a theory." In Schalley, Andrea and Dietmar Zaefferer (eds.), Ontolinguistics: How Ontological Status Shapes the Linguistic Coding of Concepts. Berlin: Mouton de Gruyter.
Miller, G. A. et al. 1990. "Introduction to WordNet: An On-line Lexical Database." International Journal of Lexicography 3(4): 235-244.
Nickles, Matthias, Adam Pease, Andrea C. Schalley and Dietmar Zaefferer. 2007. "Ontologies Across Disciplines." In Schalley, Andrea and Dietmar Zaefferer (eds.), Ontolinguistics: How Ontological Status Shapes the Linguistic Coding of Concepts. Berlin: Mouton de Gruyter.
Nirenburg, Sergei and Victor Raskin. 2004. Ontological Semantics. Cambridge, Massachusetts/London, England: MIT Press.
Pareja-Lora, A. 2012. "OntoLingAnnot's LRO: An Ontology of Linguistic Relations." In Proceedings of the 10th Terminology and Knowledge Engineering Conference: New frontiers in the constructive symbiosis of terminology and knowledge engineering, pp. 49-64. Madrid: Universidad Politécnica de Madrid [http://www.oeg-upm.net/tke2012].
Pareja-Lora, A. 2012. Providing Linked Linguistic and Semantic Web Annotations: The OntoTag Hybrid Annotation Model. Saarbrücken: LAP LAMBERT Academic Publishing. ISBN-13: 978-3-659-25526-7 [https://www.lap-publishing.com/catalog/details//store/gb/book/978-3-659-25526-7/providing-linked-linguistic-and-semantic-web-annotations].
Pareja-Lora, A. 2013. "An Ontology-Driven Methodology to Reuse, Link and Merge Terminological and Language Resources." In Proceedings of the 10th International Conference on Terminology and Artificial Intelligence: Terminology for a networked society: Recent advances in multilingual knowledge-based resources, pp. 189-196. Paris (Villetaneuse), France, 28-30 October 2013 [https://lipn.univ-paris13.fr/tia2013/Proceedings/actesTIA2013.pdf#page=191].
Parikh, Prashant. 2010. Language and Equilibrium. Cambridge, Massachusetts/London, England: The MIT Press.
Periñán-Pascual, Carlos and Francisco Arcas Túnez. 2011. "Introducción a FunGramKB." Anglogermánica Online 8: 1-15.
Periñán-Pascual, Carlos and Ricardo Mairal Usón. 2010. "La gramática de COREL: un lenguaje de representación conceptual." Onomázein 21: 11-45.
Philpot, A., Eduard Hovy and Patrick Pantel. 2010. "The Omega Ontology." In Chu-Ren Huang et al. (eds.), Ontology and the Lexicon. Cambridge: Cambridge University Press.
Pustejovsky, J. 2012. Natural Language Annotation for Machine Learning. USA: O'Reilly Media.
Ray, Steven. 2004. NIST's Semantic Approach to Standards of Interoperability. Unpublished presentation.
Schalley, Andrea and Dietmar Zaefferer (eds.). 2007. Ontolinguistics: How Ontological Status Shapes the Linguistic Coding of Concepts. Berlin: Mouton de Gruyter.
Schalley, Andrea and Dietmar Zaefferer. 2007. "Ontolinguistics: An outline." In Schalley, Andrea and Dietmar Zaefferer (eds.), Ontolinguistics: How Ontological Status Shapes the Linguistic Coding of Concepts. Berlin: Mouton de Gruyter.
Smith, B. 1982. "Linguistic and computational semantics." In Proceedings of the Conference of the Association for Computational Linguistics.
Vossen, P. 2004. "Ontologies." In Mitkov, R. (ed.), The Oxford Handbook of Computational Linguistics. Oxford: Oxford University Press.

11.  KEYS TO EXERCISES

Exercise 1
Suggested answer: Mikrokosmos is announced as a broad-spectrum ontology for multilingual Natural Language Processing. The reader can obtain useful information about it on the webpage of the Istituto di Linguistica Computazionale "Antonio Zampolli": http://www.ilc.cnr.it/EAGLES96/rep2/node1.html. There it is stated:

In the last several years a number of higher or upper level ontologies have become generally available to the knowledge representation and natural language (NL) research communities. While NL applications in different domains appear to require domain-specific conceptualizations, there is some hope that a common upper level of domain-independent concepts and relations can be agreed: such a shared resource would greatly reduce the load on individual NL application developers to reinvent a model of the most general and abstract concepts underlying language and reasoning. Examples of upper-level ontologies are Cyc, Mikrokosmos, the Generalised Upper Model, and Sensus. These are by no means the only candidates, but they give an indication of the work going on in this area, especially work of relevance to NL processing.

Ontologies are not lexical resources per se, but conceptualizations underlying language, so mappings from lexicons into ontologies need to be provided. One of the advantages of this is that ontologies can serve an interlingual role, providing the semantics for words from multiple languages, as is shown in WordNet


(http://wordnetweb.princeton.edu/perl/webwn), where different language wordnets share the same Top Ontology.

Mikrokosmos is an in-depth, broad-coverage ontology for multilingual Natural Language Processing. The Mikrokosmos ontology is part of the Mikrokosmos knowledge-based machine translation system currently under development at the Computer Research Laboratory, New Mexico State University. It is meant to provide a language-neutral repository of concepts in the world to assist in the process of deriving an interlingual text meaning representation for texts in a variety of input languages. The ontology divides at the top level into object, event, and property. Nodes occurring beneath these divisions in the hierarchy constitute the concepts in the ontology and are represented as frames consisting of slots with facets and fillers. Concepts have a slot for a Natural Language (NL) definition, links to superordinate and subordinate concepts, and an arbitrary number of other properties (local or inherited). These slots have facets, each of which in turn has a filler. Facets capture such things as the permissible semantic types or ranges of values for the slot (sem), the actual value (value) if known, and default values (default). Where necessary, inheritance has been blocked by using the value NOTHING. According to the Mikrokosmos website, the present size of the ontology is about 4,500 concepts. The ontology is being constructed manually, with a set of guidelines for placing concepts into the ontology adequately. According to its developers, it is primarily designed to support knowledge-based machine translation (KBMT) and is being used in Spanish-English-Japanese translation applications. Therefore, the features of Mikrokosmos are the following:

(1) —  intermediate-level grain size
—  not many resources spent on development
—  rich connectivity between conceptual nodes
—  domain independent
—  language independent


The website provides further useful information. What about domain-specific ontologies? We can try experimenting with some specific medical ontologies. An example is the Ontology Lookup Service (OLS), a web service interface to query multiple ontologies from a single location with a unified output format. The OLS can integrate any ontology available in the Open Biomedical Ontology (OBO) format. (The OLS software and the SOAP web service are both freely available for use by all users, academic or commercial, under the terms of the Apache License, Version 2.0.) The OBO Foundry is a collaborative experiment involving developers of science-based ontologies who are establishing a set of principles for ontology development, with the goal of creating a suite of orthogonal, interoperable reference ontologies in the biomedical domain.

Figure 6: (ge)bēodan and all its lexically related items

o-o-o-o-o-o-o-o-o


Exercise 2
Define a domain of knowledge and try to build your own ontology, for example FOOD, MEANS OF TRANSPORT, HUMAN FEELINGS, or HUMAN ACTIVITIES LIKE GAMES OR WORK. Make your classes and attributes after reading Pareja 2003, 2008, 2012 and 2013 and using the following steps.

Step 1. Define classes
Since classes represent broad concepts in the modeled domain, define the domain concepts in a broad sense. For example: Continent, Mammal, Elephant, Dolphin, Cetacean, Proboscidean, Grey Whale, etc.

Step 2. Define the attributes of such classes
Attributes represent the particular characteristics of each object, and of each class of objects, within the domain. There are two types of attributes: class attributes and instance attributes. For example:

Class attributes for Cetacean:
•  has whiskers (Cetacean)
•  lives in the sea (Cetacean)

Class attributes for Proboscidean:
•  has a long, flexible prehensile trunk or snout (mammals)

Instance attributes for mammals:
•  height
•  sex

Instance attributes for dolphins:
•  fin length

Instances, for example:


Instance (Flipper, Dolphin); Instance (Dumbo, Elephant); Instance (America, Continent)

o-o-o-o-o-o-o-o-o
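The classes, class attributes and instances listed above can be encoded as plain data structures. A minimal sketch (class names lightly normalized to Cetacean and Proboscidean; the inheritance lookup is our own addition):

```python
# Sketch: classes, class attributes and instances from the exercise key,
# with a helper that collects attributes inherited along the subclass chain.

class_attributes = {
    "Cetacean":     ["has whiskers", "lives in the sea"],
    "Proboscidean": ["has a long, flexible prehensile trunk or snout"],
}
subclass_of = {"Dolphin": "Cetacean", "Elephant": "Proboscidean"}
instances = {"Flipper": "Dolphin", "Dumbo": "Elephant", "America": "Continent"}

def attributes_of(instance):
    """Collect class attributes inherited through the subclass chain."""
    cls = instances.get(instance)
    attrs = []
    while cls:
        attrs += class_attributes.get(cls, [])
        cls = subclass_of.get(cls)
    return attrs

print(attributes_of("Flipper"))  # ['has whiskers', 'lives in the sea']
```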


Terms and terminological applications

1. Objectives
2. Introduction
3. Defining terms
4. The acquisition of terminology
5. Terminology extraction
6. Storing terms
7. Concluding remarks
8. Further recommended readings
9. References
10. Keys to exercises

1. OBJECTIVES

In this chapter, the focus is on terms and terminological applications. After defining the basic concepts involved, we distinguish between two different theoretical viewpoints regarding the acquisition of terminology and elaborate on the different possible methodologies to extract terms from both monolingual and multilingual documents. In the last section, the focus shifts from the extraction of terms to the management of terminologies.

2. INTRODUCTION

Phenomena such as the shorter life cycle of products and the increasing internationalization of commerce have recently led to an enormous increase in the written output of companies and organizations. Whereas word processing applications such as spelling and grammar checkers have become valuable and often indispensable support tools in our everyday writing practice, none of these tools informs you whether you have used a wrong or outdated word for a given concept, or whether you have used the terminology of a competitor. Organizations invest a lot of time and money in the creation of terms to position their brand, company or product in the market: take, for example, Sony's PlayStation 3 versus Nintendo's Wii, or how the Blue Efficiency technology of car manufacturer Mercedes differs from the Efficient Dynamics line of competitor BMW. Whereas the consistent usage of these brand-positioning terms causes little trouble, this is much less the case for the usage of the accompanying terminology by the production, marketing, sales and support departments. Often, different terminology lists are used in these different departments, leading to miscommunication. This problem becomes even more acute in the case of translation or of mergers. "Having a common business language can decimate the time and effort (of business process integration), and make these critical business processes much more efficient" (Gartner). A consistent and centralized term usage by all humans and systems in an organization is key to the efficient production and standardization of (multilingual) content. Such semantic interoperability is crucial in all organizations confronted with large volumes of digital information.

In fact, anyone who has to deal with technical documents on a day-to-day basis, such as an engineer working in the car industry or an editor of a scientifically oriented magazine, will need to study terms (...) The engineer needs to correctly understand the technical manual produced by other engineers; the editor has to ensure that the use of terms in the materials that they publish is consistent and up-to-date (Le An Ha, 2007).

This standardization has many advantages, including understandability (a common understanding of documents), translatability (only one or a restricted set of possible translations), maintenance (through the limited set of terms and translations) and retrievability. Let us take the following example in German, taken from the database of a large car manufacturer. In that database, which serves as the basis for the production of all car manuals, seven different words or multiword units refer to the same concept of aluminium rims: 'Leichtmetallscheibenrad', 'Aluminiumscheibenrad', 'Alufelge', 'Leichtmetallrad', 'Aluminium-Scheibenrad', 'Leichtmetall-Scheibenrad' and 'Scheibenrad Aluminium'. Keeping only one of these terms in the database will lead to a much more manageable, consistent and searchable database. A related issue is that of spelling variation (e.g. 'Record Number', 'Rec. No.', 'Record No.', 'Rec #', 'Rec.-No') and the need for consistency at this spelling level as well.

As IBM states on its website (http://www-01.ibm.com/software/globalization/topics/terminology/introduction.html), "without controls, inconsistent and inappropriate terms infiltrate product user interfaces, documentation, packaging, marketing materials, and support Web sites. This reduces product usability, increases service calls, weakens the brand, and escalates translation costs. Some terminology errors can even cause products to malfunction".

In this chapter, we will discuss the basic principles of terminology and elaborate on the different possible methodologies to extract terms from (monolingual and multilingual) documents.
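A concept-oriented term base entry of the kind this standardization implies can be sketched as a simple mapping. The German variants are the aluminium-rim terms quoted above; the entry layout and the helper function are our own illustration, not any particular term bank's format:

```python
# Sketch: a concept-oriented term base. One entry per concept; each entry
# lists the term variants per language, with the preferred term first
# (a convention we adopt here for illustration).

term_base = {
    "aluminium rim": {
        "de": ["Leichtmetallscheibenrad", "Aluminiumscheibenrad",
               "Alufelge", "Leichtmetallrad", "Aluminium-Scheibenrad",
               "Leichtmetall-Scheibenrad", "Scheibenrad Aluminium"],
        "en": ["aluminium rim"],
    },
}

def preferred_term(concept, language):
    """Return the preferred (first-listed) variant, or None if absent."""
    variants = term_base.get(concept, {}).get(language, [])
    return variants[0] if variants else None

print(preferred_term("aluminium rim", "de"))  # Leichtmetallscheibenrad
```

Enforcing a single preferred variant per concept and language is exactly what makes the database manageable, consistent and searchable.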

3.  DEFINING TERMS

How can we define a term? In practice, it is difficult to define precisely what a term is. According to Wright (1997), terms are "the words that are assigned to concepts used in the special languages that occur in subject-field or domain-related texts". Similar notions are also introduced in the ISO 1087 (ISO 1990) definition, which defines a term as "designation of a defined concept in a special language by a linguistic expression", a concept as "a unit of thought constituted through abstraction on the basis of properties common to a set of objects", and a special language as "a linguistic subsystem, intended for unambiguous communication in a particular subject field using a terminology and other linguistic means". Terms consist of single words or multi-word units that represent discrete conceptual entities, properties, activities or relations in a particular domain (Bowker, 2008).

But how do we treat complex terms which are fully compositional? According to Daille, "it is difficult to determine whether a modified or overcomposed base term is or is not a term" (Daille, 1996, p. 30). Macken et al. (2013) state that the issue of what constitutes a term is even more difficult in a bilingual setting, as word formation rules differ across languages, and terms that are fully compositional in one language might not be compositional in another. As an example, they mention the French term 'vide-poches', which is not compositional, whereas the English ('storage compartment'), Italian ('cassettino portaoggetti') and Dutch ('opbergvak') counterparts are.

What is a terminology? Sager (1990) distinguishes three meanings of the word:
i) the set of practices and methods used for the collection, description, and presentation of terms;


ii) a theory, i.e. the set of premises, arguments and conclusions for explaining the relations between concepts and terms;
iii) a vocabulary of a specific subject field.

In the remainder of this chapter, we will mainly use the word "terminology" in its third meaning. In other words, "the items which are characterized by special reference within a discipline are the 'terms' of that discipline, and collectively they form its 'terminology'" (Sager 1990). Notable examples of such terminologies are IATE2, TermiumPlus3, EuroTermBank4 and TermSciences5, which are freely available and usually include terms from a wide range of specialized fields. As an example, Figure 1 presents the output of the IATE multilingual term bank for the term "cutter suction dredger".

Figure 1.  Output of IATE for the English term “cutter suction dredger”

Although many term banks exist covering a wide variety of domain-specific vocabulary, they cannot always keep pace with often fast-evolving technical domains. Take, for example, the following sentence taken from a manual of a construction organisation: "Akoestische prestaties van hout-betonvloeren" ('acoustic performance of timber-concrete floors'). IATE will not find a valid translation for the term "hout-betonvloer". However, since the company has all its documents available in French and Dutch, it already has all the relevant terminology available, hidden in earlier written and translated documents. A company-specific or domain-specific dictionary can thus be derived automatically from the company's document database using terminology extraction techniques. Given the availability of the sentence "Performances acoustiques de planchers mixtes en bois-béton", automatic terminology extraction would lead to the extraction of the translation pair "hout-betonvloeren / planchers mixtes en bois-béton". We will come back to this issue of automatic terminology extraction in Section 5.

2  http://iate.europa.eu
3  http://www.btb.termiumplus.gc.ca
4  http://www.eurotermbank.com
5  http://www.termsciences.fr

4.  THE ACQUISITION OF TERMINOLOGY The relationship between words and objects is often illustrated by the semiotic triangle as illustrated in Figure 2 (Ogden and Richards, 1923):

[Figure: a triangle with "concept" at the top vertex and "word" and "object" at the base vertices; a dotted line connects word and object.]

Figure 2.  Semiotic triangle

The semiotic triangle has three vertices, corresponding to the word, its interpretation (the concept) and the object it refers to. The dotted line between word and object indicates that the word does not point directly to an object, but only via its interpretation.


This triangle helps us to distinguish between two different theoretical viewpoints regarding the acquisition of terminology: the onomasiological perspective and the semasiological perspective.

— The onomasiological perspective starts with identifying a concept and its characteristics and then determines which term is used to designate that concept.
— The semasiological perspective, on the other hand, starts from the linguistic form and asks how it is understood.

While the early approaches to terminology were onomasiological, the more recent corpus-driven approaches are by definition semasiological. The different theories of terminology are described in Cabré Castellví (2003) and Bowker (2008). Wright (1997) views the process of terminology management as an iterative one in which the semasiological and onomasiological approaches interact.

Terminology extraction can be seen as an important step in a larger process of corpus compilation, terminology extraction and terminology management (Gamper & Stock, 1999; Macken et al. 2013):

— First, a special language corpus is collected. Sager (1993) defines these special languages as languages '(…) whose use is restricted to specialists for conceptualization, classification and communication in the same or closely related fields'.
— In the terminology extraction phase, terms are identified in a text. In the case of multilingual terminology extraction, the corresponding translations are retrieved. The extracted terms and their translations can be stored in bilingual glossaries, which are already a valuable resource for technical translators.
— If the aim is the creation of a term bank, the extracted terms are structured in concept-oriented databases in the terminology management phase. Each database entry represents a concept and contains all extracted term variants (including synonyms and acronyms) in several languages.

5.  TERMINOLOGY EXTRACTION

Automatic term recognition (ATR) is crucial in many domains of natural language processing, including automatic translation, text indexing, the automatic construction and enhancement of lexical knowledge bases, etc. But what is Natural Language Processing? NLP is a sub-discipline of artificial intelligence aiming at the automatic modeling of natural language. In order to do so, linguistic ambiguity has to be resolved at a variety of levels: at the lexical level through automatic lemmatization and part-of-speech tagging, at the syntactic level by means of parsing, at the semantic level by developing techniques for named entity recognition and word sense disambiguation, at the pragmatic level through automatic coreference resolution, etc.

In this modeling of natural language, two different methodologies can be distinguished: rule-based versus corpus-based approaches. In a rule-based or deductive NLP approach, a set of rules is written to model a certain NLP task. The inductive, corpus-based approaches take another perspective and start from corpora labeled with the type of information to be learned (e.g. POS tags). These annotations serve as the basis for inducing a learning or statistical NLP approach.

In the research on automatic term extraction, too, two different directions have mainly been taken. On the one hand, the linguistic or rule-based approaches, as proposed by Dagan and Church (1994), Ananiadou (1994), Fukuda et al. (1998) and others, make use of hand-coded rules and look for specific (mostly language-specific) linguistic structures that match a number of predefined syntactic patterns (e.g. Noun Noun, Noun Preposition Noun, Adjective Noun). Term formation patterns for English can be found in Justeson and Katz (1995) and Quin (1997); patterns for French in Daille (1996). In order to determine the morpho-syntactic patterns, linguistically-based systems apply language-specific part-of-speech taggers. As a result, linguistically-based terminology extraction programs are always language-dependent.
Part-of-speech tagging is the process of automatically assigning the contextually appropriate morpho-syntactic category to a given word, as exemplified for the following three excerpts in English, French and German:

The         active     substance  is    a           quinolinone  derivative
determiner  adjective  noun       verb  determiner  noun         noun

poser  la          fermeture  d'           embout  de           brancard
verb   determiner  noun       preposition  noun    preposition  noun

den         Abschluss  des         Längsträgeransatzes  einbauen
determiner  noun       determiner  noun                 verb

In the first sentence, the above-mentioned pattern “adjective noun” would yield ‘active substance’ as a term, whereas the “noun noun” pattern yields ‘quinolinone derivative’ as a term. In the second example, the pattern “noun preposition noun” will lead to the selection of ‘embout de brancard’ or ‘fermeture d’embout’ as terms.

As opposed to the linguistic approaches, the statistical approaches are language-independent and are based on quantifiable characteristics of terms. They extract terms using measures of “unithood” and/or “termhood” to detect candidate terms. Unithood indicates the collocation strength of the units of a term, whereas termhood refers to the association strength of a given term to a domain concept. Well-known statistical filters —often borrowed from the domain of information retrieval— are (Kageura and Umino 1996):

—  filters that measure the termhood (Drouin 2006) or “degree to which a linguistic unit is related to domain-specific context”, e.g. TF-IDF and Log-Likelihood;

—  filters that measure the unithood or “degree of strength or stability of syntagmatic combinations or collocations”, such as Mutual Expectation.

One possible measure to calculate termhood is Log-Likelihood (LL). Both Daille (1995) and Kilgarriff (2001) have determined empirically that LL is an accurate measure to find the most “surprisingly” frequent words in a corpus, words that also correspond fairly well to what humans might associate with the distinctiveness of terms. In order to calculate LL, a frequency list is made for each corpus (the domain-specific and the background corpus). Then, log-likelihood is calculated for each word in the frequency lists. This is done by constructing a contingency table as shown in Table 1, where c represents the number of words in the first corpus, while d corresponds to the number of words in the second corpus. The values a and b are called the observed values (O).

                             First Corpus    Second Corpus    Total
Frequency of word            a               b                a + b
Frequency of other words     c - a           d - b            c + d - a - b
Total                        c               d                c + d

Table 1.  Contingency table to calculate Log-Likelihood

In the formulas below, Ni corresponds to the total number of words in corpus i, whereas the “observed value” Oi corresponds to the actual frequency of the word under consideration in corpus i. For each corpus i, the observed value Oi is used to calculate the expected value Ei according to the following formula:

Ei = Ni * (O1 + O2) / (N1 + N2)

Applying this formula to our contingency table (with N1 = c and N2 = d) results in:

E1 = c * (a + b) / (c + d)        E2 = d * (a + b) / (c + d)

The resulting expected values can then be used for the calculation of the Log-Likelihood:

LL = 2 * Σi Oi * ln(Oi / Ei)

which equates to:

LL = 2 * (a * ln(a / E1) + b * ln(b / E2))

The formulas for the calculation of both the expected values (E) and the Log-Likelihood have been described in detail by Rayson and Garside (2000). Words with a LL value above a predefined threshold are considered terms. The word “soupapes”, for example, occurring in a domain-specific automotive corpus, will most probably be considered a term. We can illustrate this with the following numbers: let us assume we have a domain-specific automotive corpus consisting of 14,384 words (c in Table 1) and a more general-domain corpus such as the newspaper corpus “Le Monde” (d in Table 1) containing 1,514,306 words. The word “soupapes” occurs 22 times in the first corpus (a in Table 1) and 0 times in the second corpus (b in Table 1), which leads to E1 = 0.207005998600109 and E2 = 21.7929940013999. The log-likelihood value of “soupapes” is: 2 * (102.65309913651 + 0), thus 205.30619.

o-o-o-o-o-o-o

Exercise 1

Calculate the LL value of “demande” in the corpora mentioned above. The word occurs 5 times in the domain-specific automotive corpus, as opposed to 1260 times in the Le Monde corpus.

o-o-o-o-o-o-o

Another possible measure to calculate termhood is TF-IDF (term frequency inverse document frequency). It is widely used in Information Retrieval to isolate useful keywords in document collections. TF-IDF (Salton 1989) combines two hypotheses: a search term is of more value when it occurs in few documents (IDF), and distinctive terms have a high frequency in a given document (TF). In order to determine domain-specific terms in a given document (collection), TF-IDF is calculated for all words in that document. To calculate the IDF, large reference corpora are collected for the language under consideration (such as the Google n-gram corpus or the BNC corpus for English). This should enable us to extract domain-specific terms that have much lower frequencies in background reference corpora.
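The expected-value and Log-Likelihood formulas, applied to the “soupapes” example above, can be sketched as follows (a minimal illustration; `log_likelihood` is a hypothetical helper, not taken from any library):

```python
import math

def log_likelihood(a, b, c, d):
    """LL for a word occurring a times in a corpus of c words and
    b times in a corpus of d words (cf. the contingency table in Table 1)."""
    e1 = c * (a + b) / (c + d)          # expected value in the first corpus
    e2 = d * (a + b) / (c + d)          # expected value in the second corpus
    term = lambda o, e: o * math.log(o / e) if o > 0 else 0.0
    return 2 * (term(a, e1) + term(b, e2))

# "soupapes": 22 hits in 14,384 words vs. 0 hits in 1,514,306 words
print(round(log_likelihood(22, 0, 14_384, 1_514_306), 5))   # 205.3062
```

Note the convention that a zero observed frequency contributes 0 to the sum, which is what makes the second term vanish in the “soupapes” example.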


Given a document collection D, a word w, and an individual document d in D, the TF-IDF weight of w in d is calculated as:

tf-idf(w, d) = fw,d * log(|D| / fw,D)

where fw,d equals the number of times w appears in d (term frequency), |D| is the size of the corpus and fw,D equals the number of documents in which w appears in D (inverse document frequency) (Berger et al. 2000).

All words with a TF-IDF value above a predefined threshold are considered domain-specific terms. In other words, TF-IDF assigns to a word w a weight in document d that is:

•  highest when w occurs many times within a small number of documents (thus lending high discriminating power to those documents);

•  lower when w occurs fewer times in a document, or occurs in many documents (thus offering a less pronounced relevance signal);

•  lowest when the word w occurs in virtually all documents.

Let us consider the following example. We have a small text of 500 words (1 document) in which all words are POS-tagged (see above for a short description of part-of-speech tagging) and chunked. Chunking implies that words are grouped in phrases, such as prepositional phrases (PP), noun phrases (NP), verb phrases (VP) and adverbial phrases (ADVP). Through chunking, potential multiword terms can already be determined. From this 500-word text, let us consider the following excerpt: “General layout of (a) Cutter Suction Dredger”. If we compare the frequency of the chunks in our domain-specific text of 500 words with a large background corpus of >50 million words (e.g. the Google Terabyte corpus), then we have to run through the following steps:

Step 1: calculate the term frequency (TF) of a word in a particular document, and divide it by the total number of words in that document.

Step 2: calculate the inverse document frequency (IDF). This is done by first dividing the total number of documents by the number of documents that contain the keyword in question. Then, we take the logarithm of the result.

Step 3: multiply the term frequency (TF) by the inverse document frequency (IDF) to get the result.
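The three steps can be sketched as follows (a minimal illustration; `tf_idf` is a hypothetical helper, and the counts are the ones used in the worked example of this section):

```python
import math

def tf_idf(freq_in_doc, doc_length, n_docs, n_docs_with_term):
    tf = freq_in_doc / doc_length                        # step 1: term frequency
    idf = math.log10(n_docs / max(n_docs_with_term, 1))  # step 2 (use 1 for unseen terms)
    return tf * idf                                      # step 3

# 'cutter suction dredger': 2 hits in a 500-word document,
# found in none of the 10,000 background documents
print(tf_idf(2, 500, 10_000, 0))                    # 0.016
# 'of': 40 hits, found in 9,500 of the 10,000 background documents
print(round(tf_idf(40, 500, 10_000, 9_500), 5))     # 0.00178
```

For ‘of’, the sketch yields about 0.00178 rather than the 0.00176 given in the worked example below, because the text there rounds the IDF to 0.022 before multiplying.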

Chunk                     Frequency in           Frequency in background   Number of documents in background
                          domain-specific        general domain corpus     general domain corpus in which the
                          document (500 words;   (>50 million words)       term occurs (10,000 documents)
                          1 document)
general layout            3                      15                        13
of                        40                     509,801                   9,500
cutter suction dredger    2                      0                         0

Let us use the phrase ‘cutter suction dredger’ for an example calculation. In the domain-specific document of 500 words, this sequence of words occurs 2 times. The term frequency value is thus 2/500 = 0.004. None of the other 10,000 documents contains the chunk in question. The IDF value is therefore: log(10000/1) or 4. In the last step, we multiply the TF by the IDF to get the result: 0.004 * 4 = 0.016. If we calculate TF-IDF for ‘of’, we get the following results: 40/500 = 0.08 as term frequency and log(10000/9500) = 0.022 as inverse document frequency, leading to a TF-IDF value of 0.00176. The result, viz. that the weight for ‘cutter suction dredger’ (0.016) is higher than the weight for ‘of’ (0.00176), is what we expected, given that the word sequence ‘cutter suction dredger’ appears less often in the English language than the word ‘of’. Although ‘cutter suction dredger’ (2 times) appears less frequently in the domain-specific document than the word ‘of’ (40 times), it is more significant to this document.

o-o-o-o-o-o-o

Exercise 2

Calculate the TF-IDF value of ‘general layout’.

o-o-o-o-o-o-o

As a measure of unithood, Mutual Expectation values can be calculated for n-grams. N-grams are sequences of n words, with n ranging from 1 to sentence or document length. As it does not make sense to investigate for extremely large n-grams whether they constitute a term or not, the n-gram length is often restricted to 4, 5 or 6. Dias and Kaalep (2003) developed the Mutual Expectation (ME) measure to test the cohesiveness between words in a multiword term, i.e. the group of words forming a multiword term should occur together more frequently than expected by chance. In order to calculate the Mutual Expectation values, the n-gram frequencies are calculated on the basis of the domain-specific corpus and used to derive the Normalized Expectation (NE) values for all multiword terms: the NE of an n-gram is its frequency divided by the average frequency of the (n-1)-grams obtained by deleting one of its components.

This Normalized Expectation expresses the cost, in terms of cohesiveness, of losing one component of the n-gram. When the cohesiveness of the multiword is very high, the frequency of the n-gram minus one component is expected to be low relative to the frequency of the full n-gram, and the resulting Normalized Expectation value will be high. As an example, take two bigrams, namely “protease inhibitors” (with a Mutual Expectation score of 9.3) and “the wart” (with a Mutual Expectation score of 0.0001).


In the bigram “the wart”, it is already intuitively clear that the resulting unigram “the”, obtained by deleting the last word of the bigram (“wart”), will be much more frequent than the resulting unigram “protease” obtained by deleting the last word in the bigram “protease inhibitors”. As simple n-gram frequency appears to be a valid criterion for multiword term identification (Daille 1995), the final Mutual Expectation values are obtained by multiplying the Normalized Expectation and the relative frequency of the multiword. Here too, a threshold is used to differentiate between terms and non-terms.

Fulford (2001, p. 261) pointed out that “terms do not tend to possess linguistic features that distinguish them clearly and decisively from non-terms”. Hence, we can expect that linguistically-based approaches tend to overgenerate. The statistical approaches, in turn, tend to produce some noise. Moreover, frequency-based systems will not be able to detect newly-coined terms with low frequencies. Hence, most state-of-the-art systems use hybrid approaches that combine linguistic and statistical information. Different methods and systems are described and compared in Kageura and Umino (1996), Cabré Castellví et al. (2001), and Zhang et al. (2008).

Bilingual term extraction is faced with the additional problem of finding translation equivalents in parallel texts. There is a long tradition of research into bilingual terminology extraction (Gaussier 1998; Kupiec 1993). In most systems, candidate terms are first identified monolingually. In a second step, the translation candidates are extracted from the bilingual corpus on the basis of word alignments or co-occurrence information. Interesting contributions to this domain have been made by Itagaki et al. (2007), Vintar (2010), Macken et al. (2013), among others.

6.  STORING TERMS

Storing terms can be done in a simple text file or a spreadsheet. However, dedicated terminology management systems (TMS), such as SDL Multiterm, offer much more flexibility. The most fundamental function of a TMS is that it acts as a repository for consolidating and storing terminological information for use in future translation projects.


The amount of detail stored in a TMS can also differ considerably. Although the flat term lists that result from terminology extraction are often sufficient for translators to identify an appropriate translation for a given domain-specific term, it is also possible to enrich term records with information on subject field, definition, context, source, synonyms, hypernyms, hyponyms, etc., as exemplified in the following screenshot from Multiterm, in which domain information (“medical”), a definition and synonymy information are added to the bilingual correspondence between “eyelid conditioning” and “ooglidconditionering”.
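An enriched term record of this kind can be represented, in a much simplified form, roughly as follows. This is a hypothetical structure for illustration only (Multiterm's actual data model is XML-based and far richer; the definition and synonym strings below are invented examples):

```python
from dataclasses import dataclass, field

# Hypothetical, simplified term record: a bilingual correspondence
# enriched with domain, definition and synonymy information.
@dataclass
class TermRecord:
    source_term: str
    target_term: str
    domain: str = ""
    definition: str = ""
    synonyms: list = field(default_factory=list)

record = TermRecord(
    source_term="eyelid conditioning",
    target_term="ooglidconditionering",
    domain="medical",
    definition="conditioning of the eyelid reflex",   # illustrative wording
    synonyms=["eyeblink conditioning"],               # illustrative synonym
)
print(record.domain)   # medical
```

A flat term list corresponds to just the first two fields; the remaining fields are what a TMS adds over a spreadsheet.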

7.  CONCLUDING REMARKS

In this chapter, we discussed the basic principles of terminology and we elaborated on the different possible methodologies for extracting terms from monolingual and multilingual documents. Although much progress has been made in this domain, such as the development of user-friendly interfaces for terminology management and of accurate systems for extracting company-specific or domain-specific dictionaries, there is still room for improvement. As the accuracy of terminology extraction relies on the quality and the size of the underlying monolingual and multilingual corpora, current systems often suffer from the lack of corpora for specialized domains or specific language pairs. Therefore, more recent research methods have been proposed that try to exploit comparable corpora, i.e. similar texts in different languages that are not translations.

8.  FURTHER RECOMMENDED READINGS

Cabré Castellví, T.: 2003. “Theories of terminology. Their description, prescription and explanation”. Terminology, 9(2), 163-199.

Kageura, K. and B. Umino: 1996. “Methods of automatic term recognition: a review”. Terminology, 3(2), 259-289.

Drouin, P.: 2006. “Termhood experiments: quantifying the relevance of candidate terms”. In Picht, H. (ed.), Modern Approaches to Terminological Theories and Applications, Linguistic Insights, 36, 375-391.

Ha, L. A.: 2007. Advances in Automatic Terminology Processing: Methodology and Application in Focus. Wolverhampton: University of Wolverhampton.

9.  REFERENCES

Ananiadou, S.: 1994. “A methodology for automatic term recognition”. In Proceedings of the 15th Conference on Computational Linguistics. 1034-1038. Kyoto, Japan.

Berger, A., R. Caruana, D. Cohn, D. Freitag and V. Mittal: 2000. “Bridging the lexical chasm: Statistical approaches to answer finding”. In Proceedings of the 23rd International Conference on Research and Development in Information Retrieval. 192-199. Athens, Greece.

Bowker, L.: 2008. “Terminology”. In Baker, M. & G. Saldanha (eds.), Routledge Encyclopedia of Translation Studies. 286-290. London, New York: Routledge.

Cabré Castellví, T.: 2003. “Theories of terminology. Their description, prescription and explanation”. Terminology, 9(2), 163-199.

Cabré Castellví, T., R. E. Bagot & J. V. Palatresi: 2001. “Automatic term detection. A review of current systems”. In Bourigault, D., C. Jacquemin & M.-C. L'Homme (eds.), Recent advances in computational terminology. 149-166. Amsterdam: John Benjamins.

Dagan, I. and K. Church: 1994. “Termight: identifying and translating technical terminology”. In Proceedings of the 4th Conference on Applied Natural Language Processing. 34-40. Stuttgart, Germany.

Daille, B.: 1995. “Combined approach for terminology extraction: lexical statistics and linguistic filtering”. Tech. Rep. 5, Lancaster University: UCREL.

Daille, B.: 1996. “Study and implementation of combined techniques for automatic extraction of terminology”. In Klavans, J. L. & P. Resnik (eds.), The balancing act: combining symbolic and statistical approaches to language. Massachusetts: MIT Press.

Dias, G. and H. Kaalep: 2003. “Automatic extraction of multiword units for Estonian: Phrasal verbs”. Languages in Development, 41, 81-91.

Drouin, P.: 2006. “Termhood experiments: quantifying the relevance of candidate terms”. In Picht, H. (ed.), Modern Approaches to Terminological Theories and Applications, Linguistic Insights, 36, 375-391.

Dunning, T.: 1993. “Accurate methods for the statistics of surprise and coincidence”. Computational Linguistics, 19(1), 61-74.

Fukuda, K., T. Tsunoda, A. Tamura and T. Takagi: 1998. “Toward information extraction: Identifying protein names from biological papers”. In Proceedings of the Pacific Symposium on Biocomputing. 707-718. Maui, Hawaii.

Fulford, H.: 2001. “Exploring terms and their linguistic environment. A domain-independent approach to automated term extraction”. Terminology, 7(2), 259-279.

Gamper, J. & O. Stock: 1999. “Corpus-based terminology”. Terminology, 5(2), 147-159.

Gaussier, E.: 1998. “Flow Network Models for Word Alignment and Terminology Extraction from Bilingual Corpora”. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (Coling-ACL). 444-450. Université de Montréal, Montreal, Quebec, Canada.

Ha, L. A.: 2007. Advances in Automatic Terminology Processing: Methodology and Application in Focus. Wolverhampton: University of Wolverhampton.

Itagaki, M., T. Aikawa & X. He: 2007. “Automatic Validation of Terminology Consistency with Statistical Method”. In Proceedings of the Machine Translation Summit XI. 269-274. Copenhagen, Denmark.

Justeson, K. & S. Katz: 1995. “Technical terminology: some linguistic properties and an algorithm for identification in text”. Natural Language Engineering, 1(1), 9-27.


Kageura, K. and B. Umino: 1996. “Methods of automatic term recognition: a review”. Terminology, 3(2), 259-289.

Kilgarriff, A.: 2001. “Comparing corpora”. International Journal of Corpus Linguistics, 6(1), 1-37.

Kupiec, J.: 1993. “An algorithm for finding noun phrase correspondences in bilingual corpora”. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL). Columbus, Ohio, United States.

Macken, L., E. Lefever and V. Hoste: 2013. “TExSIS: Bilingual Terminology Extraction from Parallel Corpora Using Chunk-based Alignment”. Terminology, 19(1).

Quin, D.: 1997. “Terminology for Machine Translation: a Study”. Machine Translation Review, 6, 9-21.

Rayson, P. and R. Garside: 2000. “Comparing corpora using frequency profiling”. In Proceedings of the Workshop on Comparing Corpora, 38th Annual Meeting of the Association for Computational Linguistics (ACL 2000). 1-6. Hong Kong, China.

Sager, J. C.: 1990. A Practical Course in Terminology Processing. Amsterdam: John Benjamins.

Sager, J. C.: 1993. Language Engineering and Translation: Consequences of Automation. Amsterdam: John Benjamins.

Salton, G.: 1989. Automatic text processing: the transformation, analysis and retrieval of information by computer. Reading, MA: Addison-Wesley.

Vintar, S.: 2010. “Bilingual term recognition revisited. The bag-of-equivalents term alignment approach”. Terminology, 16(2), 141-158.

Wright, S. E.: 1997. “Term selection: the initial phase of terminology management”. In Wright, S. E. & G. Budin (eds.), Handbook of terminology management. 13-23. Amsterdam: John Benjamins.

Zhang, Z., J. Iria, C. Brewster & F. Ciravegna: 2008. “A comparative evaluation of term recognition algorithms”. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC). Marrakech, Morocco.

10.  KEYS TO EXERCISES

Exercise 1

Suggested answer: Given: c = 14,384 words and d = 1,514,306 words. The word “demande” occurs 5 times in the first corpus (a in Table 1) and 1260 times in the second corpus (b in Table 1), which leads to E1 = 11.9028449195062 and E2 = 1253.09715508049. The log-likelihood value of “demande” is: 2 * (-4.33669763988523 + 6.9218227034579), thus 5.17025012714533.

o-o-o-o-o-o-o

Exercise 2

Suggested answer: The word sequence “general layout” occurs 3 times in the domain-specific corpus of 500 words. The TF value is thus 3/500 = 0.006. As Inverse Document Frequency, we obtain log(10000/13) = 2.886, leading to a TF-IDF value of 0.017316.

o-o-o-o-o-o-o
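Both suggested answers can be checked with a short script reusing the formulas from the chapter:

```python
import math

# Exercise 1: Log-Likelihood of "demande" (5 vs. 1260 hits; c = 14,384, d = 1,514,306)
a, b, c, d = 5, 1260, 14_384, 1_514_306
e1 = c * (a + b) / (c + d)
e2 = d * (a + b) / (c + d)
ll = 2 * (a * math.log(a / e1) + b * math.log(b / e2))
print(round(ll, 5))        # 5.17025

# Exercise 2: TF-IDF of "general layout" (3 hits in 500 words; in 13 of 10,000 documents)
tfidf = (3 / 500) * math.log10(10_000 / 13)
print(round(tfidf, 6))     # 0.017316
```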


This glossary is a compilation of the main terms that appear in the book.

Ablaut: see alternation.

Abstraction: see reduction.

Acronym: lexical item composed of the initial letters of a term or proper name.

Affix: also called bound morpheme, as opposed to free morphemes, it is a bound form which is added to the base or stem of a word to make a different word, tense, etc. Bound morphemes only form words when attached to other morphemes. There are four types of affixes: prefixes, suffixes, infixes and circumfixes (the latter two are less frequent, and are not used in English). Some examples of prefixes are re- (reappear), dis- (disappoint), un- (unbelievable), etc.; and of suffixes: -ing, -ful, etc.

Affixation: the word formation process of attaching an affix to a base or stem to produce a new word or a morphological word form (Sterkenbug 2003).

Allomorph: a variant of a morpheme. When we find a group of all versions of one morpheme, we can use the prefix allo- (= one of a closely related set) and describe them as allomorphs of that morpheme. “Take the morpheme ‘plural.’ Note that it can be attached to a number of lexical morphemes to produce structures like ‘cat + plural,’ ‘bus + plural,’ ‘sheep + plural,’ and ‘man + plural.’ In each of these examples, the actual forms of the morphs that result from the morpheme ‘plural’ are different. Yet they are all allomorphs of the one morpheme. So, in addition to /s/ and /əz/, another allomorph of ‘plural’ in English seems to be a zero-morph because the plural form of sheep is actually ‘sheep + ø.’ When we look at ‘man + plural,’ we have a vowel change in the word... as the morph that produces the ‘irregular’ plural form men.” (George Yule, The Study of Language, 4th ed. Cambridge Univ. Press, 2010.)


Alternation: the phenomenon by which a phoneme or morpheme exhibits variation in its (morpho-)phonological realization.

Ambiguity: the condition of a word or phrase that can be understood in more than one way. Ambiguity is the major problem of terminology and lexicology in the computerization of language.

Analytical definition: a definition that analyzes the meaning of a lexical item according to genus proximum, that is, the superordinate classifying lexical item, and differentia, that is, the distinctive constituent features. See lexicographic definition.

Annotation: the procedure of inserting tags into a text in order to add information resulting from the analysis of the text. The tagging by word classes uses the same conventions as mark-up but has no limits on the kind of information that is recorded. Sometimes, annotation is used as a general term to include mark-up (Sterkenbug 2003).

Arguments: the elements (entities or names) that are linked by a relation (Sterkenbug 2003).

Argument structure: the logical concept that links a number of arguments to a certain relation. Its most important feature is the number of arguments that this relation can take. The argument structure can also be called predicate structure or just predicate (Sterkenbug 2003).

Automatic Term Recognition (ATR): “Following the growing interest in “corpus-based” approaches to computational linguistics, a number of studies have recently appeared on the topic of automatic term recognition or extraction. Because a successful term-recognition method has to be based on proper insights into the nature of terms, studies of automatic term recognition not only contribute to the applications of computational linguistics but also to the theoretical foundation of terminology. Many studies on automatic term recognition treat interesting aspects of terms, but most of them are not well founded and described.” (Kageura and Umino 1996: Methods of automatic term recognition: A review, in Terminology 3(2).)
Back formation: the word formation process by which a word is created from another by removing an element.

Base: the part of a lexeme consisting of a root or stem and to which an affix can be added.

Blending: the word formation process in which a new word is formed by fusing arbitrary elements of two other words (Sterkenbug 2003).

272

Glossary of terms

Borrowing: the process of taking over words, constructions or morphological elements from another language.

Bound morpheme: see affix.

Broader term: term that encompasses a wider “scope” for the meaning of the terms included in a semantic field.

Canonical form: the morphological word form chosen as the headword or lemma of an entry (Sterkenbug 2003).

Chunk: unit of information.

Chunking: special type of parsing that is involved in term extraction.

Citation form: see canonical form.

Clipping: the process of word formation consisting of abbreviating an existing form in order to create a new word form (Sterkenbug 2003).

Complex stem: a word consisting of a simple word, a compound word or a base and one or more derivational elements (Sterkenbug 2003).

Componential analysis: the method of semantic description by which the sense of a lexeme is differentiated from that of other lexemes by a set of semantic features, markers or components.

Compounding: the word-formation process in which two or more bases are combined to form a new lexeme.

Collocation: the habitual juxtaposition of a particular word with another word or words with a frequency greater than chance.

Concatenation: in formal language theory and computer programming, string concatenation is the operation of joining two character strings end-to-end. In morphology, it is the combination of morphemes into a linear sequence.

Concept: an idea of a class of objects, a general notion or an abstract principle which a lexeme is designated to express (Sterkenbug 2003).

Concordance: an alphabetic or otherwise ordered index of every occurrence of all the lexical units in a corpus with a reference to the passage or passages in which each indexed lexical unit appears (Sterkenbug 2003).

Constructional template: constructional templates are inspired by the work of construction-based approaches like Goldberg (1995, 2005). A grammar consists of an inventory of constructions, which are in turn defined as form-meaning pairings. Lexical and constructional templates share the view that both are based on aktionsart distinctions. (Mairal Usón and Ruiz de Mendoza, How to design lexical and constructional templates, a step by step guide: http://www.lexicom.es/drupal/files/templateDesign.pdf)


Conversion: the word formation process by which a lexeme of one syntactic class is changed to another word class without undergoing any modification.

Corpus: a collection of language texts in electronic form, selected according to external criteria to represent, as far as possible, a language or language variety as a source of data for linguistic research. Bilingual or multilingual corpora include data from more than one language, and are usually used in translation and in terminology, as well as in linguistics.

Data model: the theoretical representation of the organization of data in a database.

Definiendum: the defined word; the lexeme to be defined.

Definiens: the definition; the defining part of the definition that explains the meaning of a lexeme.

Definition: statement explaining the meaning of a lexeme (Sterkenbug 2003).

Derivation: the type of word formation process by which affixes are used to create new words. In derivation there is only one free morpheme.

Derivational Morphology: the branch of Morphology concerned with the use of morphemes as word formation elements.

Derivative: a word that is formed by adding one or more derivational affixes to a stem or root.

Descriptive definition: the type of definition that contains a systematic, objective, and explicit account of the meaning and usage of a lexeme, of its collocations and selection restrictions and other syntactic patterns (Sterkenbug 2003).

Disambiguation: the process of removing multiple meanings from a lexeme. This process is necessary for computerized linguistic applications, such as terminologies or computer translation systems.

Domain label: a label indicating that the usage of a lexical unit is restricted to a particular area of knowledge or activity (Sterkenbug 2003).

Encyclopedic information: factual information about the world: persons, things, topics, etc.
Endocentric (constructions): this distinction in linguistics dates back to Bloomfield's work (1930), and it is possible in constituency grammars (as is the case of generative grammars and later on of functional grammar), where there is a node that provides the head of the construction with the function of head, as opposed to dependency grammars, where all constructions are necessarily endocentric. Prandi (2004) takes the notion of endocentricity/exocentricity from constituency grammars, which defines constructions on the basis of the presence vs. absence of a head constituent. As related to concepts, an endocentric concept would be related in form to a specific language's lexical structures.


Exocentric (constructions): Prandi (2004) contrasts the lexicon of natural languages, containing concepts whose identity critically depends on specific lexical structures and that he defines as endocentric concepts, with those concepts firmly rooted in an independent categorization of shared things and experiences, easily transferable across languages, which can be called exocentric concepts. Prandi (ibidem) takes the notion of endocentricity/exocentricity from constituency grammars, which defines constructions on the basis of the presence vs. absence of a head constituent. As related to concepts, an exocentric concept would be more cross-linguistic, precisely due to its greater independence from specific linguistic categorizations.

Extensional definition: the definition that consists of enumerating all the members of the class comprised by the definiendum (Sterkenbug 2003).

Frame: a structural environment within which all the information about a lexeme is housed; a conceptual structure with slots and fillers that is supposed to reflect the stereotypical knowledge that we have about a concept (Sterkenbug 2003).

Free morpheme: morphemes can be free or bound. Free morphemes are those occurring independently, such as ask or drive. They are stand-alone words. Bound morphemes are those that have to be attached to other morphemes in order to exist.

Functional change: see conversion.

Genus: a group of lexemes within a family that consists of a number of similar or closely related species.

Gloss: an explanatory lexeme clarifying the meaning of an unfamiliar word, or a note made in the margin of a text or between lines, explaining and translating a difficult lexical unit in a text (Sterkenbug 2003).

Grammatical category: a property of items within the grammar of a language; it has a number of possible values (sometimes called exponents, or grammemes), which are normally mutually exclusive within a given category. Examples of grammatical categories are tense, number and gender. A distinction should be made between these grammatical categories (tense, number, etc.) and lexical categories, which are closely synonymous with the traditional parts of speech (noun, verb, adjective, etc.), or more generally syntactic categories.

Grammatical meaning: the meaning that expresses a specific grammatical function.

Grammaticalization: the process of using words or parts of words to mark the grammatical concepts which are relevant in a particular language.


On the architecture of words. Applications of Meaning Studies

Head: see base.

Heading: see lemmatization.

Headword: see lemma.

Hierarchical structure: the structure of an entry with two or more levels of subordination expressing the relationships between core senses and subsenses, usually indicated by Arabic numerals, Roman numerals, various characters, typefaces or specific symbols.

Idiosyncratic: in Linguistics, this term is used to refer to items or phenomena that are particular to one (type of) language; it is the opposite of universal. For example, the fact that the language sign is conventional is a universal feature of languages, whereas the fact that some languages mark gender is an idiosyncratic phenomenon.

i-mutation: this phonological phenomenon is also known as umlaut, front mutation, i/j-mutation and i/j-umlaut. It is an important type of sound change, in which a back vowel is fronted, and/or a front vowel is raised, if the following syllable contains /i/, /ī/ or /j/ (a voiced palatal approximant, sometimes called yod, the sound of English y in yes).

Information model: a kind of ontology. In software engineering it is understood as a representation of concepts and of the relationships, constraints, rules, and operations that specify data semantics for a chosen domain of discourse. Typically it specifies relations between kinds of things, but it may also include relations with individual things. It can provide a sharable, stable, and organized structure of information requirements or knowledge for the domain context.

Irregular allomorphy: see root allomorphy.

Language applications: see linguistic applications.

Language for Specific Purposes: this term is usually used as opposed to Language for General Purposes. It is the language variation across a particular subject field, for instance, the language of medicine or the language of engineering. It is used in the field of applied linguistics to refer to areas of language that need education and training, as well as in the field of terminology to identify the degree of specialization of certain specific texts (the more specific a field, the more terms will be used).

Lemma: a headword or the canonical form of a word in a dictionary (Sterkenburg 2003). See canonical form.

Lemmatization: the process of removing inflectional elements from a lexical item, thus changing it to its canonical form (Sterkenburg 2003).

Lexeme: the set of all the forms that have the same base meaning.
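Lemmatization as defined above can be sketched as a toy suffix-stripping function. The suffix rules and the exception table below are hypothetical illustrations, not an algorithm from this book; real lemmatizers rely on full morphological dictionaries, but the sketch shows the basic move from inflected form to canonical form.

```python
# A toy lemmatizer for regular English inflection. Both tables are
# illustrative and deliberately tiny.
IRREGULAR = {"went": "go", "better": "good", "people": "person"}
SUFFIXES = [("ies", "y"), ("sses", "ss"), ("ing", ""), ("ed", ""), ("s", "")]

def lemmatize(form):
    """Strip an inflectional ending to approximate the lemma
    (canonical dictionary form) of a word form."""
    if form in IRREGULAR:            # suppletive forms need a lookup table
        return IRREGULAR[form]
    for suffix, replacement in SUFFIXES:
        # Require a plausible remaining stem before stripping.
        if form.endswith(suffix) and len(form) > len(suffix) + 2:
            return form[: -len(suffix)] + replacement
    return form
```

Note that such surface rules overgenerate (e.g. driving would yield driv); this is why practical lemmatization validates candidate lemmas against a lexicon.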


Glossary of terms

Lexicalization: the process of naming a concept.

Lexical entry: the entry of a word in a dictionary or in any other kind of lexicographical tool.

Lexical item: a word that forms part of a lexicon. See word.

Lexical representation: lexical representation is, after all, meaning representation. It must be noted, however, that lexical meaning is just one part of the whole meaning transmitted in verbal communication, where the total content of information transmitted is usually more than the purely lexical content. Gibbon (1998) –see reference in chapter 3– explains that principles of representation need to be sufficiently well-defined to permit computational implementations. On the other hand, lexical information of different kinds tends to require many different kinds of representation, from sequences of orthographic units through phonological lattices of parallel and sequential sound events, to complex syntactic frames and semantic networks. Gibbon is also concerned with practical considerations such as “typability” at the computer keyboard and how it imposes further constraints.

Lexical semantics: a subfield of linguistic semantics. It is the study of how and what the words of a language denote (Pustejovsky, 1995). Words may either be taken to denote things in the world or concepts, depending on the particular approach to lexical semantics.

Lexicography: the art and science of compiling dictionaries (Sterkenburg, 2003).

Lexicology: the study of the morphology, meaning, etymology, paradigmatic and syntagmatic relations, and use of lexical items (Sterkenburg, 2003). Also, Gibbon (1998) states: “Lexicology, on the other hand, is the branch of descriptive linguistics concerned with the linguistic theory and methodology for describing lexical information, often focusing specifically on issues of meaning. Traditionally, lexicology has been mainly concerned with ‘lexis’, i.e. lexical collocations and idioms, and lexical semantics, the structure of word fields and meaning components and relations. Until recently, lexical semantics was conducted separately from study of the syntactic, morphological and phonological properties of words, but linguistic theory in the 1990s has gradually been integrating these dimensions of lexical information”.

Lexicon: 1. A work of reference listing and explaining the meanings and uses of lexical items. 2. The set of all the lexical items of a language (Sterkenburg, 2003).

Lexicon building: the process of constructing a lexicon under certain specifications.


Log-likelihood: a method used to compare two corpora. As in Rayson and Garside (2000), it is a simple procedure of corpus comparison, where frequency lists are produced after part-of-speech tagging is applied to each corpus, and expected values are calculated. Ref.: Rayson, P. and R. Garside (2000). Comparing Corpora using Frequency Profiling. In Proceedings of the Workshop on Comparing Corpora. Association for Computational Linguistics. Stroudsburg, PA, USA.

LSP: see Language for Specific Purposes.

Meaning postulates: in semantics, a meaning postulate is the postulate whereby lexical items can be defined in terms of relations with other lexical items. The classic example is: bachelor = unmarried male.

Meaning representation: see lexical representation.

Morph: see allomorph.

Morphology: the subfield of Linguistics dealing with word grammar.

Mutation: see i-umlaut.

Narrower term: the more specific term within a certain subject field. The narrower term can hold a relationship of PART OF or of ISA with its broader term.

n-grams: in the fields of computational linguistics and probability, an n-gram is a contiguous sequence of n items from a given sequence of text or speech. They are typically collected from a text or speech corpus. An n-gram of size 1 is referred to as a “unigram”; size 2 is a “bigram” (or, less commonly, a “digram”); size 3 is a “trigram”. Larger sizes are sometimes referred to by the value of n, e.g. “four-gram”, “five-gram”, and so on.

Non-referential elements: same as predicates, but in FG-FDG.

Notation: a system of graphic signs or symbols, characters and abbreviated expressions, used in artistic and scientific disciplines to represent technical facts and quantities by convention. Thus, it is a collection of related symbols that are given an arbitrary meaning, created to facilitate structured communication within a domain of knowledge or field of study.
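The log-likelihood comparison described above can be sketched in a few lines of Python. The statistic follows Rayson and Garside (2000); the function names and the handling of zero counts are our own illustrative choices, and the sketch skips the part-of-speech tagging step, working directly on token lists.

```python
import math
from collections import Counter

def log_likelihood(a, b, c, d):
    """Log-likelihood statistic for a word occurring a times in a
    corpus of c tokens and b times in a corpus of d tokens."""
    e1 = c * (a + b) / (c + d)   # expected frequency in corpus 1
    e2 = d * (a + b) / (c + d)   # expected frequency in corpus 2
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

def compare_corpora(corpus1, corpus2):
    """Rank words by how strongly their frequencies differ across
    two tokenized corpora (lists of word strings)."""
    f1, f2 = Counter(corpus1), Counter(corpus2)
    c, d = len(corpus1), len(corpus2)
    return sorted(
        ((w, log_likelihood(f1[w], f2[w], c, d)) for w in set(f1) | set(f2)),
        key=lambda pair: pair[1], reverse=True)
```

A word with identical relative frequency in both corpora scores 0; the larger the score, the stronger the evidence that its frequency profile differs between the corpora.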
Onomasiological approach: an approach to the analysis of lexical semantic properties that takes the concept as its starting point and describes the lexemes that represent that concept.

Onomasiological dictionary: a dictionary that proceeds from concepts to words. A thematic dictionary.

Onomasiological entry: a dictionary entry based on the onomasiological approach.


Onomasiology: the study of names and of the process of naming, which varies geographically, socially, occupationally or in other ways; it takes the content or concept of lexical items as its starting point and investigates which lexical items may be associated with that concept.

Ontology: a network of knowledge. It is also a collection of concepts that structures information and allows the user to view it. Since concepts are organized into hierarchies, each concept is related to its super- and sub-concepts. All this forms the basis for inheriting defined attributes and relations from more general to more specific concepts.

Opaque word formation phenomena: word formation phenomena where the derivative processes are not overtly realized, that is, they are not realized through affixes, but by means of conversion.

Paradigm: pattern or model.

Paradigmatic relationships: Lyons (2002) defines paradigmatic relations as those established on the basis of intersubstitutability, while syntagmatic relations are based on co-occurrence. Ref.: Lyons, J. (2002). Lexical structures based on sense relations I: General overview, inclusion and identity, in Cruse et al., Lexicology. An international handbook on the nature and structure of words and vocabularies. Vol. I. Walter de Gruyter. Berlin. New York.

Parser: “A natural language parser is a program that works out the grammatical structure of sentences, for instance, which groups of words go together (as ‘phrases’) and which words are the subject or object of a verb. Probabilistic parsers use knowledge of language gained from hand-parsed sentences to try to produce the most likely analysis of new sentences. These statistical parsers still make some mistakes, but commonly work rather well. Their development was one of the biggest breakthroughs in natural language processing in the 1990s. You can try out our parser online”. (From http://nlp.stanford.edu/software/lexparser.shtml, the Stanford NLP group.)

Parsing: the process of analyzing and labelling a string of symbols, either in natural language or in computer languages, according to certain rules. The term parsing comes from Latin pars (orationis), meaning part (of speech).

Predicate structure: the underlying logical structure of a predicate (that is, generally a verb), comprising the number and types of arguments it takes.

Primitive (word): a word that has not undergone any kind of derivative process, or the word from which another word derives. For example, mother is a primitive word that derives into motherhood.


Productivity: the degree of likelihood that a lexical item will undergo some kind of derivative process.

PoS tagging: see tagging.

Recategorization: the process whereby a word’s lexical class changes. For example, Google as a noun is recategorized into a verb, to Google.

Recursion: consecutive derivation. For example, the word appoint derives into disappoint, which in turn gives disappointment. Therefore, the word appoint has undergone recursion.

Recursivity: see recursion.

Referent: arguments in RRG.

Referential elements: same as arguments, but in FG-FDG.

Retrievability: “Retrievability is a term associated with the ease with which information can be found or retrieved using an information system, specifically a search engine or information retrieval system. A document (or information object) has high retrievability if there are many queries which retrieve the document via the search engine, and the document is ranked sufficiently high that a user would encounter the document. Conversely, if there are few queries that retrieve the document, or when the document is retrieved the documents are not high enough in the ranked list, then the document has low retrievability. Retrievability can be considered as one aspect of findability. Applications of retrievability include detecting search engine bias, evaluating the influence of search technology, tuning information retrieval systems and evaluating the quality of documents in a collection.” (From Wikipedia, based on Azzopardi, L. and Vinay, V. 2008. “Retrievability: an evaluation measure for higher order information access tasks”. Proceedings of the 17th ACM Conference on Information and Knowledge Management. CIKM ‘08. Napa Valley, California, USA: ACM. pp. 561-570.)

Root: the base form of a lexical item without affixes (Sterkenburg 2003).

Root allomorphy: “Root allomorphy is a subset of relationships traditionally called irregular morphology. Root allomorphy comes in two varieties. The first is suppletive allomorphy, where the two forms cannot be derived from each other by some sort of phonological process. Some examples of suppletive allomorphy are in go/went, good/better/best, bad/worse, person/people. The other type of allomorphy is what I call irregular allomorphy, in which there is some common phonology between the two forms. This commonality is usually attributable to some type of historically regular phenomena (such as i/j umlaut)


which has since fallen out of the language.” (From: http://babel.ucsc.edu/~hank/mrg.readings/Siddiqi_Ch3.pdf)

Semantic change: change in the semantic content of words. It is different from lexical change, where the word-forms also change. Lexical change is related to morphology; semantic change is related to semantics.

Semasiological approach: an approach in lexical semantics having lexemes as the starting point and explaining the concepts which they express.

Semasiological entry: a dictionary entry which is based on the semasiological approach. That is, one where the entry word is explained.

Stem: see base word.

Stem allomorphy: see root allomorphy.

Syntagmatic relationships: syntagmatic relations are based on co-occurrence; relationships that hold between the words of a sentence in a horizontal way.

Systematic: consistent, constant, based on certain rules.

Standardization: the process of developing and implementing technical standards. Standardization can help to maximize compatibility, interoperability, safety, repeatability, or quality.

Tagging: in information systems, a tag is a non-hierarchical keyword or term assigned to a piece of information. This kind of metadata helps describe and categorize an item. Tags are generally related to word categories (adjective, noun, verb, etc.) and are used in term banks, corpora, etc.

Taxonomy: a hierarchic classification of part of reality, using one relation type.

Term: a lexical item that denotes a concept in a specialized field (Sterkenburg, 2003).

Term bank: a database of technical terms in a specialized field (Sterkenburg, 2003).

Termhood: the degree to which a lexical item can be considered a term.

Terminal predicate: the derived word after the whole process of derivation has taken place; the resulting predicate of the derivative chain.

Terminal word: see terminal predicate.

Terminography: the branch of lexicography concerned with the theory and practice of designing and compiling specialist dictionaries in fields like physics, medicine, law, etc. “The twin fields of terminology and terminography are industrially and commercially important disciplines which are related to lexicology and lexicography, and are concerned with the identification and construction of technical terms in relation to the real world of technical artefacts. Historically, these fields have developed and are in general practised separately from lexicology and lexicography, though there is no a priori reason for this”. (Gibbon 1998.)


Terminologies: banks of terms.

Terminology: “Terminology is the study of and the field of activity concerned with the collection, description, processing and presentation of terms, i.e. lexical items belonging to specialised areas of usage of one or more languages”. (Sager.)

Terminology extraction: the action of extracting candidate terms from specialized corpora. Terminology extraction can be seen as an important step of a larger process of corpus compilation, terminology extraction and terminology management, which includes various stages (Gamper & Stock, 1999; Macken et al., 2013). First, a special language corpus is collected. Sager (1993) defines these special languages as “(…) whose use is restricted to specialists for conceptualization, classification and communication in the same or closely related fields”. In the terminology extraction phase, terms are identified in a text. In the case of multilingual terminology extraction, the corresponding translations are retrieved. The extracted terms and their translations can be stored in bilingual glossaries, which are already a valuable resource for technical translators. If the aim is the creation of a term bank, the extracted terms are structured in concept-oriented databases in the terminology management phase. Each database entry represents a concept and contains all extracted term variants (including synonyms and acronyms) in several languages.

Terminology management: Wright (1997) views the process of terminology management as an iterative one in which both the semasiological and onomasiological approaches interact. See also terminology extraction.

Thesaurus: a reference work listing associated words and phrases, usually undefined, and grouped on the basis of their meaning (Sterkenburg, 2003).

Tokenization: the process of grouping characters together to make higher units. A simple example in European languages is the use of the word-space as the boundary of a token, to make words out of a string of letters (Sterkenburg 2003). “Tokenisation is the identification of character groupings which constitute the basic units of text, such as words and punctuation. A common strategy for initialisation is to surround words and punctuation marks with blanks”. (Gibbon 1998. See more in http://coral.lili.uni-bielefeld.de/Classes/Winter98/ComLex/GibbonElsnet/elsnetbook.dg/node21.html#SECTION00043000000000000000)

Transparent word-formation phenomena: word formation phenomena that are overtly realized and can therefore be analyzed synchronically. Transparent word-formation phenomena are affixation processes.
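The blank-insertion strategy Gibbon describes can be sketched in a couple of lines. The punctuation class below is a minimal illustration; contractions, hyphens and abbreviations would need extra rules.

```python
import re

def tokenize(text):
    """Tokenize by the simple strategy described above: surround
    punctuation marks with blanks, then treat whitespace as the
    token boundary."""
    spaced = re.sub(r'([.,;:!?()"])', r" \1 ", text)
    return spaced.split()
```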


Transposition: see conversion.

Type coercion: coercion of the parameters of certain components of specific types.

Umlaut: see i-mutation.

Unithood: formally, unithood refers to “the degree of strength or stability of syntagmatic combinations and collocations” (Kageura and Umino, 1996), and termhood is defined as “the degree that a linguistic unit is related to domain-specific concepts” (Kageura and Umino, 1996). While the former is only relevant to complex terms, the latter concerns both simple terms and complex terms. (From the introduction to the book Determination of Unithood and Termhood for Term Recognition.)

Valence: the combining power of a lexical item to form fixed syntactic units.

Vowel alternation: see alternation.

Word: the smallest meaningful lexical item that can form an utterance on its own or appear between two spaces in written language.

Word class: see grammatical category.

Zero change: see conversion.

Zero derivation: see conversion.
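One crude operationalization of termhood, as defined above, is relative frequency against a general reference corpus: words that are much more frequent in the domain corpus than in general language are better term candidates. The sketch below is an illustrative heuristic of our own (with add-one smoothing for words unseen in the reference corpus), not an algorithm from this book.

```python
from collections import Counter

def termhood_scores(domain_tokens, general_tokens):
    """Score each word in a domain corpus by how much more frequent
    it is there than in a general reference corpus: a simple relative
    frequency ratio used as a crude proxy for termhood."""
    dom, gen = Counter(domain_tokens), Counter(general_tokens)
    n_dom, n_gen = len(domain_tokens), len(general_tokens)
    return {
        w: (dom[w] / n_dom) / ((gen[w] + 1) / (n_gen + 1))  # +1 smooths unseen words
        for w in dom
    }
```

Words scoring well above 1 are candidate terms; function words like the score low because they are equally frequent everywhere.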



On the Architecture of Words. Applications of Meaning Studies

In light of today’s extensive use of digital communication, this volume focuses on how to understand and manage the various types of linguistically-based products that facilitate the use and extraction of information. Including conceptual and terminological databases, digital dictionaries, thesauri, language corpora, and ontologies, they all contribute to the development and improvement of language industries, such as those devoted to automatic translation, knowledge management, knowledge retrieval, linguistic data analysis, and so on. As the theoretical background underlying these applications is outlined in detail in the earlier chapters of the book, the reader is able to establish the necessary links between the various but related kinds of linguistic –and, in particular, semantic– applications. A general review of several theories and linguistic models that influence the practical application of Meaning studies to the new technologies is also included. This book is aimed at students and researchers of Linguistics, as well as those with a basic knowledge of Linguistics and Semantics who are interested in the on-going development of the handling of meaning and its practical usage.


ISBN: 978-84-362-6734-1

colección Grado

On the Architecture of Words. Applications of Meaning Studies
Margarita Goded Rambaud, Ana Ibáñez Moreno, Veronique Hoste