Computational Processing of the Portuguese Language: 6th International Workshop, PROPOR 2003, Faro, Portugal, June 26-27, 2003. Proceedings (Lecture Notes in Computer Science 2721). ISBN 3-540-40436-8, 978-3-540-40436-1


Table of contents:
Computational Processing of the Portuguese Language
Preface
Organization
Table of Contents
Devoicing Measures of European Portuguese Fricatives
Introduction
Design and Recording of a Corpus of Portuguese Fricatives
Method for Segmentation and Annotation
Measures of Devoicing
Manual Criterion for Devoicing
Automatic Criterion for Devoicing
Results of Devoicing Analysis Using the Manual Criterion
Evaluation of the Automatic Devoicing Criterion
Conclusions
References
AUDIMUS.media: A Broadcast News Speech Recognition System for the European Portuguese Language
Introduction
BN Corpus
Pilot Corpus
Speech Recognition Corpus
AUDIMUS.MEDIA Recognition System
Language Modeling
Vocabulary and Pronunciation Lexicon
Weighted Finite-State Dynamic Decoder
Alignment
Pruning
Speech Recognition Results
Concluding Remarks
References
Pitch Restoration for Robust Speech Recognition
Introduction
Noise Effect on the Baseline Spectral Normalization Domain
Proposed Noise Compensation
Experimental Results
Discussion
References
Grapheme-Phone Transcription Algorithm for a Brazilian Portuguese TTS
Introduction
The Transcription Rules
Experimental Results
Conclusions
Improving the Accuracy of the Speech Synthesis Based Phonetic Alignment Using Multiple Acoustic Features
Introduction
Waveform Generator
Acoustic Features
Feature Normalization
Feature Selection Procedure
Frame Alignment
Results
Conclusions
Evaluation of a Segmental Durations Model for TTS
Introduction
Description of the Model
Duration Features
Neural Network
Model Evaluation
Perceptual Evaluation
Conclusion
References
From Portuguese to Mirandese: Fast Porting of a Letter-to-Sound Module Using FSTs
Introduction
Rule Formalism
Transducer Composition
The SAMPA Phonetic Alphabet for Both Languages
Transducer Approach for European Portuguese
Transducer Approach for Mirandese
FST-Based Concatenative Synthesis
Concluding Remarks
A Methodology to Analyze Homographs for a Brazilian Portuguese TTS System
Introduction
Fundamental Concepts
Hypothesis
Analysis
Nominal Constructions
Prepositional Constructions
Verbal Constructions
Experimental Results
Conclusions
Automatic Discovery of Brazilian Portuguese Letter to Phoneme Conversion Rules through Genetic Programming
Introduction
Modeling the Problem Using Genetic Programming
Experimental Results
Final Conclusions
References
Experimental Phonetics Contributions to the Portuguese Articulatory Synthesizer Development
Introduction
Phonetics Applied to Speech Processing
Multimedia Prosodic Atlas for the Romance Languages
Conclusion
A Study on the Reliability of Two Discourse Segmentation Models
Introduction
Method
Results
Conclusions
References
Reusability of Dictionaries in the Compilation of NLP Lexicons
Introduction
Preliminaries
The Thesaurus Denotations
The Reference Corpus
The TeP
The "Mining" Strategy and Pitfalls
“Mining”
Pitfalls
Three Classes of Problems
Selected Strategies
Final Remarks
References
Homonymy in Natural Language Processes: A Representation Using Pustejovsky’s Qualia Structure and Ontological Information
Introduction
The Qualia Structure
The Linguistic Phenomenon of Homonymy
Homonymy in Qualia Structure
Ontology of Concepts for Brazilian Portuguese
Homonymy in Ontological Structuring
LKB Representation Modules
The LKB
Ontological Module
Qualia Structural Module
Final Considerations and Future Perspectives
References
Using Adaptive Formalisms to Describe Context-Dependencies in Natural Language
Introduction
Illustrating Example
Conclusion
Some Regularities of Frozen Expressions in Brazilian Portuguese
Introduction
The Regularities of a Class
Conclusion
Selva: A New Syntactic Parser for Portuguese
Introduction
Related Work
The Grammar
Clause Structure
Coordination
The Pre-processor
The Parser
Evaluation and Comparison
Future Work
An Account of the Challenge of Tagging a Reference Corpus for Brazilian Portuguese
Introduction
Designing the Tagset
Criteria, Features, and Previous Work
The Current Tagset
Some Emblematic Linguistic Challenges
Current and Future Work
References
Multi-level NER for Portuguese in a CG Framework
Introduction
Previous Work
Methodological and Data Framework
Discussion of Name Categories
System Architecture and Strategies
Preprocessing
The Name Type Predictor
The CG Modules
Evaluation
Conclusion
References
HMM/MLP Hybrid Speech Recognizer for the Portuguese Telephone SpeechDat Corpus
Introduction
Database Description
Call Description
Speaker Recruitment and Characteristics
Database Annotation
ASR System Setup
Training and Test Set Definition
Acoustic Modeling
Vocabulary and Language Modeling
Experiments and Results
Conclusions
Managing Linguistic Resources and Tools
Introduction
Infrastructure
The Interface
Module Definition
Related Work
Conclusions and Future Directions
Using Morphossyntactic Information in TTS Systems: Comparing Strategies for European Portuguese
Introduction
Morphossyntactic Tagging System
Linguistic Resources
Corpus
Lexica
Experimental Results
Conclusions
Timber! Issues in Treebank Building and Use
Introduction
Annotation Schemes
Can Our Treebank Type 3 Be Turned into an Evaluation Treebank (Type 2)?
Decisions as to the Process
Decisions as to the Encoding
Águia
Kinds of Queries
Use of IMS CWB
References
A Lexicon-Based Stemming Procedure
Introduction
Stemming Methods
The Proposed Lexicon-Based Stemming Procedure
The Structure of the Lexicon
Stemming Procedure
An Experiment with Portuguese
Concluding Remarks
References
Contractions: Breaking the Tokenization-Tagging Circularity
Tokenizing-Tagging Circularity
Tokenization with Interpolated Tagging
Further Results
References
A Linguistic Approach Proposal for Mechanical Design Using Natural Language Processing
Introduction
Research Background
Preliminary Design Background
The Computational Approach in Mechanical Part Design
Correlation between Semantic/Meaning and Syntax Processing
Challenges in Structure Functional Phrases Composition
Considerations
References
Identification of Direct/Indirect Discourse in Children’s Stories
Introduction
Background
Solution
Discussion
Future Work
Curupira: A Functional Parser for Brazilian Portuguese
Introduction
General Architecture
References
ANELL: A Web System for Portuguese Corpora Annotation
Introduction
System Presentation
The Linguistic Analysis
The Annotation
Final Remarks
References
Email2Vmail – An Email Reader
Introduction
System Architecture
Homograph Disambiguation
Language Identification
Signature Analysis
Conclusions
References
A Large Speech Database for Brazilian Portuguese Spoken Language Research
Introduction
Methodology
Study of Dialects and Geographic Distribution of the Speakers
Determination of Utterance Types
Recordings
Phonetic Transcription
Conclusions and Future Work
Interpretations and Discourse Obligations in a Dialog System
Introduction
Interpretation Manager
Constructing Interpretations
Constructing Discourse Obligations
Building Domain Dependent Discourse Obligations
Using Dialogues to Access Semantic Knowledge in a Web IR System
Introduction
The Dialogue System
Web Semantics
Natural Language Dialogue System
Conclusions and Future Work
Managing Dialog and Access Control in Natural Language Querying
Introduction
Motivation
Natural Language System
Dialog Management and Clarification
Access Control
Conclusions and Future Work
GistSumm: A Summarization Tool Based on a New Extractive Method
Introduction
GistSumm Description
GistSumm Premises
GistSumm Processes
Sentence Ranking
Extract Production
Evaluating GistSumm Performance
Experiment 1: Identifying the Gist Sentence
Experiment 2: Evaluating the Extracts Overall Quality
Final Remarks
References
Topic Indexing of TV Broadcast News Programs
Introduction
Topic Detection Corpus Description
Story Segmentation
Story Indexing
Segmentation Results
Indexation Results
Conclusions and Future Work
References
An Initial Proposal for Cooperative Evaluation on Information Retrieval in Portuguese
Introduction
Question Answering Evaluation
Web Search Evaluation
An Initial Proposal for Cooperative Evaluation
References
Evaluation of Finite-State Lexical Transducers of Temporal Adverbs for Lexical Analysis of Portuguese Texts
Introduction
Some Families of Multiword Temporal Adverbs in Portuguese
Evaluating Lexical Finite-State Transducers
Methods
Results
Discussion
Lexical Coverage and Linguistic Adequacy
Final Remarks
References
Evaluating Automatically Computed Word Similarity
Introduction
Computing Word Similarity
Syntactic Contexts
Word Similarity
Lists of Words
Using Similar Words for Expanding Queries
Experiments
Concluding Remarks
Future Work
Evaluation of a Thesaurus-Based Query Expansion Technique
Introduction
Thesaurus-Based Query Expansion
Evaluation
Evaluation over a Small Corpus
Evaluation over the Internet
Concluding Remarks
Cooperatively Evaluating Portuguese Morphology
Introduction
Test Materials Creation
Test Texts
Golden List Compilation
Different Linguistic Points of View
Absence of Standard
Different Testing Points of View
Measuring
Tokenization Data
Comparison with the Golden List
Coarse-Grained Comparison of the Output for All Tokens
References
Author Index


Lecture Notes in Artificial Intelligence 2721
Edited by J.G. Carbonell and J. Siekmann

Subseries of Lecture Notes in Computer Science

Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo

Nuno J. Mamede, Jorge Baptista, Isabel Trancoso, Maria das Graças Volpe Nunes (Eds.)

Computational Processing of the Portuguese Language
6th International Workshop, PROPOR 2003
Faro, Portugal, June 26-27, 2003
Proceedings


Series Editors
Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editors
Nuno J. Mamede, Isabel Trancoso
Technical University of Lisbon, L2F, INESC-ID Lisboa
Rua Alves Redol, 9, 1000-029 Lisbon, Portugal
E-mail: {nuno.mamede, isabel.trancoso}@inesc-id.pt

Jorge Baptista
University of Algarve, Faculty of Humanities and Social Sciences
Campus de Gambelas, 8005-139 Faro, Portugal
E-mail: [email protected]

Maria das Graças Volpe Nunes
NILC, ICMC-USP São Carlos
Av. do Trabalhador São-Carlense, 400, 13560-970 São Carlos, SP, Brazil
E-mail: [email protected]

Cataloging-in-Publication Data applied for.

A catalog record for this book is available from the Library of Congress.

Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliographie; detailed bibliographic data is available on the Internet.

CR Subject Classification (1998): I.2.7, F.4.2-3, I.2, H.3, I.7
ISSN 0302-9743
ISBN 3-540-40436-8 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de

© Springer-Verlag Berlin Heidelberg 2003
Printed in Germany

Typesetting: Camera-ready by author, data conversion by PTP-Berlin GmbH
Printed on acid-free paper
SPIN: 10928967 06/3142 5 4 3 2 1 0

Preface

Since 1993, PROPOR workshops have become an important forum for researchers involved in the Computational Processing of Portuguese, both written and spoken. This PROPOR workshop follows previous workshops held in 1993 (Lisboa, Portugal), 1996 (Curitiba, Brazil), 1998 (Porto Alegre, Brazil), 1999 (Évora, Portugal) and 2000 (Atibaia, Brazil). The workshop has increasingly contributed to bringing together researchers and industry partners from both sides of the Atlantic. The constitution of an international program committee and the adoption of high-standard refereeing procedures demonstrate the steady development of the field and of its scientific community. This can also be seen in the realization of the satellite workshop AVALON, which constitutes the first evaluation campaign of Portuguese NLP systems.

Each of the 64 submitted papers received a careful, triple blind review by the program committee. All those who contributed are mentioned in the following pages. The reviewing process led to the selection of 41 papers for oral presentation: 24 regular papers and 17 short papers, which are published in this volume. The workshop and this book were structured around the following seven main topics: (i) speech analysis and recognition; (ii) speech synthesis; (iii) pragmatics, discourse, semantics, syntax, and the lexicon; (iv) tools, resources, and applications; (v) dialogue systems; (vi) summarization and information extraction; and (vii) evaluation.

We would like to express here our thanks to all the reviewers for their quick and excellent work. We are especially grateful to our invited speakers, Julia Hirschberg (AT&T, USA) and Robert Gaizauskas (Sheffield University, UK), for their invaluable contribution, which undoubtedly increased the interest in the workshop and its quality.

We are indebted to a number of individuals for taking care of specific parts of the workshop program: João Rochate, for his help in building and maintaining the PROPOR Web site, which served as our main information channel; Susana Mendonça and Nuno Vieira, who provided the indispensable administrative support; and Susana Mendonça, Lucília Chacoto, and Cristina Palma, for their help in the secretariat.

We would like to publicly acknowledge the institutions without which this event would not have been possible: the Departamento de Letras Clássicas e Modernas, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve; the Laboratório de Sistemas de Língua Falada, Laboratório de Engenharia da Linguagem; the Núcleo Interinstitucional de Linguística Computacional of the Universidade de São Paulo; the Fundação para a Ciência e a Tecnologia, Ministério da Ciência e do Ensino Superior; the Fundação Luso-Americana para o Desenvolvimento; and the British Council in Portugal.


Universidade do Algarve deserves a special mention for having generously hosted the workshop.

April 2003

Nuno J. Mamede, Jorge Baptista, Isabel Trancoso, Maria das Graças Volpe Nunes

Conference Chair
Jorge Baptista, Universidade do Algarve and LabEL (CAUTL-IST)

Program Co-chairs
Jorge Baptista, Universidade do Algarve and LabEL (CAUTL-IST)
Isabel Trancoso, Technical University of Lisbon and L2F (INESC-ID Lisboa)
Maria das Graças Volpe Nunes, University of São Paulo and NILC

Publication Chair
Nuno J. Mamede, Technical University of Lisbon and L2F (INESC-ID Lisboa)


Program Committee
Ana Frankenberg-Garcia (ISLA, Portugal)
António Branco (University of Lisbon, Portugal)
António Teixeira (University of Aveiro, Portugal)
Belinda Maia (University of Porto, Portugal)
Bento Carlos Dias-da-Silva (São Paulo State University, Brazil)
Carlos Lima (University of Minho, Portugal)
Caroline Hagège (Xerox Research Centre Europe, France)
Dante Barone (Federal University of Rio Grande do Sul, Brazil)
Diamantino Freitas (University of Porto, Portugal)
Diana Santos (Linguateca/SINTEF, Portugal/Norway)
Donia Scott (ITRI, UK)
Elisabete Ranchhod (University of Lisbon and LabEL, Portugal)
Éric Laporte (University of Marne-la-Vallée and IGM, CNRS, France)
Fernando Perdigão (University of Coimbra, Portugal)
Francisco Casacuberta (Universitat Politècnica de València and ITI, Spain)
Horacio Franco (SRI, USA)
Irene Rodrigues (University of Évora, Portugal)
Isabel Trancoso (Technical University of Lisbon and L2F, Portugal)
João Paulo Neto (Technical University of Lisbon and L2F, Portugal)
Jorge Baptista (University of Algarve and LabEL, Portugal)
Julia Hirschberg (Columbia University and AT&T Labs, USA)
Lúcia H. Machado Rino (São Carlos Federal University and NILC, Brazil)
Luís Caldas de Oliveira (Technical University of Lisbon and L2F, Portugal)
Manuel Célio Conceição (University of Algarve and Termip, Portugal)
Marcelo Finger (University of São Paulo, Brazil)
Maria das Graças Volpe Nunes (University of São Paulo and NILC, Brazil)
Maria do Céu Viana (CLUL, Portugal)
Maria Helena Mira Mateus (University of Lisbon and ILTEC, Portugal)
Max Silberztein (University of Franche-Comté and GRELIS, France)
Michel Gagnon (École Polytechnique de Montréal, Canada)
Néstor Becerra Yoma (University of Chile, Chile)
Nuno Mamede (Technical University of Lisbon and L2F, Portugal)
Palmira Marrafa (University of Lisbon, Portugal)
Plínio Barbosa (Campinas State University, Brazil)
Renata Vieira (UNISINOS, Brazil)
Rute Costa (New University of Lisbon and Termip, Portugal)
Teresa Lino (New University of Lisbon and Termip, Portugal)
Vera Lúcia de Lima (Pontifical Catholic Univ. of Rio Grande do Sul, Brazil)
Violeta Quental (Pontifical Catholic Univ. of Rio de Janeiro, Brazil)


Referees
Ana Frankenberg-Garcia, António Branco, António Teixeira, Belinda Maia, Bento Dias-da-Silva, Carlos Lima, Caroline Brun, Caroline Hagège, Dante Barone, Diamantino Freitas, Diana Santos, Donia Scott, Elisabete Ranchhod, Éric Laporte, Fernando Perdigão, Francisco Casacuberta, Graça Nunes, Hervé Déjean, Horacio Franco, Irene Rodrigues, Isabel Trancoso, Jorge Baptista, João P. Neto, Luís Oliveira, Lúcia Rino, Manuel Conceição, Marcelo Finger, Maria Mateus, Maria Viana, Max Silberztein, Michel Gagnon, Nestor Yoma, Nuno Mamede, Palmira Marrafa, Paulo Quaresma, Plínio Barbosa, Renata Vieira, Rute Costa, Teresa Lino, Vera Lima, Violeta Quental

Table of Contents

Speech Analysis and Recognition

Devoicing Measures of European Portuguese Fricatives ..... 1
Luis M.T. Jesus, Christine H. Shadle

AUDIMUS.media: A Broadcast News Speech Recognition System for the European Portuguese Language ..... 9
Hugo Meinedo, Diamantino Caseiro, João Neto, Isabel Trancoso

Pitch Restoration for Robust Speech Recognition ..... 18
Carlos Lima, Adriano Tavares, Carlos Silva

Speech Synthesis

Grapheme-Phone Transcription Algorithm for a Brazilian Portuguese TTS ..... 23
Filipe Barbosa, Guilherme Pinto, Fernando Gil Resende, Carlos Alexandre Gonçalves, Ruth Monserrat, Maria Carlota Rosa

Improving the Accuracy of the Speech Synthesis Based Phonetic Alignment Using Multiple Acoustic Features ..... 31
Sérgio Paulo, Luís C. Oliveira

Evaluation of a Segmental Durations Model for TTS ..... 40
João Paulo Teixeira, Diamantino Freitas

From Portuguese to Mirandese: Fast Porting of a Letter-to-Sound Module Using FSTs ..... 49
Isabel Trancoso, Céu Viana, Manuela Barros, Diamantino Caseiro, Sérgio Paulo

A Methodology to Analyze Homographs for a Brazilian Portuguese TTS System ..... 57
Filipe Barbosa, Lilian Ferrari, Fernando Gil Resende

Automatic Discovery of Brazilian Portuguese Letter to Phoneme Conversion Rules through Genetic Programming ..... 62
Evandro Franzen, Dante Augusto Couto Barone

Experimental Phonetics Contributions to the Portuguese Articulatory Synthesizer Development ..... 66
António Teixeira, Lurdes Castro Moutinho, Rosa Lídia Coimbra

Pragmatics, Discourse, Semantics, Syntax, and the Lexicon

A Study on the Reliability of Two Discourse Segmentation Models ..... 70
Eva Arim, Francisco Costa, Tiago Freitas

Reusability of Dictionaries in the Compilation of NLP Lexicons ..... 78
Bento C. Dias-da-Silva, Mirna F. de Oliveira, Helio R. de Moraes

Homonymy in Natural Language Processes: A Representation Using Pustejovsky's Qualia Structure and Ontological Information ..... 86
Claudia Zavaglia, Juliana Galvani Greghi

Using Adaptive Formalisms to Describe Context-Dependencies in Natural Language ..... 94
João José Neto, Miryam de Moraes

Some Regularities of Frozen Expressions in Brazilian Portuguese ..... 98
Oto Araújo Vale

Tools, Resources, and Applications

Selva: A New Syntactic Parser for Portuguese ..... 102
Sheila de Almeida, Ariadne Carvalho, Lucien Fantin, Jorge Stolfi

An Account of the Challenge of Tagging a Reference Corpus for Brazilian Portuguese ..... 110
Sandra Aluísio, Jorge Pelizzoni, Ana Raquel Marchi, Lucélia de Oliveira, Regiana Manenti, Vanessa Marquiafável

Multi-level NER for Portuguese in a CG Framework ..... 118
Eckhard Bick

HMM/MLP Hybrid Speech Recognizer for the Portuguese Telephone SpeechDat Corpus ..... 126
Astrid Hagen, João P. Neto

Managing Linguistic Resources and Tools ..... 135
David M. de Matos, Joana L. Paulo, Nuno J. Mamede

Using Morphossyntactic Information in TTS Systems: Comparing Strategies for European Portuguese ..... 143
Ricardo Ribeiro, Luís Oliveira, Isabel Trancoso

Timber! Issues in Treebank Building and Use ..... 151
Diana Santos

A Lexicon-Based Stemming Procedure ..... 159
Gilberto Silva, Claudia Oliveira

Contractions: Breaking the Tokenization-Tagging Circularity ..... 167
António Horta Branco, João Ricardo Silva

A Linguistic Approach Proposal for Mechanical Design Using Natural Language Processing ..... 171
João Carlos Linhares, Altamir Dias

Identification of Direct/Indirect Discourse in Children's Stories ..... 175
Nuno J. Mamede

Curupira: A Functional Parser for Brazilian Portuguese ..... 179
Ronaldo Martins, Graça Nunes, Ricardo Hasegawa

ANELL: A Web System for Portuguese Corpora Annotation ..... 184
Cristina Mota, Pedro Moura

Email2Vmail – An Email Reader ..... 189
Hugo Pereira, Pedro Teixeira, Luís C. Oliveira

A Large Speech Database for Brazilian Portuguese Spoken Language Research ..... 193
Carlos Alberto Ynoguti, Plínio Almeida Barbosa, Fábio Violaro

Dialogue Systems

Interpretations and Discourse Obligations in a Dialog System ..... 197
Márcio Mourão, Pedro Madeira, Nuno J. Mamede

Using Dialogues to Access Semantic Knowledge in a Web IR System ..... 201
Paulo Quaresma, Irene Rodrigues

Managing Dialog and Access Control in Natural Language Querying ..... 206
Luis Quintano, Irene Rodrigues

Summarization and Information Extraction

GistSumm: A Summarization Tool Based on a New Extractive Method ..... 210
Thiago Alexandre Salgueiro Pardo, Lucia Helena Machado Rino, Maria das Graças Volpe Nunes

Topic Indexing of TV Broadcast News Programs ..... 219
Rui Amaral, Isabel Trancoso

Evaluation

An Initial Proposal for Cooperative Evaluation on Information Retrieval in Portuguese ..... 227
Rachel Aires, Sandra Aluísio, Paulo Quaresma, Diana Santos, Mário J. Silva

Evaluation of Finite-State Lexical Transducers of Temporal Adverbs for Lexical Analysis of Portuguese Texts ..... 235
Jorge Baptista

Evaluating Automatically Computed Word Similarity ..... 243
Caroline Varaschin Gasperin, Vera Lúcia Strube de Lima

Evaluation of a Thesaurus-Based Query Expansion Technique ..... 251
Luiz Augusto Sangoi Pizzato, Vera Lúcia Strube de Lima

Cooperatively Evaluating Portuguese Morphology ..... 259
Diana Santos, Luís Costa, Paulo Rocha

Author Index ..... 267

Devoicing Measures of European Portuguese Fricatives

Luis M.T. Jesus (1) and Christine H. Shadle (2)

(1) Escola Superior de Saúde da Universidade de Aveiro, and Instituto de Engenharia Electrónica e Telemática de Aveiro, Universidade de Aveiro, 3810-193 Aveiro, Portugal
[email protected], http://www.ieeta.pt/~lmtj/
(2) Department of Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, UK
[email protected], http://www.ecs.soton.ac.uk/info/people/chs/

Abstract. This paper presents a study of devoicing of European Portuguese fricatives. Two devoicing criteria (a manual criterion and a criterion based on the ratio of variances of the laryngograph signal during the VF transition and during the fricative) were used to classify the examples into two or three categories. The results of the automatic and manual measures of devoicing are compared, and an explanation for observed misclassifications is presented. Devoicing occurs extensively in Portuguese; results are compared to those for English.

1 Introduction

Studies of Portuguese fricatives have been limited in comparison with other languages, yet are important for applications such as speech synthesis. In this paper we focus on devoicing of fricatives using recordings by four subjects of a set of corpora. The segmentation method and two devoicing classification methods using acoustic and laryngograph signals are described. Results are presented and compared to studies of devoicing in English.

2 Design and Recording of a Corpus of Portuguese Fricatives

A speech corpus has been designed to explore the fricatives of standard European Portuguese. The phonetic and phonological evidence underlying the design of the corpus has been described in Jesus (2001) and Jesus and Shadle (2002). Corpora were devised that included the fricatives /f, v, s, z, S, Z/ in the following contexts: sustained (Corpus 1), repeated nonsense words following Portuguese phonological rules (Corpus 2), Portuguese words containing fricatives in frame sentences (Corpus 3), and the same set of words in sentences (Corpus 4).


The subjects used in this study were two male (LMTJ and CFGA) and two female (ACC and ISSS) adult Portuguese native speakers, with no reported history of hearing or speech disorders. Subject LMTJ, age 26, is from the city of Aveiro (in the centre of Portugal), and CFGA, age 26, is from Braga (in north Portugal). Speaker ACC, age 33, is from Sintra (a city very close to Lisbon), and ISSS, age 21, is from Lisbon. At the time of the recordings all subjects had been studying in England for a period of two to three years. Recordings were made in a sound-treated room using a Bruel & Kjaer 4165 1/2-inch microphone located 1 m in front of the subject's mouth, connected to a Bruel & Kjaer 2639 pre-amplifier. The signal was amplified and filtered by a Bruel & Kjaer 2636 measurement amplifier, with a high-pass cut-on frequency of 22 Hz and a low-pass cut-off frequency of 22 kHz. A laryngograph signal (Lx) was also collected using a laryngograph processor (Model LxProc, type PCLX, produced by Laryngograph Ltd, UK). The acoustic speech signal and Lx were recorded with a Sony TCD-D7 DAT system at 16 bits, with a sampling frequency of 48 kHz, and digitally transferred to a computer for post-processing. A 94 dB, 1000 Hz calibration tone produced by a Bruel & Kjaer 4620 calibrator was also recorded on the same tape on which speech was recorded (Jesus 2001).

3 Method for Segmentation and Annotation

The acoustic and laryngographic signals were recorded on DAT tape (16 bit, sampling frequency 48 kHz) and digitally transferred to .wav computer files. The time waveforms of all the corpus words were manually analysed to detect the start of the vowel-fricative (VF) transition, the start of the fricative, the end of the fricative, and the end of the fricative-vowel (FV) transition. During the VF transition, there is a decrease in amplitude, voicing ceases (for unvoiced fricatives) and frication noise starts, as shown in Fig. 1. During the FV transition, there is an increase in amplitude, voicing starts (for unvoiced fricatives) and frication noise ceases (Docherty 1992, pp. 118–119). These events do not occur simultaneously or always in the same order, making segmentation a somewhat subjective process. However, it is important to segment consistently, because the results of the analysis methods depend on where the boundaries are placed (Docherty 1992, pp. 103–110). The amplitude and voicing changes appear in both the acoustic and Lx signals, which aids the segmentation process. For example, as can be seen in Fig. 1, the start of the VF transition was chosen because both signals begin to decrease noticeably in amplitude at that point. The start of the fricative is placed where the noise in the acoustic signal becomes more apparent (in an unvoiced fricative, it would be placed after the Lx signal has lost all signs of periodicity); and the end of the fricative is placed where the frication noise ceases to be visible in the acoustic signal, and where voicing begins again in the Lx signal (as in this partially devoiced example, or in an unvoiced fricative). Note that in this case, the end of the fricative is much more abrupt than its start, making the FV transition region easier to segment.



[Figure 1 appears here: two time-aligned panels, "Laryngograph Signal" and "Acoustic Signal" (amplitude versus time in ms), with the boundaries "Start VF Trans.", "Start Fricative", "End Fricative" and "End FV Trans." marked.]

Fig. 1. Laryngograph signal and acoustic signal of the fricative /z/ in azar, showing the start of the vowel-fricative transition, the start of the fricative, the end of the fricative, and the end of the fricative-vowel transition. Corpus 3 (Speaker ISSS)

Once segmented, the sample numbers for each boundary were entered into an annotation file that was used by the analysis programs.
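The paper does not give the format of this annotation file, but the bookkeeping it implies is simple. A minimal sketch, assuming a plain-text file with one labelled boundary per line (the file name, labels and line format below are our own illustration, not the authors' actual convention):

```python
# Hypothetical reader for a boundary annotation file: the paper only says
# that sample numbers for each boundary are stored; the "label sample"
# per-line format assumed here is illustrative.

def read_annotation(path):
    """Return a dict mapping boundary labels to sample numbers."""
    boundaries = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            label, sample = line.rsplit(maxsplit=1)
            boundaries[label] = int(sample)
    return boundaries

if __name__ == "__main__":
    b = read_annotation("azar_token01.ann")  # hypothetical file name
    fs = 48000  # sampling frequency of the recordings (Sect. 2)
    for label in ("start_VF_trans", "start_fric", "end_fric", "end_FV_trans"):
        print(label, b[label] / fs, "s")
```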

4 Measures of Devoicing

In the word corpora (Corpus 3 and 4), there were large amounts of devoicing. When the vocal tract is constricted for a voiced fricative, voicing is often maintained over only part of the fricative, because there is a trading relation between the pressure drops across the glottal and tongue constrictions (Stevens 1987, p. 388). When voiced fricatives devoice, it is with a whisper phonation (Abercrombie 1967, p. 137), distinguishing them from their voiceless counterparts, which are realised with a glottal abduction gesture. Smith (1997) also suggested that the glottis is in a state intermediate between voicing and voicelessness, like the state of the glottis that is used in whisper, with the glottis open but the folds very close together. Haggard (1978) defined devoicing as the presence of measurable friction in the absence of continued glottal vibration, i.e., the periodic component of the voiced fricative ceased before the friction component. Smith (1997, p. 478) used a criterion for devoicing in American English based on the amplitude of the electroglottograph (EGG) cycles: “The fricative was considered to be voiced during the portion of its duration that the amplitude of the EGG cycles exceeded one-tenth of the EGG cycle amplitude at the time of maximum energy in the preceding


vowel”. A study by Pirello et al. (1997, p. 3756) presents an alternative measure of voicing based on the acoustic signal: “An amplitude difference greater than 10 dB between the amplitude of the vowel and frication noise was classified as voiceless. A difference of less than or equal to 10 dB sustained over 30 ms was classified as voiced”.

4.1 Manual Criterion for Devoicing

Both the acoustic signal and the laryngograph signal were used to determine if a fricative was devoiced. A fricative was called devoiced when less than one-third of the frication interval showed periodic structure in the acoustic or laryngograph signals. The term partially devoiced was used when more than one-third but less than half of the frication interval contained steady acoustic and laryngograph signal cycles. A fricative was called voiced when more than half of the frication interval showed steady acoustic and laryngograph signal cycles, even if the amplitude was much lower than in the vowel (this follows Docherty’s definition, 1992, p. 13). If the laryngograph signal was clearly periodic, the interval was classified as voiced; if the laryngograph signal was zero or distorted, the signal was classified as voiced only if the acoustic signal was unambiguously periodic.
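Stated as a decision rule, the manual criterion reduces to two thresholds on the voiced fraction of the frication interval. A minimal sketch (the handling of a fraction exactly equal to 1/3 or 1/2 is our choice; the paper leaves those boundary cases unspecified):

```python
# Manual devoicing criterion of Sect. 4.1 as a decision rule. The voiced
# fraction (portion of the frication interval showing periodic structure)
# is assumed to have been determined by inspection of the acoustic and
# laryngograph signals.

def manual_devoicing_label(voiced_fraction):
    if voiced_fraction > 0.5:
        return "voiced"
    if voiced_fraction > 1.0 / 3.0:
        return "partially devoiced"
    return "devoiced"  # less than one-third of the interval is voiced
```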

4.2 Automatic Criterion for Devoicing

As pointed out by Docherty (1992, p. 102), many techniques for automatically detecting whether a portion of a signal is voiced have been used in the past, but they proved unsuitable for fricatives, because in this class of speech sounds voicing has low energy. Therefore a new criterion, based on the laryngograph signal, was tested for the corpora used in the present study. The sample mean

$$\bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (1)$$

and sample variance

$$\sigma^2(x) = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2 \qquad (2)$$

of the laryngograph signal samples $x_i$, $i = 1, \dots, N$, were calculated during the VF transition and during the fricative. The ratio of the variances of the two intervals,

$$r_{\sigma^2}(x) = \frac{\sigma_t^2(x)}{\sigma_f^2(x)}, \qquad (3)$$

where $\sigma_t^2(x)$ is the variance of the signal during the VF transition and $\sigma_f^2(x)$ is the variance during the fricative, was used as an automatic criterion for devoicing. The ratio is obviously larger if the fluctuations in the laryngograph signal during the fricative are very small compared to those during the transition region. A heuristic threshold of 15 was used: for $r_{\sigma^2}(x) \geq 15$, the


fricative is labelled devoiced; for $r_{\sigma^2}(x) < 15$, voiced. Fricatives manually classified as partially voiced were considered to be in the devoiced category when comparing the manual and automatic criteria. In some voiced fricative examples and in most unvoiced fricative examples, the laryngograph signal presents a slowly increasing or decreasing amplitude over the frication interval, which results in a large variance and therefore a misclassification as voiced. This problem was solved using an averaged $r_{\sigma^2}(x)$: we computed the mean $\bar{x}$ and the variance $\sigma^2(x)$ for three consecutive equal-length sections of the frication interval, calculated the average frication-interval variance, and used it to compute a new ratio of variances. Using a larger number of sections over which to calculate the averaged $r_{\sigma^2}(x)$ does not significantly improve the efficiency of this measure of devoicing.
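A minimal numpy sketch of this automatic criterion, with the three-section averaging described above (function and variable names are ours, not the authors'):

```python
import numpy as np

def variance_ratio(lx_transition, lx_fricative, n_sections=3):
    """Ratio of laryngograph-signal variances, Eq. (3), with the fricative
    variance averaged over equal-length sections to reduce the effect of
    slow amplitude drift over the frication interval."""
    var_t = np.var(lx_transition, ddof=1)  # sample variance, Eq. (2)
    sections = np.array_split(lx_fricative, n_sections)
    var_f = np.mean([np.var(s, ddof=1) for s in sections])
    return var_t / var_f

def is_devoiced(lx_transition, lx_fricative, threshold=15.0):
    """Heuristic decision of Sect. 4.2: devoiced iff the ratio >= 15."""
    return variance_ratio(lx_transition, lx_fricative) >= threshold
```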

5 Results of Devoicing Analysis Using the Manual Criterion

In this section we consider first the results for Corpus 3, the words in frame sentences, and then the results for Corpus 4, sentences made using the same words. The full corpus is given in Jesus and Shadle (2002). Corpus 3 results showed that 71% (241 out of 341) of fricatives devoiced (averaged across all positions within the word), and the percentage of devoicing increased as the place of articulation moved posteriorly. For all four subjects, 55% (70 out of 127) of the examples of fricative /v/ were totally devoiced (see Sect. 4.1 for the devoicing criterion); 74% (79 out of 107) of the examples of fricative /z/ were totally devoiced; and 86% (92 out of 107) of the examples of fricative /Z/ were totally devoiced. Most word-final fricative examples (93% – 55 out of 59) were totally devoiced. Corpus 4 results showed that 61% (252 out of 413) of fricatives devoiced, and most word-final fricatives were devoiced. For all four subjects, 44% (77 out of 177) of the examples of fricative /v/ were totally devoiced; 78% (86 out of 110) of the examples of fricative /z/ were totally devoiced; and 71% (89 out of 126) of the examples of fricative /Z/ were totally devoiced. The Corpus 4 fricatives devoiced mostly word-finally, but less often than in Corpus 3: word-initial – 97/157 = 62% (in Corpus 3, 75/127 = 59%); word-medial – 111/195 = 57% (in Corpus 3, 111/120 = 93%); word-final – 44/61 = 72% (in Corpus 3, 55/59 = 93%). Some of the fricatives in the sentences of Corpus 4 that have been classified as word-final are followed by voiced phonemes. Some of the words that follow these fricatives even start with a vowel. This might account for the lower word-final average percentage of devoicing in Corpus 4 when compared with Corpus 3. Indeed, some voiceless fricatives become voiced in Corpus 4, likely as a result of cross-word coarticulation: eleven tokens of word-final /S/ were produced as [Z] by speakers LMTJ and ACC when followed by a word starting with a voiced phoneme ([d] or [m]). There is a slightly lower number of devoiced examples of /Z/ than /z/, which contradicts the very clear results for the Corpus 3 fricatives (in which the percentage of devoiced examples decreases as the place of articulation moves anteriorly).


One possible explanation could be that /Z/ is produced in a more anterior place in continuous speech than in isolated word production. This hypothesis can only be confirmed with additional articulatory data, which is planned as future work. These results can be compared to those of Smith (1997) and Pirello et al. (1997) for American English. Smith (1997) studied only /z/ in a range of contexts and measured 47% devoiced and 36% partially devoiced. Pirello et al. (1997) studied /v, z/ in nonsense words (fricatives in initial and stressed position) and measured, respectively, 5%, 20% devoiced, and 35%, 40% partially devoiced. The comparable figures for initial stressed /v, z/ in Corpora 3 and 4 are 32%, 55% devoiced and 16%, 23% partially devoiced. We must of course be cautious, because we have only four subjects and they may have been influenced by the two-to-three years spent in England. But subject to these reservations, it appears that there is more devoicing in European Portuguese than in American English.

6 Evaluation of the Automatic Devoicing Criterion

The ratio of variances ($r_{\sigma^2}(x) \geq 15$), described in Sect. 4.2, was used as the criterion for devoicing for the Corpus 3 and 4 fricatives of Speaker LMTJ. Results are as shown in Table 1.

Table 1. Inventory of all cases of complete devoicing (using the automatic criterion). Values are given in the form x/y, where x = number of devoiced examples and y = total number of examples. Corpus 3 (top) and Corpus 4 (bottom). Speaker LMTJ

Corpus 3   Word-Initial   Word-Medial    Word-Final     All Pos.
/v/        9/14 (64.3%)   8/13 (61.5%)   8/9 (88.9%)    25/36 (69.4%)
/z/        6/10 (60%)     15/17 (88.2%)  3/3 (100%)     24/30 (80%)
/Z/        5/10 (50%)     13/15 (86.7%)  4/5 (80%)      22/30 (73.3%)
All Fric.  20/34 (58.8%)  36/45 (80%)    15/17 (88.2%)  71/96 (74%)

Corpus 4   Word-Initial   Word-Medial    Word-Final     All Pos.
/v/        3/14 (21.4%)   7/18 (38.9%)   0              10/34 (29.4%)
/z/        3/8 (37.5%)    8/10 (80%)     3/4 (75%)      14/22 (63.6%)
/Z/        4/8 (50%)      4/9 (44.4%)    3/5 (60%)      11/22 (50%)
All Fric.  10/30 (33.3%)  19/37 (51.4%)  6/11 (54.6%)   35/78 (44.9%)

There are some examples which are classified differently from the manual criterion (see Sect. 4.1). Still, the percentage of examples from Corpus 3 which were classified in the same category using the two methods is quite high: /v/ – 86.1% (31 out of 36), /z/ – 93.3% (28 out of 30), and /Z/ – 83.3% (25 out of 30). The percentage of “correctly classified” examples from Corpus 4 was: 79% (27 out of 34) of /v/; 77% (17 out of 22) of /z/; and 64% (14 out of 22) of /Z/. Overall, 83% of voiced fricatives are classified correctly. For Corpus 3 most of the discrepancies result from cases on the partially devoiced/completely devoiced borderline, giving promise that this automatic measure can be used reliably in the future. Some examples present a few peaks in the laryngograph waveform, which contribute to a larger variance than initially expected. Whether these peaks are included in the fricative or in the adjacent


vowels depends on the criteria used for segmentation. There are also some examples in Corpus 3 manually classified as voiced but with a ratio of variances greater than 15. Although there is voicing throughout the whole frication interval, the amplitude of the laryngograph signal during the fricative is much lower than during the VF transition. Most examples present a significant amplitude reduction of the laryngograph signal for the duration of the voiced fricative. A total of 39 examples from Corpus 4 were misclassified as voiced because there was no VF transition or there was devoicing during the VF transition (51% – 20 out of 39), and because there were a few cycles of the laryngograph signal during the production of the fricative (41% – 16 out of 39). The remaining misclassified examples (8% – 3 out of 39) resulted from a dc drift in the laryngograph signal for the duration of the fricative. The $r_{\sigma^2}(x)$ metric was also successful when used for the unvoiced fricatives /f, s, S/ of Corpus 3 and 4, with an overall 83% correctly classified. The percentage of “correctly classified” unvoiced examples of Corpus 3 was: /f/ – 96.2% (25 out of 26); /s/ – 85.2% (23 out of 27); and /S/ – 93.8% (30 out of 32). For Corpus 4 the results were: 81% (13 out of 16) of /f/; 55% (16 out of 29) of /s/; 85% (17 out of 20) of /S/.
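The per-phoneme agreement figures above amount to counting, for each fricative, how often the two criteria assign the same category. A small sketch of that bookkeeping (the data layout is our own):

```python
from collections import defaultdict

def agreement_by_phoneme(tokens):
    """tokens: iterable of (phoneme, manual_label, automatic_label).
    Partially devoiced tokens are folded into the devoiced category,
    as in the comparison described in Sect. 4.2."""
    total, same = defaultdict(int), defaultdict(int)
    for phoneme, manual, automatic in tokens:
        if manual == "partially devoiced":
            manual = "devoiced"
        total[phoneme] += 1
        same[phoneme] += int(manual == automatic)
    return {p: same[p] / total[p] for p in total}
```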

7 Conclusions

Devoicing rate is generally very high for European Portuguese fricatives, especially when compared with studies of other languages, and devoicing occurs more often in word-final than word-initial position. A possible explanation for such high percentages of devoicing could be that, due to the structure of the language and its vocabulary, Portuguese speakers are very seldom faced with confusions between voiced and devoiced examples. It is thought that this is an important characteristic of European Portuguese, which would have to be incorporated in any production model to obtain more natural-sounding synthetic speech. Factors that might be correlated with devoicing were investigated using Corpora 3 and 4 for two of the subjects, LMTJ and ACC. We note that the speakers all produced a large number of repeated tokens of nonsense words in one breath (more than 12 tokens). This high rate of speech (compared with previous recordings of similar corpora by French, American English and German speakers) could be one of the reasons why there are so many devoiced examples. A preliminary evaluation of the automatic criterion for devoicing showed great potential for the use of this technique in future work. The percentage of voiced fricative tokens from Corpus 3 and 4 which were classified in the same category using the two methods (manual and automatic) is quite high (overall 83%; range 64–93%).

Acknowledgements. This work was partially supported by Fundação para a Ciência e a Tecnologia, Portugal.


References

[1] Abercrombie, D. (1967). Elements of General Phonetics. Edinburgh: Edinburgh University Press.
[2] Docherty, G.J. (1992). The Timing of Voicing in British English Obstruents. Berlin: Foris Publications.
[3] Haggard, M. (1978). The devoicing of voiced fricatives. Journal of Phonetics 6, 95–102.
[4] Jesus, L.M.T. (2001). Acoustic Phonetics of European Portuguese Fricative Consonants. Ph.D. thesis, Department of Electronics and Computer Science, University of Southampton, Southampton, UK.
[5] Jesus, L.M.T. and C.H. Shadle (2002). A parametric study of the spectral characteristics of European Portuguese fricatives. Journal of Phonetics 30 (3), 437–464.
[6] Pirello, K., S.E. Blumstein, and K. Kurowski (1997). The characteristics of voicing in syllable-initial fricatives in American English. Journal of the Acoustical Society of America 101 (6), 3754–3765.
[7] Smith, C.L. (1997). The devoicing of /z/ in American English: Effects of local and prosodic context. Journal of Phonetics 25 (4), 471–500.
[8] Stevens, K.N. (1987). Interaction between acoustic sources and vocal-tract configurations for consonants. In Proceedings of the 11th International Congress of Phonetic Sciences (ICPhS 87), Volume 3, Tallinn, Estonia, USSR, pp. 385–389.

AUDIMUS.media: A Broadcast News Speech Recognition System for the European Portuguese Language

Hugo Meinedo, Diamantino Caseiro, João Neto, and Isabel Trancoso

L2F – Spoken Language Systems Lab, INESC-ID / IST, Rua Alves Redol, 9, 1000-029 Lisboa, Portugal
{Hugo.Meinedo,Diamantino.Caseiro,Joao.Neto,Isabel.Trancoso}@l2.inesc-id.pt
http://l2f.inesc-id.pt

Abstract. Many applications, such as media monitoring, are experiencing a large expansion as a consequence of the different emerging media sources and can benefit dramatically from automatic transcription of audio data. In this paper, we describe the development of a speech recognition engine, AUDIMUS.MEDIA, used in the Broadcast News domain. Additionally, we describe recent improvements that permitted a relative recognition error decrease of more than 20% and a 4x speed-up.

1 Introduction

The development of speech recognition systems associated with Broadcast News (BN) tasks opens the possibility for novel applications where the use of automatic transcriptions is a major attribute. We have been developing a system for selective dissemination of multimedia information in the scope of an IST-HLT European programme project. To accomplish that goal we have been working on the development of a broadcast news speech recognition system associated with automatic topic detection algorithms [1]. The idea was to build a system capable of identifying specific information in multimedia data consisting of audio-visual streams, using continuous speech recognition, audio segmentation and topic detection techniques. The automatic transcriptions produced by our speech recognition system are used by a topic detection module that outputs a set of topics to be sent to end users. This paper describes in detail our BN speech recognition engine, AUDIMUS.MEDIA. Section 2 introduces the BN corpus used for training the system. Section 3 describes the acoustic models. In Sects. 4 and 5 we present the language model and the vocabulary and pronunciation lexicon. The decoder algorithm is described in Sect. 6, followed by our latest BN recognition results in Sect. 7. Finally, some concluding remarks are presented in Sect. 8.

2 BN Corpus

To support the research and development associated with this task, it was necessary to collect a representative Portuguese BN corpus in terms of amount, characteristics and diversity. We started by defining the type of programs to monitor, in close cooperation with RTP, the Portuguese public broadcasting company; selected as primary goals were all the news programs, national and regional, from morning to late evening, including both normal broadcasts and specific ones dedicated to sports and financial news. Given its broader scope and larger audience, the 8 PM news program was selected as the prime target. This corpus serves two main tasks: the development of a BN speech recognition system and a system for topic segmentation and indexing. In that sense we divided our corpus in two parts: the Speech Recognition Corpus and the Topic Detection Corpus. Since each of these parts serves different purposes, each also has different features. Prior to the collection of these corpora we started with a relatively small Pilot Corpus of approximately 5 hours, including both audio and video, which was used to set up the collection process and discuss the most appropriate kind of programs to collect. The Speech Recognition Corpus was collected next, including 122 programs of different types and schedules and amounting to 76h of audio data. The main goal of this corpus was the training of the acoustic models and the adaptation of the language models used in the large vocabulary speech recognition component of our system. The last part of our collection effort was the Topic Detection Corpus, containing around 300 hours of audio data related to 133 TV broadcasts of the 8 PM news program. The purpose of this corpus was to have a broader coverage of topics and associated topic classification for training our topic indexation module. RTP, as data provider, was responsible for collecting the data in their installations. The transcription process was jointly done by RTP and our institution, and carried out with the Transcriber tool, following the LDC Hub4 (Broadcast Speech) transcription conventions. Most of the audio data was first automatically transcribed. The orthographic transcriptions of the Pilot Corpus and the Speech Recognition Corpus were manually verified. For the Topic Detection Corpus, we have only the automatic transcriptions and the manual segmentation and indexation of the stories made by the RTP staff in charge of their daily program indexing. Our institution was responsible for the validation process, training the annotators and packaging the data.

2.1 Pilot Corpus

The Pilot Corpus was collected in April 2000 and served as a test bed for the capture and transcription processes. We selected one of each type of program, resulting in a total of 11 programs and a duration of 5h 33m. After removing the jingles and commercial breaks we ended up with a net duration of 4h 47m. The corpus was manually transcribed at RTP. The corpus includes the MPEG-1 files (.mpeg), where the audio stream was recorded at 44.1 kHz at 16 bits/sample, a separate audio file (.wav) extracted from the MPEG-1 data, a transcription file (.trs) resulting from the manual annotation process in the Transcriber tool, and a .xls (Excel format) file including the division into stories and their corresponding summaries, which results from the daily process of indexation at RTP. The final contents of the Pilot Corpus in terms of programs, duration and type are presented in Table 1.

Table 1. Pilot Corpus programs

Program               Duration   Type
Notícias              0:08:02    Morning news
Jornal da Tarde       0:57:36    Lunch time news
País Regiões          0:16:05    Afternoon news
País Regiões Lisboa   0:24:21    Local news
Telejornal            0:45:31    Evening news
Remate                0:07:30    Daily sports news
24 Horas              0:24:23    Late night news
RTP Economia          0:09:43    Financial news
Acontece              0:20:39    Cultural news
Jornal 2              0:49:34    Evening news
Grande Entrevista     1:09:38    Political / Economic interview (weekly)
Total                 5:33:00

2.2 Speech Recognition Corpus

The Speech Recognition Corpus was collected from October 2000 to January 2001, with small changes in the programs when compared with the Pilot Corpus. This corpus was divided into three sets: training, development and evaluation. A complete schedule for the recordings was elaborated, leaving some time intervals between the different sets. We got 122 programs, in a total of 75h 43m. We adopted the same base configuration as for the Pilot Corpus, except that we did not collect the video stream. The audio was recorded at 32 kHz, due to hardware restrictions, and later downsampled to 16 kHz, which was appropriate for the intended processing. This corpus was automatically transcribed and manually verified. As a result it includes only an audio stream file (.wav) and a transcription file (.trs). The final contents of the Speech Recognition Corpus in terms of programs, duration and type are presented in Table 2.

Table 2. Speech Recognition Corpus programs

                     Training             Development          Evaluation
Program              Number  Duration     Number  Duration     Number  Duration    Type
Notícias             7       0:43:52      1       0:10:38      1       0:10:41     Morning news
Jornal da Tarde      8       7:55:46      1       1:13:10      1       1:02:59     Lunch time news
País Regiões         12      6:43:47      1       0:32:49      1       0:33:46     Afternoon news
País Regiões Lx      7       2:16:47      1       0:20:32      1       0:20:16     Local news
Telejornal           30      32:41:34     3       3:37:13      2       1:54:18     Evening news
24 Horas             4       1:18:53      2       1:02:43      2       0:38:35     Late night news
RTP Economia         13      1:53:02      2       0:10:38      2       0:19:58     Financial news
Acontece             9       3:05:38      1       0:19:47      2       0:17:50     Cultural news
Jornal 2             7       4:53:39      1       0:46:04      1       0:38:26     Evening news
Total                97      61:32:58     13      8:13:34      12      5:56:49

3 AUDIMUS.MEDIA Recognition System

AUDIMUS.MEDIA is a hybrid speech recognition system that combines the temporal modeling capabilities of hidden Markov models (HMMs) with the pattern-discriminative classification capabilities of multilayer perceptrons (MLPs). In this hybrid HMM/MLP system a Markov process is used to model the basic temporal nature of the speech signal. The MLP is used as the acoustic model, estimating context-independent posterior phone probabilities given the acoustic data at each frame. The acoustic modeling of AUDIMUS.MEDIA combines phone

probabilities generated by several MLPs trained on distinct feature sets resulting from different feature extraction processes. These probabilities are taken at the output of each MLP classifier and combined by averaging in the log-probability domain [2]. All MLPs use the same phone set, constituted by 38 phones for the Portuguese language plus silence and breath noises. The combination algorithm merges together the probabilities associated with the same phone. We are using three different feature extraction methods and MLPs with the same basic structure, that is, an input layer with 9 context frames, a non-linear hidden layer with over 1000 sigmoidal units, and 40 softmax outputs. The feature extraction methods are PLP, Log-RASTA and MSG.
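A minimal sketch of that frame-level combination (the exact smoothing and renormalization used in AUDIMUS.MEDIA are not spelled out here; the log-domain average below is the standard choice, and all names are ours):

```python
import numpy as np

def combine_posteriors(streams):
    """streams: list of (n_frames, n_phones) posterior arrays, one per
    MLP (e.g. the PLP, Log-RASTA and MSG front-ends). Averages in the
    log-probability domain and renormalizes per frame."""
    log_avg = np.mean([np.log(p + 1e-12) for p in streams], axis=0)
    probs = np.exp(log_avg)
    return probs / probs.sum(axis=1, keepdims=True)

# Example: three streams, 100 frames, 40 outputs (38 phones + 2 noises).
rng = np.random.default_rng(0)
streams = [rng.dirichlet(np.ones(40), size=100) for _ in range(3)]
combined = combine_posteriors(streams)
assert np.allclose(combined.sum(axis=1), 1.0)
```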

4 Language Modeling

During the last few years we have been collecting Portuguese newspapers from the web, which allowed us to build a considerably large text corpus. By the end of 2001 these texts amounted to a total of 24.0M sentences with 434.4M words. A language model generated only from newspaper texts becomes overly adapted to the type of language used in those texts. When such a language model is used in a continuous speech recognition system applied to a Broadcast News task, it will not perform as well as one would expect, because the sentences spoken in Broadcast News do not match the style of the sentences written in newspapers. A language model built from Broadcast News transcriptions would probably be more adequate for this kind of speech recognition task. The problem is that we do not have enough BN transcriptions to generate a satisfactory language model. However, we can adapt a newspaper text language model to the BN task by combining it with a model created from BN transcriptions using linear interpolation, and thus improve performance. One of the models is always generated from the newspaper text corpus, while the other is a backoff trigram model using absolute discounting, based on the training set transcriptions of our BN database. The optimal weights used in


the interpolation are computed using the transcriptions from the development set of our BN database. The final interpolated model has a perplexity of 139.5, while the newspaper model has 148.0. It is clear that even using a very small model based on BN transcriptions we can obtain some improvement in the perplexity of the interpolated model.
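A sketch of the interpolation and of estimating its weight on the development transcriptions; the EM update below is the standard one for a two-component mixture (the toolkit actually used by the authors is not stated, and all names are ours):

```python
import math

def interpolate(p_news, p_bn, lam):
    """Linear interpolation of two language model probabilities."""
    return lam * p_news + (1.0 - lam) * p_bn

def estimate_weight(dev_probs, lam=0.5, iterations=20):
    """dev_probs: list of (p_news, p_bn) pairs, one per word of the
    development set. Returns the weight maximizing dev likelihood."""
    for _ in range(iterations):
        post = [lam * pn / (lam * pn + (1.0 - lam) * pb)
                for pn, pb in dev_probs]
        lam = sum(post) / len(post)
    return lam

def perplexity(word_probs):
    """Perplexity over a held-out word stream (e.g. 139.5 for the
    interpolated model, 148.0 for the newspaper-only model)."""
    return math.exp(-sum(math.log(p) for p in word_probs) / len(word_probs))
```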

5 Vocabulary and Pronunciation Lexicon

From the text corpus with 335 million words, created from all the newspaper editions collected until the end of 2000, we extracted 427k different words. About 100k of these words occur more than 50 times in the corpus. Using this smaller set, all the words were classified according to syntactic classes. Different weights were given to each class, and a subset with 56k words was created based on the weighted frequencies of occurrence of the words. To this set we added essentially all the new words present in the transcripts of the training data of our Broadcast News database then being developed, giving a total of 57,564 words. Currently the transcripts contain 12,812 different words out of a total of 142,547. From the vocabulary we were able to build the pronunciation lexicon. To obtain the pronunciations we used different lexica available in our institution. For the words not present in those lexica (mostly proper names, foreign names and some verbal forms) we used an automatic grapheme-phone system to generate the corresponding pronunciations. Our final lexicon has a total of 65,895 different pronunciations. For the development test set, which has 5,426 different words in a total of 32,319 words, the number of out-of-vocabulary words (OOVs) using the 57k-word vocabulary was 444, representing an OOV word rate of 1.4%.
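For concreteness, the OOV rate quoted above is just the fraction of running words in the test set missing from the vocabulary; a minimal sketch (the function name is ours):

    def oov_rate(test_words, vocabulary):
        """Fraction of running words not covered by the vocabulary."""
        vocab = set(vocabulary)
        oov = sum(1 for w in test_words if w not in vocab)
        return oov / len(test_words)

    # e.g. 444 / 32319 is about 1.4%, matching the figure reported above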

6 Weighted Finite-State Dynamic Decoder

The decoder underlying the AUDIMUS.MEDIA system is based on the weighted finite-state transducer (WFST) approach to large vocabulary speech recognition [3]. In this approach, the search space used by the decoder is a large WFST that maps observation distributions to words. This WFST consists of the composition of various transducers representing components such as the acoustic model topology H, context dependency C, the lexicon L and the language model G. The search space is thus H ◦ C ◦ L ◦ G (we use the matrix notation for composition), and is traditionally compiled outside of the decoder, which uses it statically. Our approach differs in that our decoder is dynamic and builds the search space "on-the-fly" as required by the particular utterance being decoded [4]. Among the advantages provided by the dynamic construction of the search space are better scalability of the technique to large language models, reduction of


the memory required at runtime, and easier adaptation of the components at runtime. The key to the dynamic construction of the search space is our WFST composition algorithm [5], specially tailored for the integration of the lexicon with the language model (both represented as WFSTs). Our algorithm performs the composition and determinization of the lexicon and the language model simultaneously, while also approximating other optimizing operations such as weight pushing and minimization [6]. The goal of the determinization operation is to reduce lexical ambiguity in a more general way than what is achieved with the use of a tree-organized lexicon. Weight pushing allows the early use of language model information, which in turn allows the use of tighter beams, thus improving performance. Minimization essentially reduces the memory required for search while also giving a small speed improvement.
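A minimal sketch of the on-the-fly idea follows (our simplification: epsilon transitions, determinization and weight pushing are omitted, and the transducer interface is an assumption, not the decoder's real API). States of the composed machine are pairs expanded only when the search actually reaches them:

    from functools import lru_cache

    def lazy_compose(A, B):
        """On-the-fly composition of two epsilon-free WFSTs. A and B
        expose arcs(state) -> iterable of
        (input_label, output_label, weight, next_state)."""
        @lru_cache(maxsize=None)
        def arcs(pair):
            a, b = pair
            out = []
            for (i, o, w1, na) in A.arcs(a):
                for (i2, o2, w2, nb) in B.arcs(b):
                    if o == i2:                      # labels must match
                        out.append((i, o2, w1 + w2, (na, nb)))
            return out
        return arcs   # the decoder queries arcs((a0, b0)) on demand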

6.1 Alignment

To allow the registration of time boundaries, the decoder supports a special label EOW on the input side of the search space transducer. Whenever that label is crossed during the search, the decoder records the time of the crossing. The label is thus used to mark word or phone boundaries. Using the EOW labels, the decoder can be used for alignment; in alignment mode the search space is usually built as H ◦ L ◦ S, where S is the orthographic transcription of the utterance being recognized. The fact that the decoder imposes no a priori restrictions on the search space structure gives us great flexibility; for example, alternative pronunciation rules can be used by compiling them into a finite-state transducer R and building the search space as H ◦ R ◦ L ◦ S [7]. The EOW label is also given other uses in the decoder: for example, it can be used as an aid in the construction of word lattices, wherein the labels mark the end of segments corresponding to arcs in the lattice. Another use of the label is to collect word-level confidence features that can be used to compute confidence scores.

6.2 Pruning

Pruning is fundamental to control the search process in large vocabulary recognition. Our decoder uses three forms of pruning: beam pruning, histogram pruning, and phone deactivation pruning. Each form of pruning deals with a different aspect of the search process. Beam pruning is probably the most important, and consists of pruning the hypotheses whose score is worse, by a given amount (the beam), than the best hypothesis ending at a particular frame. This form of pruning is used by most large vocabulary decoders. Our form of beam pruning differs from most in its eagerness, which allows the pruning of hypotheses while they are being generated, by using the cost of the best hypothesis so far as a reference. When a hypothesis with cost ct at time t is


propagated through an edge with input label d and weight w, its cost in the next frame is updated with two components: a transition weight w that incorporates linguistic constraints, and an acoustic weight distr(d, t + 1) obtained from the speech signal. Because the transition weight is often of the same order of magnitude as the beam, we obtain significant improvements by performing the pruning test twice: first the cost ct + w is tested, and then ct + w + distr(d, t + 1). If the first test fails, we avoid the expensive computation of distr(d, t + 1) as well as the bookkeeping associated with the expansion of the hypothesis. The function of histogram pruning is to reduce peak resource usage, both time and memory, to a reasonable limit. It consists of establishing the maximum number m of hypotheses that are expanded at each frame. Whenever their number exceeds the limit, only the m best are kept and the others are pruned. If m is set to a reasonable value (such as 100,000), it has virtually no negative effect on the accuracy of the decoder while preventing it from stalling when there is a severe acoustic mismatch relative to the training conditions. Phone deactivation pruning [8] takes advantage of the fact that the MLP directly estimates the posterior probability of each phone. This form of pruning consists of flooring the posterior probability of a given phone to a very low value when it is below a given threshold. This has the effect of allowing the MLP to deactivate some unlikely phones. There is usually an optimal value for the threshold: if it is too large, the search will be faster but more error prone; if it is too low, the search will be slower with no gain in accuracy (sometimes the accuracy is even worse, due to the MLP having difficulty modeling low probabilities).
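The eager two-stage test can be sketched as follows (a simplified illustration with invented names; the decoder's actual bookkeeping is omitted, and best_next stands for the best cost found so far at frame t+1):

    def expand(hyp_cost, w, d, t, best_next, beam, distr):
        """Two-stage beam pruning: test the cheap linguistic part
        first, and only then pay for the acoustic lookup."""
        if hyp_cost + w > best_next + beam:
            return None            # pruned before the acoustic lookup
        cost = hyp_cost + w + distr(d, t + 1)
        if cost > best_next + beam:
            return None            # pruned after adding the acoustics
        return cost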

7 Speech Recognition Results

Speech recognition experiments were conducted on the development test set, which has over 6 hours of BN data, using a Pentium III 1 GHz computer running Linux with 1 GB of RAM. Table 3 summarizes the word error rate (% WER) evaluation obtained by AUDIMUS.MEDIA. The lines in Table 3 show the increase in performance brought by each successive improvement to the recognition system. The first column of results refers to the F0 focus condition, where the sentences contain only prepared speech with low background noise and good quality audio. The second results column refers to the WER obtained over all test sentences, including noise, music, spontaneous speech, telephone speech and non-native accents, as well as the F0 focus condition sentences. The first line of results in Table 3 was obtained using MLPs with 1000 hidden units and a stack decoder. Comparing with the second line of results, we see that there was a significant increase in performance when we switched to the new WFST dynamic decoder, especially in decoding time, expressed in the last column as real-time speed. The third line of results shows the improvement obtained by substituting the determinized lexicon transducer by one that was

Table 3. BN speech recognition evaluation using the development test set

MLPs   Decoder            % WER F0   % WER All    xRT
1000   stack                 18.3       33.6      30.0
1000   WFST                  18.8       31.6       4.8
1000   + min det L           18.0       30.7       4.3
4000   + min det L           16.9       29.1       3.7
4000   + eager pruning       16.7       28.9       3.9
4000   + shorter HMMs        14.8       26.5       7.6

also minimized. The fourth line shows an 8% relative improvement obtained by increasing the hidden layers to 4000 units. This increase was necessary because the acoustic model MLPs were no longer coping with all the variability present in the Speech Recognition and Pilot corpora used as training data. The fifth line shows the positive effect of the eager pruning mechanism described in Sect. 6.2. Our current system, shown in line six, achieves another 8% relative improvement by decreasing the minimum duration of the phone models by one frame.

8 Concluding Remarks

Broadcast News speech recognition is a very difficult and resource-demanding task. Our recognition engine evolved substantially through the accumulation of relatively small improvements. We are still far from perfect recognition, the ultimate goal; nevertheless, our current technology is able to drive a number of very useful applications, including audio archive indexing and topic retrieval. In this paper we have described a number of improvements that permitted a relative recognition error decrease of more than 20% and a speed-up from 30x real-time to as little as 7.6 xRT.

References

1. Amaral, R., Langlois, T., Meinedo, H., Neto, J., Souto, N., Trancoso, I.: The development of a Portuguese version of a media watch system. In: Proceedings EUROSPEECH 2001, Aalborg, Denmark (2001)
2. Meinedo, H., Neto, J.: Combination of acoustic models in continuous speech recognition. In: Proceedings ICSLP 2000, Beijing, China (2000)
3. Mohri, M., Pereira, F., Riley, M.: Weighted finite-state transducers in speech recognition. In: ASR 2000 Workshop (2000)
4. Caseiro, D., Trancoso, I.: Using dynamic WFST composition for recognizing broadcast news. In: Proc. ICSLP 2002, Denver, Colorado, USA (2002)
5. Caseiro, D., Trancoso, I.: On integrating the lexicon with the language model. In: Proc. Eurospeech 2001, Aalborg, Denmark (2001)
6. Caseiro, D., Trancoso, I.: Transducer composition for "on-the-fly" lexicon and language model integration. In: Proc. ICASSP 2003, Hong Kong, China (2003)
7. Caseiro, D., Silva, F.M., Trancoso, I., Viana, C.: Automatic alignment of map task dialogs using WFSTs. In: Proc. PMLA, ISCA Tutorial and Research Workshop on Pronunciation Modelling and Lexicon Adaptation, Aspen, Colorado, USA (2002)
8. Renals, S., Hochberg, M.: Efficient search using posterior phone probability estimates. In: Proc. ICASSP '95, Detroit, MI (1995) 596–599

Pitch Restoration for Robust Speech Recognition

Carlos Lima, Adriano Tavares, and Carlos Silva
Department of Industrial Electronics, University of Minho, Campus de Azurém, 4800-058 Guimarães, Portugal
{carlos.lima,adriano.tavares,carlos.silva}@dei.uminho.pt

Abstract. This article suggests a spectral-normalization-based method whose purpose is to alleviate the changes that additive noise causes both in the peak structure and in the flat zones of the speech spectrum. The proposed spectral normalization can be viewed as a noise estimate computed on a frame-by-frame basis, assuming the clean database to be lightly corrupted. This noise estimate is then used to restore both the peaked and the flat zones of the speech spectrum. The algorithm was implemented on top of a baseline spectral normalisation method whose purpose is to increase robustness in the lower-energy regions of the speech spectrum, since in these regions the noise is more dominant.

1 Introduction

In [1] it is argued that a proper spectral normalization, one that concentrates essentially on the lower-energy regions of speech, could significantly improve the robustness of speech recognition systems operating under additive noise conditions. The spectral regions with small energy need a large degree of noise robustness since, assuming that the noise is speech-independent, they are the most corrupted. The spectral regions of small energy usually correspond to unvoiced sounds, which are spectrally not very well defined. Roughly speaking, nearly half of the consonants can be classified as unvoiced, while the other half and the vowels are generally classified as voiced. Although the importance of the vowels in the classification and representation of written text is generally low, most practical automatic speech recognition systems rely heavily on vowel recognition to achieve high performance, neglecting the lower-energy speech regions, which perhaps contain the most important degraded information for speech recognition tasks. Other authors [2] have also given increasing importance to the lower-energy regions of the speech spectrum, although using alternative approaches. The algorithm proposed in [1] can be improved by considering the properties of both the voiced and unvoiced speech regions. The voiced speech regions are characterized by peaked spectral zones, which are flattened, while the unvoiced speech regions are raised, as the noise becomes more and more dominant. This joint effect is perhaps the main cause of performance degradation under additive noise conditions.


The algorithm proposed in this paper tries to partially restore both the original spectral peaks and the flat spectral zones of a normalized speech spectrum. This approach assumes the clean database to be lightly contaminated, and the noise power is estimated on a frame-by-frame basis as the lowest power of all the sub-bands in each segment. The speech spectrum is normalized (by subtraction) by its smallest component, which is proportional to the noise level and carries little speech dependence. The algorithm does not assume the existence of noise, in the sense that the features are extracted exactly the same way in both noisy and noise-free conditions.

2 Noise Effect on the Baseline Spectral Normalization Domain

The main goal of a robust feature extraction method is to provide robustness against noise and other sources of variability while ignoring their presence. Although the noise can be compensated, the effectiveness of that approach becomes very dependent on the accuracy of the noise estimate, which is very hard to obtain in practical situations. Hence our main goal was to find a compensation process independent of the noise level and characteristics, although the proposed baseline normalisation assumes wide-band additive noise for maximal performance. More details can be found in [1]. For task uniformity in both clean and noisy conditions, the clean database must be considered lightly contaminated. Trying to clean the database, which can be viewed as another kind of normalisation, is a procedure compatible with the noise compensation paradigm; here, however, the noise compensation follows the spectral normalization procedure. We propose to estimate the second normalisation factor (the first normalisation factor is part of the baseline normalisation procedure [1]) by taking the value of the smallest component of the power spectral density in each speech frame. The noise effect can then be alleviated by taking into account that the peaked spectral regions are flattened and the flat spectral regions are raised by the noise, as can be seen in [1]. This type of procedure presumes an efficient peak detector, which must be able to distinguish peaks of voiced nature (pitch) from weak peaks occurring in the low-energy speech regions, where the baseline system [1] is effective at attenuating the additive noise effect. This peak classification suggests the use of thresholds, and the key question is how to calculate the threshold level. Based only on practical considerations, especially the inspection of the selected peaks, we concluded that, roughly speaking, a peak whose energy is at least three times the mean of the remaining components in the frame should be classified as a true peak. Otherwise the selected peak must be ignored, in order to preserve the benefits of the baseline normalisation [1] on low-energy segments.
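A minimal sketch of such a detector under the three-times-the-mean heuristic (our own illustration; the paper does not give an implementation):

    import numpy as np

    def true_peaks(S, factor=3.0):
        """Classify sub-band peaks: a local maximum counts as a true
        (pitch-related) peak only if its energy is at least `factor`
        times the mean of the remaining components of the frame.
        S is one frame of sub-band powers."""
        peaks = []
        for i in range(1, len(S) - 1):
            if S[i] > S[i - 1] and S[i] > S[i + 1]:   # local maximum
                rest = np.delete(S, i)
                if S[i] >= factor * rest.mean():
                    peaks.append(i)
        return peaks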

3 Proposed Noise Compensation

To cope simultaneously with the noise effect on the peaked and on the flat spectral regions, we must consider two types of compensation procedures: one for the peaked and one


for the flat spectral regions. The flat spectral regions are raised by the noise, so we suggest subtracting the smallest component from every other component of the observed vector; we are thus implicitly assuming wide-band noise. The procedure should be improved in the future to account for narrow-band noise. To accommodate this second type of normalisation while maintaining compatibility between the two, the feature extraction described in [1] must be changed so that

$$c_i = \begin{cases} \dfrac{S_i - \min_k\{S_k\}}{S}, & S_i \neq \min_k\{S_k\} \\ \dfrac{S_i}{S}, & \text{otherwise} \end{cases} \qquad (1)$$

where S_i and S denote, respectively, the power in sub-band i and the power of the considered speech segment. The noise compensation in the peaked spectral regions is made by increasing the speech coefficients that were decreased (flattened) by the noise. Assuming clean speech (not lightly contaminated speech), the speech features are related [1] by

$$\sum_{i=1}^{B} c_i = 1 \qquad (2)$$

where B is the number of sub-bands. For a speech frame in which n peaks are detected, these peaks must be increased by a noise-dependent factor. Assuming that each spectral sub-band was decreased proportionally to its value, which appears to hold from the analysis of Figure 1 in [1], the noise compensation can be made by computing c_j as

$$c_j = \frac{S_j}{S}\left(1 + \frac{(B-n)\,\min_k\{S_k\}}{\sum_{j=1}^{n} S_j}\right) \qquad (3)$$

Therefore, the energy subtracted in the flat spectral regions is restored in the peaked zones in order to invert the additive noise effect, while the sum of all the speech features in each frame is kept at unity, as assumed by the baseline spectral normalisation.
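Putting Eqs. (1) and (3) together, a frame-level sketch might look like this (an assumed NumPy implementation with our own variable names; peak detection as sketched above):

    import numpy as np

    def compensate(S, peaks):
        """S: one frame of sub-band powers; peaks: indices classified
        as true peaks. Returns the compensated features c."""
        total = S.sum()
        floor = S.min()
        n, B = len(peaks), len(S)
        c = (S - floor) / total          # Eq. (1): subtract noise estimate
        c[S == floor] = floor / total    #          smallest component kept
        if n > 0:                        # Eq. (3): return the removed
            gain = 1.0 + (B - n) * floor / S[peaks].sum()
            c[peaks] = S[peaks] * gain / total   # energy to the n peaks
        return c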


4 Experimental Results

The proposed algorithm was tested in an isolated word recognition system using continuous-density hidden Markov models. The database of isolated words used for training and testing is from AT&T Bell and is described in more detail in [1]. The noise has white-noise characteristics, is speech-independent, and was computationally generated at the various SNRs shown in Table 1. The goal is to compare the performance of the method proposed in this article with that of the baseline spectral normalization method used alone. A comparison of the baseline spectral normalization with contemporary robust speech features can be found in [1]. In Table 1, Norm. stands for the baseline spectral normalisation algorithm proposed in [1], PR stands for the pitch restoration algorithm proposed in this paper, and N.+MMC stands for the conventional Markov model composition technique applied over the baseline spectral normalisation.

Table 1. Performance (%) of the pitch restoration algorithm under a test set of 400 digits

SNR (dB)    15      10      5       0       -5
Norm.       98.5    97.75   93.75   88      42.5
PR          99.25   98.25   95      89.75   61.5
N.+MMC      99.5    98.75   97.25   92.25   84.75

Table 1 shows that the pitch restoration procedure is really effective in situations where the noise is unknown. However, as expected, if the noise is known (which is not frequently the case in practical situations), the Markov model composition over the baseline spectral normalization overtakes the pitch restoration method suggested in this article.

5 Discussion

The main advantage of this multi-normalisation process is the recognition performance obtained when no knowledge of the noise statistics exists. As a robust feature extraction method, the baseline method [1] seems superior to those most used nowadays, and its performance can be further increased by the method suggested in this paper for restoring the peak structure of the speech spectrum. If isolated noise samples exist, the noise can be estimated and this knowledge can be incorporated into the system, consequently increasing the recogniser's performance.


References

1. Lima, C., Almeida, L.B., Monteiro, J.L.: Improving the Role of Unvoiced Speech Segments by Spectral Normalisation in Robust Speech Recognition. In: 7th International Conference on Spoken Language Processing (ICSLP 2002)
2. Raj, Bhiksha: Reconstruction of Incomplete Spectrograms for Robust Speech Recognition. Ph.D. Thesis, Department of Electrical and Computer Engineering, Carnegie Mellon University (2000)

Grapheme-Phone Transcription Algorithm for a Brazilian Portuguese TTS

Filipe Barbosa¹, Guilherme Pinto¹, Fernando Gil Resende¹, Carlos Alexandre Gonçalves², Ruth Monserrat², and Maria Carlota Rosa²

¹ Escola Politécnica, Universidade Federal do Rio de Janeiro, Brazil
{filipe,guilherme,gil}@lps.ufrj.br
² Faculdade de Letras, Universidade Federal do Rio de Janeiro, Brazil
[email protected], [email protected], [email protected]

Abstract. This paper describes one aspect of ongoing research to improve the synthetic speech quality of a Brazilian Portuguese text-to-speech synthesis system, focusing on the grapheme-phone transcription algorithm. A complete set of rules for every grapheme is presented. Experimental results, obtained by running the proposed algorithm over a text database, yielded an accuracy rate of 98.43%.

1 Introduction

A text-to-speech (TTS) synthesizer is a system that should be able to read, with some degree of intelligibility and naturalness, any text in a given language, Brazilian Portuguese (BP) in the case of this paper. As simple as it seems, this definition poses a problem from the outset: which accent to choose among the different accents of BP. While the variety of Portuguese spoken in Rio de Janeiro was preferred over any other in Brazil at the beginning of the 20th century [1,2,3], its status has been changing drastically since the last decades of that century [4,5], and it is now considered overly nasalized and sibilant. So, in place of a no-longer-prestigious accent, it was decided to provide the model with a neutral accent, understood as an accent that most Brazilian speakers could identify with. According to Ramos [4], the accent of Jornal Nacional, the television news broadcast with the largest audience in Brazil, is felt to different degrees, by speakers in many parts of the country, as representative of their own dialects. This work describes a complete transcription algorithm intended to support a BP TTS with a neutral accent. The proposed grapheme-phone transcription algorithm was tested using a randomly selected part of the CETEN-Folha text database [6]. The transcribed phones were checked, and 98.43% were correctly transcribed in a set of 7805 phones from 1364 words. In this paper, the SAMPA [7] phonetic alphabet is used with a few extensions that we propose for BP. Official BP orthography is represented in italics.


The remainder of this paper is organized as follows. Section 2 presents the transcription rules. Section 3 discusses experimental results, and Sect. 4 focuses on our conclusions and future work.

2 The Transcription Rules

This section presents the symbols used in the transcription algorithm, followed by the rules for every grapheme. Table 1 contains the symbolization adopted for the description of the grapheme rules. As some units needed for our BP transcription algorithms are not listed among the SAMPA phonetic units, we propose the units listed in Table 2.

2.1 The Transcription Rules

The starting point for the transcription was the phone list proposed by Alcaim, Solewicz & Moraes [8], modified in Pinto, Barbosa & Resende Jr. [9]. The rules for transcribing a given grapheme into the corresponding phone are shown in Tables 3–13. These tables are organized in blocks of rules per grapheme. The left column has the grapheme sequences, the middle column contains the phone transcription, and the right column gives one example for each rule.

Table 1. Table of symbols used to express the transcription rules

  ...       Any character (punctuation or grapheme)
  <x>       A given grapheme x
  [y]       A given acoustic unit y
  Hf        Hyphen
  Sp        Space between words
  Pnt       Punctuation ( , . ! ? ; )
  W_bgn     Beginning of word
  V         Any vowel
  V_s       Stressed vowel
  V_us      Unstressed vowel
  C         Any consonant
  C_v       Voiced consonant
  C_uv      Unvoiced consonant
            Any consonant except <h>
  STATE     Modifies x when the transcription of the following grapheme is [y]; possibilities: V_us, V_s or W_bgn
  <x1,x2>   Graphemes x1 or x2 or ...

Table 2. List of SAMPA extension units proposed for BP

  [fS]   afta       [bZ]   abnegado     [i m]   mbiá
  [kS]   krenak     [gZ]   magnífico    [m i]   amnésia
  [pS]   piano      [vZ]   Ambev        [i n]   ntogapide

Table 3. Table of rules for graphemes <a> and <b>

Rules for <a>:
  [6]     Ela gostou do doce.
  [6~]    Ele pertence à Febraban.
  [6~w]   É provável que cresçam.
  [6~]    antropofagia, amperímetro
  [6~]    cama, banho
  [a]     antártica
  [6~]    amanhã, âmago
  [6~w]   avião
  [a]     aracnofobia

Rules for <b>:
  [bZ]    abnegado
  [bZ]    Ele é o sub.
  [b]     abacate

Table 4. Table of rules for graphemes <c> and <d>

Rules for <c>:
  [kS]   icterícia
  [kS]   Aquele é o tic-tac.
  [s]    aceitar; jacinto
  [s]    almoço
  [S]    acho
  [k]    claro

Rules for <d>:
  [dZ]   dia; Ela irá à tarde.
  [dZ]   advogado
  [dZ]   Irão usar o raid.
  [d]    dote

Table 5. Table of rules for graphemes <e> and <f>

Rules for <e>:
  [e~]   embora; entoação
  [e~]   tema, contém, têm
  [e]    bebê
  [E]    época
  [i]    O índice é alto.
  [i]    Essas são as frases.
  [e]    errado

Rules for <f>:
  [fS]   afta
  [fS]   Fez um pif-paf.
  [f]    faca

Table 6. Table of rules for graphemes <g> and <h>

Rules for <g>:
  [gZ]   magnífico
  [gZ]   Foram os caças mig.
  [Z]    gelo, giz
  [g]    guindaste, guerreiro
  [g]    garoto

Rules for <h>:
  []     hoje

Table 7. Table of rules for graphemes <i>, <j>, <k> and <l>

Rules for <i>:
  [i~]   cacimba, sinto
  [i~]   Fomos sim; É o professor Amin.
  [i~]   imaginar, ninho
  [j]    sai
  [i]    saí

Rules for <j>:
  [Z]    jambo

Rules for <k>:
  [kS]   O que é krenak?
  [k]    kulina

Rules for <l>:
  [l]    ala
  [L]    alho
  [w]    vogal

Table 8. Table of rules for graphemes <m> and <n>

Rules for <m>:
  [m i]  amnésia
  [i m]  mbiá
  [J]    Quem é?; Alguém usou; Eles têm amor; Tim está pronto.
  [j~]   Que eles saquem; Está com alguém; Eles têm.
  [m]    mameluco

Rules for <n>:
  [i n]  ntogapide
  [j~]   Bonito abdômen.
  [J]    ganho
  [n]    nata

Table 9. Table of rules for graphemes <o> and <p>

Rules for <o>:
  [O]    vovó
  [o]    vovô
  [o~]   corações
  [o]    co-produção
  [u]    O músico é feliz.
  [o~]   ontem, compositor
  [o~]   Comprar na Kibon; Está no tom.
  [o~]   soma, sono
  [u]    carros
  [o]    escopo

Rules for <p>:
  [pS]   pneu
  [pS]   piano
  [f]    esphera
  [p]    pato

Table 10. Table of rules for graphemes <q> and <r>

Rules for <q>:
  [kS]   quito, quente
  [k]    quando

Rules for <r>:
  [R]    carros
  [R]    Pomar rodeado de flores.
  [R]    A rua foi interditada.
  [r]    ratoeira
  [r]    Falta acertar apenas uma; É preciso faltar hoje.
  [R]    Injetar grãos de arroz.
  [R]    carga
  [X]    Ela irá se lascar.

Within each grapheme block, individual rules are (a) disjunctively ordered, so that once a rule has applied all the others are skipped; and (b) layered in the order they are checked, so that the last rule for every grapheme is applied whenever none of the other rules apply, i.e., it is the default rule. For a given text to be transcribed, the algorithm maps each grapheme into its respective acoustic unit following the order of appearance of the graphemes. For every grapheme of the sentence, the corresponding algorithm is called, concatenating the generated acoustic unit into the transcribed sentence. It is worth noting that a rule can skip the next grapheme to be analyzed, such as in the fourth rule of the grapheme <a> algorithm, where both graphemes <an> or <am> are associated with the phone [6~]. For the grapheme <h>, listed in

Table 11. Table of rules for graphemes <s> and <t>

Rules for <s>:
  [z]    asa
  [z]    transgredir
  [s]    assar
  [s]    crescer, crescido
  [s]    cresçam
  [S]    shiatsu
  [Z]    Eles jogaram bola.
  [z]    Os aros são cromados; Os helicópteros são aerodinâmicos; Elas gostam disso.
  [z]    transação, trânsito
  [z]    obséquio, obsequioso
  [s]    Eles ficaram com o prêmio.

Rules for <t>:
  [tS]   algoritmo
  [tS]   Ruth
  [t]    Arthur
  [tS]   tia, mete
  [tS]   Aquele set foi difícil.
  [t]    tato

Table 12. Table of rules for graphemes <u>, <v> and <w>

Rules for <u>:
  [w]    lingüística
  [u~]   abundante, retumbante
  [u~]   Vamos até Kunlun; Ele come atum
  [u~]   uma, unha.
  [u]    Ubirajara

Rules for <v>:
  [vZ]   Ambev
  [v]    voando

Rules for <w>:
  [w]    watt

Table 6, no phonetic representation is used, as this grapheme is not pronounced in Portuguese. It is also important to highlight that a given grapheme can be mapped into more than one acoustic unit, as can be seen in Table 13. In the fifth rule of that table, the grapheme <x> is mapped into two acoustic units, [kS][s], as it also is in the sixth rule, whenever the word containing <x> belongs to the exception list of Table 14.


Table 13. Table of rules for graphemes <x>, <y> and <z>

Rules for <x>:
  [z]        execrar, inexistência
  [s]        excêntrico, êxtase, inexpressivo
  [z]        ex-amigo
  [s]        ex-presidente
  [kS][s]    Ele era o Marx.
  (phone from exception list)   Exceptions in Table 14
  [S]        faxina

Rules for <y>:
  [i]    Yguaçu
  [j]    Yanomami

Rules for <z>:
  [s]    Ferraz furou o ferro.
  [z]    Faz anos que não o vejo; Ferraz gosta de ferro; Faz horas que o vejo.
  [s]    Isso não se faz.
  [z]    zebra

Table 14. Table of exceptions for grapheme <x> (phone of rule 8 for <x>: [kS][s])

List of words associated: oxítono, oxidar, complexo, reflexo, anexar, oxigênio, oxiúro, oxalato, úxer, uxoricida, axila, axiologia, íxia, taxi, sintaxe, ixofagia, ixomielite, ixolite, ixômero, ixora, ixoscopia, ox-acético

3 Experimental Results

The proposed rules were implemented and tested using part of the text from the CETEN-Folha database [6]. The phones generated by the algorithm were checked, and 98.43% of them were correctly transcribed. A summary of the errors can be seen in Table 15.

Table 15. Table of errors on mapping the graphemes

Error type                       Occurrences   Occurrence (%)
[e] or [E] misplaced                 22            0.28%
[o] or [O] misplaced                 18            0.23%
Incorrect foreign word phones        31            0.40%
Diphthongs                           35            0.44%
Incorrect acronym phones             17            0.22%

As can be seen from Table 15, the errors come from diphthongs, foreign words, acronyms, and confusion between [O] and [o], and between [E] and [e]. Rules to handle these cases are the subject of ongoing research.

4 Conclusions

This paper presents rules for generating a sequence of phones from a grapheme sequence, to be applied in a BP TTS system. The proposed rules were tested using part of the CETEN-Folha text database, yielding 98.43% of correctly transcribed phones. Present research concentrates on proposing rules to deal with foreign words, names, diphthongs, and the decision between the phones [O] and [o], and [E] and [e]. Rules to discriminate the different levels of nasality and stress for some acoustic units, such as [6] and [6~], are also the subject of future work.

References

1. Cunha, Celso: Língua portuguesa e realidade nacional. 2a ed. atualiz. Rio de Janeiro: Tempo Brasileiro (1970)
2. Anais do Primeiro Congresso Brasileiro de Língua Falada no Teatro (1958)
3. Anais do Primeiro Congresso da Língua Nacional Cantada (1938)
4. Ramos, Jânia M.: Avaliação de dialetos brasileiros: o sotaque. In: Revista de Estudos da Linguagem. Belo Horizonte: UFMG, 6(5):103–125, jan.–jun. (1997)
5. Almeida, M.J.A.: Étude sur les attitudes linguistiques au Brésil. Université de Montréal (1979)
6. Corpus de Extractos de Textos Electrônicos NILC/Folha de São Paulo (CETEN-Folha). http://acdc.linguateca.pt/cetenfolha/
7. Speech Assessment Methods Phonetic Alphabet (SAMPA). http://www.phon.ucl.ac.uk/home/sampa
8. Alcaim, A., Solewicz, J., Moraes, J.: Frequência de ocorrência dos fones e listas de frases foneticamente balanceadas no Português falado no Rio de Janeiro. Revista da Sociedade Brasileira de Telecomunicações 7(1):23–41
9. Pinto, G.O., Barbosa, F., Resende Jr., F.G.: A Brazilian Portuguese TTS based on HMMs. In: Proceedings of the International Telecommunications Symposium, Natal, Rio Grande do Norte (2002)

Improving the Accuracy of the Speech Synthesis Based Phonetic Alignment Using Multiple Acoustic Features

Sérgio Paulo and Luís C. Oliveira
L2F Spoken Language Systems Lab, INESC-ID/IST
Rua Alves Redol 9, 1000-029 Lisbon, Portugal
{spaulo,lco}@l2f.inesc-id.pt
http://www.l2f.inesc-id.pt

Abstract. The phonetic alignment of spoken utterances for speech research is commonly performed by HMM-based speech recognizers in forced alignment mode, but training the phonetic segment models requires considerable amounts of annotated data. When no such material is available, a possible solution is to synthesize the same phonetic sequence and align the resulting speech signal with the spoken utterances. However, without a careful choice of the acoustic features used in this procedure, it can perform poorly when applied to continuous speech utterances. In this paper we propose a new method to select the best features to use in the alignment procedure for each pair of phonetic segment classes. The results show that this selection considerably reduces segment boundary location errors.

1 Introduction

Phonetic alignment plays an important role in speech research. It is needed in a wide range of applications, from the creation of prosodically labelled databases for research into natural prosody generation, to the creation of training data for speech recognizers. Furthermore, the development of many corpus-based speech synthesizers [1,2] requires large amounts of annotated data. Manual phonetic alignment of speech signals is an arduous and very time-consuming task. Thus, the size of the speech databases that can be labelled this way is obviously very constrained, and the creation of large speech inventories requires some sort of automatic method to perform the phonetic alignment. While building a system to automatically align a set of utterances, two different problems are found. First, we have to know the sequence of phonetic segments observed in those utterances. Then, we have to locate the segment boundaries. The sequence of segments can be obtained by using a pronunciation dictionary or by applying a set of pronunciation rules to the orthographic transcription of the utterances. However, it is usually not possible to predict the exact sequence uttered by the speaker, and we must take into account possible disfluencies,


elisions, allophonic variations, etc. In this work, we will assume that we already have the right sequence of segments, and we will focus on the task of locating the segment boundaries. Several approaches have been taken to solve this problem. The most widely explored technique is the use of HMM-based speech recognizers (sometimes hybrid systems, based on HMMs and artificial neural networks) in forced alignment mode. This approach relies on the use of phone models built under the HMM framework. These models are trained using large amounts of labelled data, recorded from several speakers, to take into account the phone's acoustic properties in very different contexts. For single-speaker databases, the performance of the system can be improved by adapting the speaker-independent models to the speaker's voice. The difficulty of this approach is that it requires the availability of segmented data for the speaker. This material must be annotated following strict segmentation rules so that the resulting system can locate segment boundaries with the necessary precision. When no such system is available, a Dynamic Time Warping (DTW, [3]) based approach can be taken. This technique was used in the early days of speech recognition to compare and align a spoken utterance with pre-recorded models, taking into account possible variations in the speaker's rhythm. The recognized utterance corresponded to the model with the minimum accumulated distance after the alignment. The same methodology can be used for the phonetic alignment problem, as described in [6] and [7]. This procedure, also known as speech synthesis based phonetic alignment, starts by producing a synthetic speech signal with the desired phonetic sequence, which allows us to know the exact location of the phonetic segment boundaries. This can easily be achieved using a modified speech synthesizer. The next step is to compute, every few milliseconds, vectors of acoustic features for both the synthetic and natural speech signals. Using some type of distance measure, the acoustic feature vectors can be aligned with each other by the DTW algorithm. The result of the algorithm is a time alignment path between the synthetic and natural signal time scales, which allows us to map the segment boundaries from the synthetic signal onto the natural utterance. This approach does not require any previously segmented speech from the same speaker, but the results depend, to some extent, on the similarity between the synthesizer's and the speaker's voice, and they should have, at least, the same gender. The performance of this method is strongly dependent on the selection of the acoustic features used in the alignment procedure and on the distance used to compare them. This work is part of an effort to automate the process of multi-level annotation of speech signals. A complete view of this problem can be found in [4]. In this paper, we describe our work on the use of different features to improve the performance of a DTW-based phonetic alignment algorithm. The results of this study lead us to a new method that uses multiple acoustic features depending on the class of segments to be aligned. The paper is divided into five sections. The next section describes the process for producing the synthetic reference signal with segmentation marks. The


following section describes an automatic procedure for the selection of the most relevant acoustic features. These results are then applied in the next section, where the alignment procedure is described. The final section compares the results of the new method with a traditional approach.

2 Waveform Generator

An important issue in DTW-based phonetic alignment is the generation of the reference speech signal. This can be achieved by using a speech synthesizer modified to produce the desired phonetic sequence together with the segment boundaries. The problem with this solution is that the signal processing required to impose the rhythm and intonation determined by the prosody module also introduces distortions in the synthetic signal. For our purposes, these prosodic modifications are not necessary, and a simple waveform concatenation system was used. Since our goal was to locate segment boundaries, we used diphones as concatenation units. This way, the concatenation distortion is located in the middle of the phone and does not affect the signal at the phone boundary. For the system to be general purpose, it must be able to produce any phonetic sequence, and the inventory must contain all the possible diphones of the language. We followed the common approach of generating a set of nonsense words (logathomes) containing all the required diphones in a context that minimizes co-articulation with the surrounding phones. A speaker was asked to read the logathomes in a soundproof room and was recorded using a head-mounted microphone in order to keep the recording conditions reasonably constant across sessions. We also asked the speaker to keep a constant intonation and rhythm. The recorded material was then manually annotated. We used the unit selection module of the Festival Speech Synthesis System [8] to perform the concatenation. A local search is made around the diphone boundaries to find the best concatenation point. We used the Euclidean distance between Line Spectral Frequencies (LSF) for costing the spectral discontinuities of the speech units.

3 Acoustic Features

We considered some of the most relevant acoustic features used in speech processing: the mel frequency cepstrum coefficients (MFCC) and their differences (deltas), the four lowest resonances of the vocal tract (formants), the line spectral frequencies (LSF), the energy and its delta, and the zero crossing rate of the speech signal. Both the energy and the MFCC coefficients, as well as their deltas, were computed using software from the Edinburgh Speech Tools Library [9]. The formants were computed using the formant program of the Entropic Speech Tools [10], and the remaining features were computed using our own programs.


Our first experiments showed that each of these features, used separately, produced uneven results. Depending on the class of phones to be aligned, some features proved better than others. For instance, in a vowel-plosive transition, energy was the best performer, but for a vowel-vowel transition, the best results were achieved using formants. This immediately suggested the use of multiple features to distinguish the different phone transition classes.

3.1 Feature Normalization

The combination of multiple features requires a previous normalization step to equalize their influence on the overall alignment cost. It was decided to normalize the values to the range [0, 1]. The first stage was to determine which of the features had values following a Gaussian distribution. Observing the histograms of each coefficient, the MFCCs and their deltas were the only ones that matched that distribution. The mean and standard deviation were computed for each one of them, and the normalization was then performed using the equation:

$$x_i = \frac{X_i - \mu_i}{2\sigma_i} + \frac{1}{2} \qquad (1)$$

where x_i, X_i, mu_i and sigma_i are the normalized value, the non-normalized value, the mean value, and the standard deviation of the i-th MFCC, respectively. The LSF values were divided by pi. Since the zero crossing rate was computed by evaluating the ratio between the number of times the speech signal crosses zero magnitude and the number of signal samples in a fixed-size window (some milliseconds), its values already have the right magnitude (between 0 and 1). For the energy, its delta, and the formants, maximum and minimum values were found for each utterance, and their mean values were computed (Y_i^max and Y_i^min). The normalized values were calculated using the following equation:

$$y_i = \frac{Y_i - Y_i^{\min}}{Y_i^{\max} - Y_i^{\min}} \qquad (2)$$
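A compact sketch of these normalizations (our own illustration, assuming per-utterance NumPy arrays with frames along the first axis):

    import numpy as np

    def normalize_features(mfcc, lsf, zcr, energy, formants):
        """Scale all features to roughly [0, 1] as described above."""
        # Gaussian-distributed features: Eq. (1)
        mu, sigma = mfcc.mean(axis=0), mfcc.std(axis=0)
        mfcc_n = (mfcc - mu) / (2 * sigma) + 0.5
        lsf_n = lsf / np.pi          # LSFs lie in (0, pi)
        zcr_n = zcr                  # already a ratio in [0, 1]
        # Min-max features: Eq. (2), computed per utterance
        def minmax(y):
            return (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))
        return mfcc_n, lsf_n, zcr_n, minmax(energy), minmax(formants)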

3.2 Feature Selection Procedure

Having all the features normalized, the next goal was to find which were more relevant in a given phonetic context, that is, which feature allowed us to locate the boundary with greater precision. For this purpose we had a set of 300 manually aligned utterances used to evaluate the relevance of each feature. These utterances were spoken by a different speaker than the one used to record the diphone inventory. The waveform generator previously described was used to produce reference synthetic signals for the phonetic sequences of these utterances, and sets of feature vectors were computed every 5 milliseconds for both the reference and spoken signals. Using the Euclidean distance, a matrix was computed with the distances between all the feature vectors of the two series. Figure 1 shows a rough representation of this matrix. We then evaluated each distance on its capacity to

[Fig. 1. Graphical representation of the distance matrix regions used for choosing the best feature / pair of features to align the different pairs of phonetic segments. Axes: frames of the recorded speech signal vs. frames of the synthesized speech signal; the matrix is divided into 16 numbered regions by the phone boundaries of the sequence # (silence), u (vowel), i (vowel), # (silence).]

discriminate between two consecutive phones. This was achieved by computing the average distance between feature vectors of the same phone (dist_s) and of different phones (dist_d). Using the example in Fig. 1, if we want to choose an acoustic feature to distinguish the silence (#) from the vowel u, dist_s is the average of the values in regions 1 and 6 of the matrix, while dist_d is the average of the values in regions 2 and 5. This procedure was performed for every pair of phones and for every utterance in the training set, and the resulting values were saved at the end of each iteration. Finally, we computed an average value of the ratio between dist_s and dist_d for each pair of phonetic segments and for each acoustic feature. The chosen feature is the one that minimizes this ratio:

$$F_k = \arg\min_x \sum_{i=1}^{N_k} \frac{\mathrm{dist}_s(k, x, i)}{\mathrm{dist}_d(k, x, i)} \qquad (3)$$

where x is one of the tested features, k represents the pair of phones being analyzed, N_k is the number of instances of this pair in our set of utterances, F_k is the best feature for this type of transition, and dist_s(k, x, i) and dist_d(k, x, i) are the mean distances for instance i using acoustic feature x. The smaller that ratio, the greater the probability of having well-aligned frames, at least locally. With this approach, we are trying to use the features that assign the greatest penalty to alignment paths when they fall outside the darkest regions of Fig. 1 (regions 1, 6, 11 and 16). Given the reduced amount of training data, we soon realized that it would be impossible to have a large enough number of instances of each pair of segments to produce confident results. Thus the different phonetic segments were grouped into phonetic classes: vowels, fricatives, plosives, nasals, liquids and silence.
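For one phone pair and one candidate feature, the ratio in Eq. (3) can be computed from the four blocks of the distance matrix sketched in Fig. 1 (a minimal illustration; the variable names are ours):

    import numpy as np

    def same_diff_ratio(D, i, j):
        """Given the distance matrix D for one phone pair, with the
        boundary after synthetic frame i and after recorded frame j,
        average the same-phone regions (diagonal blocks) and the
        different-phone regions (off-diagonal blocks) and return
        dist_s / dist_d."""
        dist_s = np.concatenate([D[:i+1, :j+1].ravel(),
                                 D[i+1:, j+1:].ravel()]).mean()
        dist_d = np.concatenate([D[:i+1, j+1:].ravel(),
                                 D[i+1:, :j+1].ravel()]).mean()
        return dist_s / dist_d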

Table 1. Best feature pairs for the multiple phonetic segment class transitions

             Nasals     Fricatives   Liquids    Plosives    #           Vowels
Nasals       frm+lsf    mfcc+zcrs    frm+en     lsf+en      frm+en      mfcc+mfcc
Fricatives   lsf+lsf    mfcc+en      en+zcrs    lsf+en      zcrs+en     lsf+lsf
Liquids      lsf+en     lsf+en       lsf+lsf    mfcc+en     mfcc+en     frm+mfcc
Plosives     lsf+en     lsf+lsf      lsf+en     mfcc+mfcc   lsf+zcrs    mfcc+en
#            lsf+en     lsf+en       lsf+en     lsf+en      x           lsf+en
Vowels       mfcc+en    zcrs+lsf     mfcc+en    lsf+en      mfcc+en     frm+mfcc

The semi-vowels were grouped into the class of the vowels. The described procedure for differentiating the phones was then repeated using phone class transitions (vowel-vowel, fricative-vowel, etc.). The analysis of the results showed that, in general, for each phone class transition, at least two of the selected features showed good discriminative capacity. This could suggest some equivalence between the two features, but it could also mean that they were complementary. We therefore performed a combined optimization to select the pair of features for each phone class pair. The process could be extended to combinations of even more features, but the results showed no significant improvement in using more than a pair of features. Table 1 shows the results of this procedure, where mfcc, lsf, frm, en and zcrs stand for the MFCC coefficients and their deltas, the LSFs, the formants, the energy and its delta, and the zero crossing rate, respectively. The x symbol means that the class transition does not exist in the training set. The best feature pair for a transition x-y is located in the row of x and the column of y.

4 Frame Alignment

Before applying the DTW algorithm, the distance matrix between the reference and the spoken signal must be built. Since we know the boundary locations of the synthetic segments, the distance matrix can be built iteratively, phone-pair by phone-pair. Taking the example shown in Fig. 1, to build the distance matrix we start by computing the matrix values for all the rows that correspond to the phone pair #-u, using the best pair of features according to the former results. However, the phone u also belongs to the next phone pair (u-i), so the computed distance is multiplied by a decreasing triangular weighting window. The distance for the next phone pair (u-i) is then computed using the best pair of features for the vowel-vowel transition, and its value is added to the rows corresponding to segment u, weighted by an increasing triangular window. Figure 2 shows these triangular weighting windows, where the dotted lines are the weighting factors of the previous phone-pair distances and the dashed lines are the weights of the distances for the next phone pair. After computing all the values of the distance matrix, the DTW algorithm is applied to find the path that links the top left corner of the matrix to the lower right corner with minimum accumulated

[Fig. 2. Graphical representation of the operations needed to build the distance matrix: frames of the synthesized signal (phones #, u, i, #, with features x, y, z) against frames of the recorded signal, with triangular scale factors varying between 0 and 1.]

5

Results

The procedure described in the previous section was applied to the reference corpus of 300 manually annotated sentences. The results are depicted in Fig. 3 where the lower solid line is the annotation accuracy when the entire set is aligned using always a feature vector 12 Mel-frequency cepstrum coefficients and their differences. Only 46% of the phonetic segments were aligned with an error less than 20 ms. Using only the best feature for computing the distance for each phone class pair increases the 20ms accuracy to 59% of the segments (dashed line). This result can be improved to 70% by combining two features for computing the distance measure. The relatively low percentage of agreement for tolerances lower than 20ms can be partially explained by the fact that the segmentation criteria used in the annotation of the reference corpus was not exactly the same as the one used in the segmentation of the logathomes used to produce the synthetic reference. Another difficulty was that the speech material in the reference corpus was uttered by a professional speaker with a very rich prosody and large variations in energy, where several consecutive voiced speech segments become unvoiced. This is, in our opinion the main reason for about 4% of disagreement within high tolerances (about 100 milliseconds). We hope to detect this alignment problems with some confidence measures based on the alignment cost per segment and by phone duration statistics. As soon as we have more annotated material we also plan to

38

S. Paulo and L.C. Oliveira

Automatic/Manual Agreement(%)

100

90

80

70

60

50

Best Feature−Pair 40

Best Feature

30

MFCC+deltas

20

10

0

10

20

30

40

50

60

70

80

90

100

Maximum Allowed Error(msec). Fig. 3. Accuracy of some versions of the proposed algorithm and a classic DTW-based phonetic alignment algorithm

evaluate the annotation accuracy on a corpus for which we have not optimized the feature selection, in order to test the generality of the selected features.

6 Conclusions

In this work we have presented a method for selecting the most relevant pair of features for aligning two speech signals with the same phone pairs but different durations. These features were then used in a DTW-based method for performing the phonetic alignment of a spoken utterance. The results clearly show the advantage of selecting the most appropriate features for each class of segments in the alignment of two utterances: the most commonly used feature, MFCCs, performed well below the proposed method.

Acknowledgements. The authors would like to thank M. Céu Viana and H. Moniz for providing the manually aligned reference corpus. This work is part of Sérgio Paulo's PhD Thesis, sponsored by a Portuguese Foundation for Science and Technology (FCT) scholarship. INESC-ID Lisboa had support from the POSI Program.

References

1. Beutnagel, M., Conkie, A., Schroeter, J., Stylianou, Y., Syrdal, A.: The AT&T Next-Gen TTS System. In: 137th Acoustical Society of America Meeting, Berlin, Germany (1999)
2. Black, A.: CHATR, Version 0.8, a generic speech synthesizer. System documentation, ATR-Interpreting Telecommunications Laboratories, Kyoto, Japan (1996)
3. Sakoe, H., Chiba, S.: Dynamic programming algorithm optimization for spoken word recognition. IEEE Trans. on ASSP 26(1):43–49 (1978)
4. Paulo, S., Oliveira, L.: Multilevel Annotation of Speech Signals Using Weighted Finite State Transducers. In: Proceedings of the IEEE 2002 Workshop on Speech Synthesis, Santa Monica, California (2002)
5. Caseiro, D., Meinedo, H., Serralheiro, A., Trancoso, I., Neto, J.: Spoken Book alignment using WFSTs. In: HLT 2002 Human Language Technology Conference, San Diego, California (2002)
6. Malfrère, F., Dutoit, T.: High-Quality Speech Synthesis for Phonetic Speech Segmentation. In: Proceedings of Eurospeech '97, Rhodes, Greece (1997)
7. Campbell, N.: Autolabelling Japanese TOBI. In: Proceedings of ICSLP '96, Philadelphia, USA (1996)
8. Black, A., Taylor, P., Caley, R.: The Festival Speech Synthesis System. System documentation, Edition 1.4, for Festival Version 1.4.0 (1999)
9. Taylor, P., Caley, R., Black, A., King, S.: Edinburgh Speech Tools Library. System documentation, Edition 1.2 (1999)
10. ESPS Programs Version 5.3. Entropic Research Laboratories Inc. (1998)

Evaluation of a Segmental Durations Model for TTS

João Paulo Teixeira and Diamantino Freitas
Polytechnic Institute of Bragança and Faculty of Engineering of the University of Porto, Portugal
[email protected], [email protected]

Abstract: In this paper we present a condensed description of a segmental durations model for European Portuguese TTS purposes and concentrate on its evaluation. The model is based on artificial neural networks. The quality of the model was evaluated by comparison with read speech. The standard deviation reached on the test set is 19.5 ms and the linear correlation coefficient is 0.84. In a perceptual evaluation, the model scored 4.12 against 4.30 for natural human read speech on a scale of 5.

1 Introduction

The segmental durations model presented here is part of a global prosodic model for a European Portuguese TTS system, which is under development in the authors' institutions. It is based on artificial neural networks that process linguistic information relative to the context of each phoneme and output the predicted duration of each segment. A number of duration models can be found in the literature for other languages, mostly for American and British English; the most prominent ones are mentioned in the following.

Campbell introduced the concept of Z-score [1] to distribute the duration estimated, by a neural network, for a syllable, considering that the syllable would be the most stable unit for the prediction of duration. He measured an inter-speaker linear correlation coefficient (r) of r=0.92 taking the syllable as the unit, and of only r=0.76 for segments, and reported r=0.93 for the syllables in his model. This concept is not, however, generally accepted. Other authors, like Van Santen [2], use the phoneme as the segmental unit in order to predict durations in a Sum-of-Products model; the author reported r=0.93 on his database. Barbosa and Bailly [3] employed the concept of Inter-Perceptual Center Groups (IPCG) as the stable unit, applying a neural network to predict its duration and subsequently the Z-score to determine the duration of each phoneme inside the IPCG. This model can deal with speech rate. They reported a standard deviation of σ=43 ms for French and, later, Barbosa reported σ=36 ms for Brazilian Portuguese [4]. Other relevant models are the Klatt model [5], based on a Sum-of-Products; the rule-based algorithm for French presented by Zellner [6] for two different speech rates, obtaining r=0.85 and arguing that this value corresponds to the typical inter-speaker correlation; the look-up-table-based model for Galician [7], with an rmse (root-mean-squared error) of 19.6 ms on the training data; the neural


network-based models for Spanish [8] and for Arabic [9], achieving r=0.87; and the CART-based model for Korean [10], with r=0.78. The final model we consider in this introduction was developed by Mixdorff as an integrated model for durations and F0 for German [11], achieving r=0.80 for the durations.

Existing duration models can be classified as statistical, mathematical or rule-based models. Besides the present one, examples of statistical models are [1,3,4,8–11], although [1] and [3] use the Z-score concept. These types of models became interesting with the availability of large databases. Examples of mathematical models are [2] and [5]. Rule-based models are [6] and [7].

The basic idea behind our approach comes from the fact that the duration of a segment depends, in a complex manner, not only on a set of contextual features derived from both the speech signal and the underlying linguistic structure, but also on random causes. We therefore try to take into consideration most of the known relevant features of different kinds that are candidates to influence the duration value, and try to determine the complex dependency function in a robust and efficient statistical manner that fits the selected database. The database is known in advance not to contain all possible combinations of features; additionally, the considered set of features is not exhaustive. Inter-speaker and intra-speaker variability is well known and should be considered when analyzing the results. What can be expected from such a model is therefore an acceptable timing for the sequence of phonemes, and not exactly the same timing imposed by the speaker; this can only be evaluated perceptually.

The data used for training and testing the model was extracted from the database described in [12]. This database consists of tagged speech tracks of a set of texts extracted from newspapers, read by a professional male radio broadcast speaker at an average speech rate of 12.2 phonemes/second. The part of the data used in the present work covers a total of 101 paragraphs containing a few hundred phrases, essentially of the declarative and interrogative types, ranging in length from one word to more than one hundred, for a total of 18,700 segments in 21 minutes of speech. Phonemes were selected as the basic segment, allowing the smallest granularity of modeling. Section 2 describes the model and Sect. 3 describes the evaluation.
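As an aside on Campbell's Z-score approach mentioned above: the network predicts a syllable duration, and the segments inside the syllable then share a common z-score with respect to their own (typically log-domain) duration statistics. A minimal sketch of that idea, with invented per-phone statistics and a simple bisection search for the shared z, could look as follows; this is our illustration of the concept, not code from [1].

```python
import math

# Hypothetical per-phone statistics of log-duration in seconds: (mean, std).
PHONE_STATS = {"p": (-3.6, 0.3), "a": (-2.5, 0.4), "t": (-3.4, 0.3), "u": (-2.8, 0.4)}

def distribute_syllable_duration(phones, syllable_dur, lo=-4.0, hi=4.0, iters=40):
    """Find the common z such that exp(mu_i + z * sigma_i) sums to the
    predicted syllable duration, then return per-segment durations."""
    def total(z):
        return sum(math.exp(mu + z * sd) for mu, sd in (PHONE_STATS[p] for p in phones))
    # total(z) grows monotonically with z, so bisection converges.
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if total(mid) < syllable_dur:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2.0
    return [math.exp(mu + z * sd) for mu, sd in (PHONE_STATS[p] for p in phones)]

print(distribute_syllable_duration(["p", "a"], 0.12))  # two durations summing to ~0.12 s
```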

2 Description of the Model

2.1 Duration Features

A large number of features was considered as candidates at the beginning of the work. One by one, they were studied and taken out in order to evaluate their relative importance. In selected cases, a set of a few features was considered and taken out jointly to check for consistency. In general, the result differs from that obtained by considering the features in isolation, because the features interact nonlinearly in a significant manner. After several experiments, considering different sets of features and their correlation with segment duration, one set was finally established as giving the best performance of the neural network approximation. The coding of the feature values is also an important issue, so some features were coded


in varying ways, in order to find the best trend and solution. The final set of features of the model and their codings is listed in order of decreasing importance:

a. Identity of segment – one of the 44 phoneme segments considered in the inventory of the database (Table 3);
b. Position relative to the tonic syllable in the so-called accent group – coded in 5 levels according to its correlation with durations;
c. Contextual segments identities – previous (-1) and next three (+1, +2, +3) segments – signaling some significant specific phones in the referred position and silences (20 phones in position -1; 12 phones in position +1; 4 phones in position +2; 2 phones in position +3);
d. Type of vowel length in the syllable – coded in 5 levels according to its correlation with durations;
e. Length of the accent group – number of syllables and phonemes;
f. Relative position of the accent group in the sentence – first; other; last;
g. Suppression or non-suppression of the last vowel;
h. Type of syllable – coded in 9 levels according to the correlation with durations;
i. Type of previous syllable – same coding as h;
j. Type of vowel in previous syllable – same coding as d;
k. Type of vowel in next syllable – same coding as d.

Features b, e and f are linked with the so-called accent groups, which we consider as groups of words with more than two syllables, aggregating neighboring particles. These groups work like prosodic words, having only one tonic syllable, although they are not exactly prosodic words under the multiple definitions found in the literature. In feature d we consider 5 types of vowels according to their average duration: long {a, E, e, O, o}; medium {6, i}; short {u, @}; diphthong; and nasal. Feature g codes the eventual suppression of the last vowel in the word, as can be found in [12], because this event usually lengthens the remaining consonant, as in the word sete (read {sEt} in SAMPA code). The types of syllable mentioned in features h and i are: V, C, CC (the latter two resulting from a suppressed vowel), VC, CV, VCC, CVC, CCV, CCVC. During the above-described process of selecting the features to be used, a qualitative measure of their relative importance emerged. Three groups of features can be distinguished according to relevance. The first is feature a, clearly the most important one. The second group in relevance is composed of features b, c, d, e, f and g. The third group, with features that alone are not very important but together assume some relevance, is formed by features h, i, j and k.

2.2 Neural Network

The model consists of a fully connected feed-forward neural network. The output is a single neuron that codes the desired duration as a value between 0 and 1; this coding is linear over the range 0 to 250 ms. The input neurons receive the set of coded features. Similar levels of performance (r=0.833 to 0.839) are achieved with different network architectures (2-4-1, 4-2-1, 6-1, 10-1), activation functions (log-sigmoid, hyperbolic tangent and linear) and training algorithms (Levenberg-Marquardt [13] and Resilient Back-propagation [14]). If the number of weights of the net is not fewer than the number of training cases and training is excessive, over-fitting may occur. In order to avoid this problem, two sets of data were used: one for training, with 14,900 segments, and another for testing, with 3,000 segments. The test vectors were used to stop training early if further training on the training set would hurt generalization to the test set. The cost function used for training was the mean squared error between output and target values.
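The early-stopping scheme just described can be sketched generically as follows; the net object and its train_epoch/mse/get_weights/set_weights methods are hypothetical placeholders for whatever training framework is used, and the patience parameter is our addition.

```python
def train_with_early_stopping(net, train_set, test_set, max_epochs=500, patience=20):
    """Stop training when further passes over the training set no longer
    improve the error on the held-out set; restore the best weights."""
    best_err = float("inf")
    best_weights = net.get_weights()
    bad_epochs = 0
    for epoch in range(max_epochs):
        net.train_epoch(train_set)   # one pass minimizing MSE on training data
        err = net.mse(test_set)      # generalization estimate on held-out data
        if err < best_err:
            best_err, best_weights, bad_epochs = err, net.get_weights(), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:   # no improvement for `patience` epochs
                break
    net.set_weights(best_weights)        # keep the best-generalizing weights
    return best_err
```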

3 Model Evaluation

Two indicators were used to evaluate the performance of the model: the standard deviation (σ) of the error (e) (Eq. (1)) and the linear correlation coefficient (r) (Eq. (3)). Considering the error as the difference between the target and predicted duration values of the segments, the standard deviation of the error is given by:

$$\sigma = \sqrt{\frac{\sum_i x_i^2}{N}}, \qquad x_i = e_i - \bar{e}, \qquad e_i = d_{i,\mathrm{original}} - d_{i,\mathrm{predicted}} \tag{1}$$

where x_i is the difference between the error of each segment and the mean error, and the error e_i is the difference between the original and predicted duration of each segment. When the mean error is null, as happens in this case, σ is equal to the rmse, given by Eq. (2):

$$\mathrm{rmse} = \sqrt{\frac{\sum_i e_i^2}{N}} \tag{2}$$

The linear correlation coefficient (r) was the second indicator selected and is given by Eq. (3), where V_{A,B} is the covariance between the vectors A = [a_1 a_2 … a_N] and B = [b_1 b_2 … b_N], A and B being the predicted and target duration vectors:

$$r_{A,B} = \frac{V_{A,B}}{\sigma_A\,\sigma_B}, \qquad V_{A,B} = \frac{\sum_i (a_i - \bar{a})(b_i - \bar{b})}{N} \tag{3}$$
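For concreteness, the three indicators can be computed directly from two duration vectors; the following lines are a plain transcription of Eqs. (1)–(3), not the authors' code.

```python
import numpy as np

def duration_metrics(d_original, d_predicted):
    """Error standard deviation (Eq. 1), rmse (Eq. 2) and linear
    correlation coefficient (Eq. 3) between target and predicted durations."""
    b = np.asarray(d_original, dtype=float)   # targets (vector B)
    a = np.asarray(d_predicted, dtype=float)  # predictions (vector A)
    e = b - a                                 # per-segment error
    sigma = np.sqrt(np.mean((e - e.mean()) ** 2))    # Eq. (1)
    rmse = np.sqrt(np.mean(e ** 2))                  # Eq. (2); equals sigma when mean error is null
    v_ab = np.mean((a - a.mean()) * (b - b.mean()))  # covariance V_{A,B}
    r = v_ab / (a.std() * b.std())                   # Eq. (3)
    return sigma, rmse, r
```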


The performance on the test and training sets, considering all types of phonemes, is given in Table 1.

Table 1. General best performance

Set       r      σ (ms)
Test      0.839  19.46
Training  0.834  19.85

Table 2. Performance of the model (r and σ), and average duration for each type of segment

Vowel   r     σ (ms)  Aver. (ms)
a       0.63  26.8    110
6       0.65  21.1     68
E       0.62  23.1     97
e       0.71  28.2     95
@       0.63  29.5     53
i       0.58  23       69
O       0.61  25.8    106
o       0.63  26.4     97
u       0.56  24.1     57
j       0.59  21.2     49
w       0.68  20       44
j~      0.28  18.6     64
w~      0.53  25.2     53
6~      0.69  24.9     75
e~      0.65  23.6    107
i~      0.74  27.9    109
o~      0.69  25.9     98
u~      0.79  26.9     86
Aver.   0.63  23.8

Cons.   r     σ (ms)  Aver. (ms)
p       0.25   9.2     20
!p      0.39  17.8     64
t       0.76  12.8     29
!t      0.59  16.5     48
k       0.41  14.4     37
!k      0.27  16.6     59
b       0.79  11.2     17
!b      0.23  15.1     43
d       0.76  10.9     20
!d      0.2   17.2     41
g       0.73   8.9     20
!g      0.26  12.8     44
m       0.31  18.6     62
n       0.33  17.9     54
J       0.3   16.7     68
l       0.23  19.4     52
l*      0.73  20.9     68
L       0.68  15.3     56
r       0.63  12.4     32
R       0.38  19       73
v       0.45  19.7     65
f       0.56  22.7     93
z       0.37  16.6     70
s       0.59  24.7    103
S       0.68  24.5     89
Z       0.54  21.4     78
Aver.   0.50  16.3

Phonemes are presented in SAMPA code; l* is a velar l; ! represents the occlusive part of stop consonants.

In the left part of Table 2, the vowels have, as a weighted average, r=0.63 and σ=24 ms. In the right part of the table, r=0.50 and σ=16 ms are the weighted average values for the consonants. The average duration of each phone is very well fitted by the neural network.


Figure 1 plots the original versus the predicted durations in the test set for one simulation with r=0.839. There are no major errors; errors are quite low for short segments and naturally higher for longer ones.

Fig. 1. Best linear fit between target (T) and predicted (A) durations for one simulation in the test set with r=0.839 (best linear fit: A = 0.68 T + 16.9; the A = T line is also shown)

3.1 Perceptual Evaluation

One last evaluation of the model presented in this paper is the perceptual test. Five paragraphs from the test set were used for this purpose. Three realizations of each paragraph were presented to 19 subjects (8 experts and 11 non-experts) for evaluation on a scale from 0 to 5. One realization was natural speech (original); another was time-warped natural speech with the durations predicted by the model (model); and the last realization, also time-warped speech, used the average duration value for each phone (average). Time-warped modifications were done with a TD-PSOLA algorithm.

Table 3. Scores of the model for the paragraphs presented to the listeners

Paragraph  N. of seg.  σ (ms)  r
1           36         19.0    0.97
2          164         18.9    0.89
3          177         22.6    0.94
4          209         19.0    0.91
5          204         19.8    0.94


The subjects did not know which stimulus corresponded to each realization, and they could listen as many times as they wanted. Table 3 presents, for each paragraph, its number of segments, plus the σ and r values for the predicted durations. In all cases the scores of expert and non-expert subjects were very similar, so they were merged.

Fig. 2. Average score of the perceptual test by subject (left side) and by paragraph (right side), for the original, model and average realizations

Figure 2 (left side) shows the average evaluation by subject for the original, the model-modified, and the fixed average duration realizations. For most of the listeners the model is very close to the original, and in four cases the model is even preferred. Figure 2 (right side) presents the average evaluation by paragraph. Again, the model is very close to the original, and in paragraph 3 it is even preferred. Finally, Fig. 3 characterizes the subjects' opinions, representing for each of the three sets of realizations the minimum, the ensemble of the lower quartile, median and upper quartile in the notched box, the maximum, the mean (thick lines) and the outliers. The original utterances achieved a mean score of 4.30, the ones with durations imposed by the model achieved 4.12, and the ones with durations imposed with the average value for each phoneme achieved 3.53. One-way ANOVA gives p

g -> _g / NULL ___ NULL

The compilation of the rules may result in a very large number of FSTs that may be composed in order to build a single grapheme-to-phone transducer. Alternatively, to avoid the excessive size of this single transducer, one can selectively compose the FSTs in order to obtain a smaller set that can be later composed with the grapheme FST at runtime to obtain the phone FST.
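As an illustration of this kind of rule compilation (and not the authors' actual toolchain), the pynini library, a Python wrapper around OpenFst that implements the Mohri–Sproat rewrite-rule compiler cited in [8], can build and compose context-dependent rule FSTs. The alphabet and the three rules below are toy examples of ours.

```python
import pynini

# Toy alphabet; a real system would use the full grapheme/phone symbol sets.
sigma = pynini.union(*"aeioucsztklmnrv")
sigma_star = sigma.closure()

vowel = pynini.union(*"aeiou")
front = pynini.union("e", "i")

# Context-dependent rewrite rules ("a -> b / left ___ right"):
c_soft = pynini.cdrewrite(pynini.cross("c", "s"), "", front, sigma_star)    # c -> s / ___ {e,i}
s_voice = pynini.cdrewrite(pynini.cross("s", "z"), vowel, vowel, sigma_star)  # s -> z / V ___ V
c_hard = pynini.cdrewrite(pynini.cross("c", "k"), "", "", sigma_star)        # c -> k elsewhere

# Compose the rule FSTs into a single (toy) grapheme-to-phone transducer.
g2p = (c_soft @ s_voice @ c_hard).optimize()

def transcribe(word: str) -> str:
    return pynini.shortestpath(word @ g2p).project("output").string()

print(transcribe("cesta"))  # -> "sesta" under these toy rules
print(transcribe("casa"))   # -> "kaza"
```

Note how the composition order encodes rule ordering: a rule placed later in the cascade sees the output of the earlier rules, exactly the property exploited when grouping rule FSTs for offline composition.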

4 The SAMPA Phonetic Alphabet for Both Languages

The SAMPA phonetic alphabet for EP (http://www.l2f.inesc-id.pt/~imt/sampa.html) was defined in the framework of the SAM-A European project and includes 38 phonetic symbols. Table 1 lists the additional symbols that had to be defined for Mirandese, together with some examples. They cover two nasal vowels, three non-strident fricatives corresponding to b, d, g in intervocalic position or after r, and two retroflex fricatives.



Table 1. Additional SAMPA symbols for Mirandese

SAMPA  Orthography  Transcription
@~     centelha     s@~t"ejL6
E~     benga        b"E~g6
B      chuba        tS"uB6
D      roda         R"OD6
G      pega         p"EG6
s      sol          s"Ol~
z      rosa         R"Oz6

5 Transducer Approach for European Portuguese

The transducer approach for EP involved a large number of rules: 27 for the stress transducer, 92 for the prefix-lexicon transducer, and 340 for the gr2ph transducer. The most problematic was the latter. We started by composing each of the other phases into a single FST. gr2ph was first converted to one FST per grapheme. Some graphemes, such as e, lead to large transducers, while others lead to very small ones. Due to the way we specified the rules, the order of composition of these FSTs was irrelevant; we thus had much flexibility in grouping them and managed to obtain 8 transducers with an average size of 410k. Finally, introduce-phones and remove-graphemes were composed with other FSTs and we obtained the final set of 10 FSTs. At runtime, we can either compose the grapheme FST in sequence with each FST, removing dead-end paths at each step, or perform a lazy simultaneous composition of all FSTs. This last method is slightly faster than the DIXI system.

In order to assess the performance of the FST-based approach, we used a pronunciation lexicon built on the PF ("Português Fundamental") corpus. The lexicon contains around 26,000 forms; 25% of the corpus was randomly selected for evaluation and the remaining portion was used for training and debugging. As a reference, we ran the same evaluation set through the DIXI system, obtaining an error rate of 3.25% at the word level and 0.50% at the segmental level. The first test of the FST-based approach was done without the exception lexicon. The FST almost reached the error rate of the DIXI system it is emulating, both at the word level (3.56%) and at the segmental level (0.54%). When we integrate the exception lexicon used in DIXI, the performance is exactly the same as for DIXI. We plan to replace some rules that apply to just a few words with lexicon entries, thus hopefully achieving a better balance between the size of the lexicon and the number of rules.

6 Transducer Approach for Mirandese

The porting of the FST-based approach from EP to Mirandese involved changing the stress and gr2ph transducers. The stress rules showed only small differences


compared to the ones for EP (e.g., the stress of words ending in ç, n, and ie). The gr2ph transducer was significantly smaller than the one developed for EP (around 100 rules), reflecting the much closer grapheme-phone relationship. The hardest step in the porting effort was the definition of a development corpus for Mirandese. Whereas for EP the choice of the reference pronunciation (the one spoken in the Lisbon area and most often observed in the media) was fairly easy, for Mirandese it was a very hard task, given the differences between the pronunciations observed in the different villages of the region. This called for a thorough review of the lexicon and checking with native speakers. For development, we used a small lexicon of about 300 words extracted from the examples in [1]. For testing, we used a manually transcribed lexicon of around 1,100 words, built from a corpus of oral interviews conducted by CLUL in the framework of the ALEPG project (Atlas Linguístico-Etnográfico de Portugal e da Galiza). As a starting point, we selected the interviews collected in the village of Duas Igrejas, which was also the object of the pioneering studies of Mirandese by José Leite de Vasconcelos [9]. Our first tests were done without an exceptions lexicon. On our very small development set, we obtained 11 errors (a 3.68% error rate at the word level), all of which are exceptions (foreign words, function words, etc.). For the test set, a similar error rate was obtained (3.09%); roughly half of these errors will have to be treated as exceptions, and half correspond to stress errors. For more details concerning the differences between the two rule sets, and a discussion of the types of error, see [10].

7 FST-Based Concatenative Synthesis

This section describes ongoing work toward the development of other modules of a text-to-speech (TTS) system using FSTs. In particular, it covers the waveform generation module, which is based on the concatenation of diphones. A diphone is a recorded speech segment that starts at the steady phase of a first phone (generally close to the mid part of the phone) and ends at the steady phase of the second one. By concatenating diphones, one can capture all the events that occur in the phone transitions, which are otherwise difficult to model. Our FST-based system is in fact based on the concatenation of triphones, which builds on this widely used diphone concatenation principle. A triphone is a phone that occurs in a particular left and right context. For example, the triphone a-b-c is the version of b that has a on the left and c on the right. In order to synthesize a-b-c, we concatenate the diphones a—b and b—c and then remove the portions corresponding to phones a and c. Our first step in the development of this type of system for EP was the construction of a diphone database. A common approach is to generate a set of nonsense words (logatomes), containing a center diphone as well as surrounding carrier phones. After generating the list of prompts, they were recorded in a soundproof room, with a head-mounted microphone to keep the recording


conditions reasonably constant among sessions. We also tried to avoid variations in the speaker's rhythm and intonation, in order to reduce concatenation problems. The following step was the phonetic alignment of the recorded prompts, which was done manually. Rather than marking the phone boundaries, we need to select the phone mid parts. For each triphone a-b-c, we tried to minimize discontinuities in both diphones a—b and b—c by performing a local search for the best concatenation point in the mid parts of the two samples of b. We used the Euclidean distance between Line Spectral Frequencies (LSF), because of their relationship to the formant frequencies and their bandwidths. By avoiding discontinuities in the formants we solve some of the concatenation problems, but not all of them. Since the signal energy may differ at the chosen concatenation points, the last step is to scale the speech signals at the diphone boundaries. The scale factor is the ratio between the energy of the last pitch period of the first diphone and the energy of the first pitch period of the second diphone; it approaches one as we approach the phone boundary, to avoid changing the energy of other phones. We were not very concerned with discontinuities of the signal's fundamental frequency, because the speaker kept it fairly constant during the recording procedure.

Using the triphone database, speech synthesis can be performed by first converting graphemes into phones, then phones into triphones, and finally concatenating the sound waves corresponding to the triphones. This process can be represented as the transducer composition cascade W ◦ G2P ◦ Tr ◦ DB, where W is the sentence, G2P is the grapheme-to-phone transducer, Tr is the phone-to-triphone transducer, and DB is a transducer that maps triphones into sequences of samples. The phone-to-triphone transducer Tr is constructed as the composition of two bigram transducers, Tr = Bdi ◦ Bph. The bigram transducers map their input symbols into pairs of symbols; for example, given a sequence a, b they produce (#, a), (a, b), (b, #). A bigram transducer can be built by creating a state for each possible input symbol and, for each symbol pair (a, b), an edge linking state a to state b with input b and output (a, b). A small sketch of this mapping closes this section.

This prototype system, which for the time being is completely devoid of prosody modules, was only built for EP. However, the system can be used with the Mirandese letter-to-sound transducer composed with a phone mapping transducer, in order to produce an approximation of the acoustic realization of an utterance in Mirandese as spoken by an EP speaker. We expect to have funding in the near future to record a native Mirandese speaker and process the corresponding database.
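The behaviour of the bigram transducers, and of their composition into the phone-to-triphone mapping, can be mimicked in plain code as follows; this reproduces the input/output relation described above rather than the FST construction itself.

```python
def bigram_pairs(symbols, boundary="#"):
    """Input/output relation of a bigram transducer: map a symbol sequence
    to its sequence of adjacent pairs, with boundary padding,
    e.g. ['a','b'] -> [('#','a'), ('a','b'), ('b','#')]."""
    padded = [boundary] + list(symbols) + [boundary]
    return list(zip(padded, padded[1:]))

def phones_to_triphones(phones):
    """Phone sequence -> triphone names, mimicking the effect of the
    composed transducer Tr: each phone b with left context a and right
    context c becomes the triphone a-b-c."""
    pairs = bigram_pairs(phones)
    # pair i = (left, phone); pair i+1 = (phone, right)
    return ["%s-%s-%s" % (l, p, r) for (l, p), (_, r) in zip(pairs, pairs[1:])]

print(phones_to_triphones(["a", "b", "c"]))  # ['#-a-b', 'a-b-c', 'b-c-#']
```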

8 Concluding Remarks

This paper described an FST-based approach to letter-to-sound conversion that was first developed for European Portuguese and later ported to the other official language in Portugal, Mirandese. The hardest part of this task turned out to


be the establishment of a reference pronunciation lexicon that could be used as the development corpus, given the observed differences in pronunciation between the inhabitants of the small villages in that region. The use of finite state transducers allows a very flexible and modular framework for deriving new rule sets and testing the consistency of orthographic conventions. Based on this experience, we think that letter-to-sound systems could be useful tools for researchers involved in the establishment of orthographic conventions for lesser-spoken languages. Moreover, such tools could be helpful in the design of such conventions for other partner languages in the CPLP community.

Acknowledgments. We gratefully acknowledge the help of António Alves, Matilde Miguel, and Domingos Raposo.

References
1. M. Barros-Ferreira and D. Raposo, editors. Convenção Ortográfica da Língua Mirandesa. Câmara Municipal de Miranda do Douro – Centro de Linguística da Universidade de Lisboa, 1999.
2. I. Trancoso, M. Viana, F. Silva, G. Marques, and L. Oliveira. Rule-based vs. neural network based approaches to letter-to-phone conversion for Portuguese common and proper names. In Proc. ICSLP '94, Yokohama, Japan, September 1994.
3. L. Oliveira, M.C. Viana, A.I. Mata, and I. Trancoso. Progress report of project DIXI+: A Portuguese text-to-speech synthesizer for alternative and augmentative communication. Technical report, FCT, January 2001.
4. D. Caseiro, I. Trancoso, L. Oliveira, and C. Viana. Grapheme-to-phone using finite state transducers. In Proc. 2002 IEEE Workshop on Speech Synthesis, Santa Monica, CA, USA, September 2002.
5. L. Oliveira, M. Viana, and I. Trancoso. A rule-based text-to-speech system for Portuguese. In Proc. ICASSP '92, San Francisco, USA, March 1992.
6. K. Koskenniemi. Two-Level Morphology: A General Computational Model for Word-Form Recognition and Production. PhD thesis, University of Helsinki, 1983.
7. E.L. Antworth. PC-KIMMO: A two-level processor for morphological analysis. Technical report, Occasional Publications in Academic Computing No. 16, Dallas, TX: Summer Institute of Linguistics, 1990.
8. M. Mohri and R. Sproat. An efficient compiler for weighted rewrite rules. In 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, USA, 1996.
9. J. Vasconcellos. Estudos de Philologia Mirandesa. Imprensa Nacional, Lisboa, 1900.
10. D. Caseiro, I. Trancoso, C. Viana, and M. Barros. A comparative description of GtoP modules for Portuguese and Mirandese using finite state transducers. In Proc. ICPhS 2003, Barcelona, Spain, August 2003.

A Methodology to Analyze Homographs for a Brazilian Portuguese TTS System

Filipe Barbosa (1), Lilian Ferrari (2), and Fernando Gil Resende (1)

(1) Escola Politécnica, Universidade Federal do Rio de Janeiro, Brazil, {filipe,gil}@lps.ufrj.br
(2) Faculdade de Letras, Universidade Federal do Rio de Janeiro, Brazil, [email protected]

Abstract. In this work, a methodology to analyze words that are homographs and heterophones is proposed, to be applied in a Brazilian Portuguese text-to-speech system. The reasoning is based on the notion of grammatical construction. An algorithm structured on the presented methodology was implemented to solve the reading decision problem for the word sede, achieving a 95.0% accuracy rate when tested on the CETEN-Folha text database.

1 Introduction

Homographs are words which have the same spelling but different meanings. For the development of a text-to-speech (TTS) system, homographs which are heterophones are especially problematic, because whenever they occur, the algorithm that transcribes graphemes into phonemes has to decide between two possible readings. This paper provides a detailed analysis of the word sede as a case study for the problem of homographs which are heterophones: the phonetic forms [sedi] or [sEdi] can be realized, depending on the context. The proposed methodology relies on the notion of grammatical construction, which is being developed by researchers in cognitive linguistics. Based on the presented analysis, an algorithm to decide how the Brazilian Portuguese (BP) TTS system should read the word sede was implemented and tested on the CETEN-Folha text database [1], which contains 24 million words, with 2278 occurrences of sede. An accuracy rate of 95.0% was achieved. This article is organized as follows. In Sect. 2, some fundamental concepts of cognitive grammar are presented. Sections 3 and 4 deal with the hypotheses and the corresponding analysis, respectively. In Sect. 5, experimental results are shown and discussed. Section 6 presents our conclusions. For the sake of clarity, in this article, Portuguese words and phrases are printed in italics, immediately followed by the corresponding English translation in parentheses. The SAMPA phonetic alphabet [2] is used in this work.


2 Fundamental Concepts

The analysis developed here relies on the framework usually referred to as cognitive grammar [3,4,5]. The central notion of this framework is the idea that grammatical structure is inherently symbolic: it can be characterized as a structured inventory of conventional linguistic units, which provides the means for expressing ideas in linguistic form. For our purposes, a particularly interesting kind of unit is the constructional schema. Constructional schemas are symbolic units which are complex and schematic, that is to say, more abstract and specified in less detail. There are low-level schemas, such as "ANIMATE want THING", which can be instantiated by "I want chocolate", and higher-order schemas, such as "ANIMATE PROCESS THING", which represents a structure where the subject precedes the verb, which in turn comes before the direct object.

3 Hypothesis

The following hypotheses guided the analysis:

I. For the distinction between the nouns [sedi] and [sEdi], the relevant constructions are the noun phrases, prepositional phrases and verb phrases in which these nouns occur.
II. Given their difference in meaning, [sEdi] and [sedi] will also differ with regard to the low-level schemas that they instantiate.
III. Although each slot in these schemas can be filled by any element of the word class associated with it, only a limited number of words will productively occur.

4 Analysis

Since the nouns [sedi] and [sEdi] can take part in noun phrases, prepositional phrases or verb phrases, the analysis focused on the different types of constructional schemas that are relevant for the distinction between them. The examples given below are samples of the occurrences for each construction.

4.1 Nominal Constructions

The following analysis presents three kinds of nominal constructions, shown in Table 1, which have to be taken into account for the choice between [sedi] and [sEdi]. The adjective slot of the Right Adjective Modified Nominal Construction can be filled by words like principal ("main"), for [sEdi], and insaciável ("uncontrolled"), for [sedi]. For the Left Adjective Modified Nominal Construction, the occurrence of the adjective preceding the noun is more productive with [sEdi], which tends to co-occur with words such as futura ("future")


Table 1. Schemas for nominal constructions

Right Adjective Modified Nominal Construction
  Type:        Noun + Adjective
  Noun Phrase: [sedi] or [sEdi] + adjective

Left Adjective Modified Nominal Construction
  Type:        Adjective + Noun
  Noun Phrase: adjective + [sedi] or [sEdi]

Noun-PP construction for [sedi]
  Type:        Noun1 + Preposition + Noun2
  Noun Phrase: [sedi] + de + noun

Noun-PP construction for [sEdi]
  Type:        Noun1 + (Preposition + Article) + Noun2
  Noun Phrase: [sEdi] + (de, em + a, o = da, do) + noun

and nova ("new"). As for [sedi], the adjectives muita ("much"), pouca ("little") and bastante ("much") often appear. Finally, for the Noun Prepositional Phrase (Noun-PP) Construction, examples of the two forms, [sedi] and [sEdi], are sede de justiça ("thirst for justice") and sede da organização ("seat of the organization"), respectively.

4.2 Prepositional Constructions

Since the noun [sEdi] is semantically a locative, prepositional constructions headed by locative prepositions are important for the prediction of this form. Two types of prepositional constructions are shown in Table 2. Regarding the Locative Prepositional Construction schema, it is worth noting that the prepositions em and a are normally contracted with the article, giving the forms na and à.

Table 2. Schemas for prepositional and verbal constructions

Locative Prepositional Construction
  Type:                 Preposition + Determiner + Noun
  Prepositional Phrase: (em, a, para) + (a, uma, aquela) + [sEdi]

Complex Locative Prepositional Construction
  Type:                 Noun1 + (Preposition + Article) + Noun2
  Prepositional Phrase: noun + (de + a = da) + [sEdi] or [sedi]

Transitive Verbal Construction
  Type:                 Adverb + (Preposition + Article) + Noun2
  Prepositional Phrase: adverb + (de + "a" = da) + [sEdi]

Intransitive Verbal Construction
  Type:                 Verb + Preposition + Noun
  Verb Phrase:          verb + (com, de) + [sedi]


Examples of the Complex Locative Prepositional Construction are those that have the Noun1 slot filled by entrada ("entrance") or abertura ("opening"), for [sEdi], and hora ("time") or momento ("moment"), for [sedi].

4.3 Verbal Constructions

As for verbal constructions, described in Table 2, two main types can be found. For the Transitive Verbal Construction, the verbs inaugurar ("to inaugurate") and matar ("to kill") occur with [sEdi] and [sedi], respectively. The intransitive verbal constructions are especially productive for [sedi]; the verbs morrer ("to die") and estar ("to be") are the most frequent.
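To make the decision procedure concrete, a toy classifier in the spirit of these constructional cues might look as follows. The cue lists are distilled from the examples above and are far smaller than the lexicons the authors would need; the fallback to the majority class [sEdi] is our assumption, loosely justified by the corpus counts in Sect. 5. This is an illustration, not the authors' actual algorithm.

```python
# Illustrative cue lists distilled from the constructions above.
LOCATIVE_PREPS = {"na", "à", "para"}                 # favour [sEdi] (headquarters)
THIRST_QUANTIFIERS = {"muita", "pouca", "bastante"}  # favour [sedi] (thirst)
THIRST_VERBS = {"morrer", "morre", "estar", "está", "matar", "mata"}

def classify_sede(tokens, i):
    """Pick a reading for tokens[i] == 'sede' from a small context window."""
    prev1 = tokens[i - 1].lower() if i > 0 else ""
    prev2 = tokens[i - 2].lower() if i > 1 else ""
    nxt = tokens[i + 1].lower() if i + 1 < len(tokens) else ""
    if prev1 in THIRST_QUANTIFIERS or prev1 in THIRST_VERBS or prev2 in THIRST_VERBS:
        return "[sedi]"
    if prev1 in LOCATIVE_PREPS:
        return "[sEdi]"
    if nxt in {"da", "do"}:   # Noun-PP with contracted article -> seat
        return "[sEdi]"
    if nxt == "de":           # bare "de" PP -> thirst ("sede de justiça")
        return "[sedi]"
    return "[sEdi]"           # majority-class fallback (our assumption)

print(classify_sede("ele morre de sede".split(), 3))  # -> '[sedi]'
```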

5 Experimental Results

The algorithm developed to deal with the word sede was tested using the CETEN-Folha text database [1]. This database was extracted from Folha de São Paulo, a Brazilian newspaper, and contains around 24 million words, with 2278 occurrences of sede, divided into 298 occurrences of [sedi] and 1891 of [sEdi]. Accuracy rates for the forms [sedi] and [sEdi], as well as the overall statistics, are given in Table 3. With the proposed method, the total accuracy rate reaches 95.0%, while the individual accuracy rates for [sedi] and [sEdi] are 90.6% and 95.6%, respectively.

Table 3. Results for the word sede

Phonetic form    Occurrences  Accuracy
[sEdi]           1891         95.6%
[sedi]            297         90.6%
[sEdi] + [sedi]  2278         95.0%

6 Conclusions

In this paper, a methodology to deal with homographs in BP TTS systems was proposed. The basic idea relies on cognitive grammar. The word sede was used as a case study and the corresponding analysis was presented. The implemented algorithm was applied to the CETEN-Folha text database, yielding an accuracy rate of 95.0%. Using a similar framework to solve the problem for other heterophone homographs is the subject of ongoing research.

References
1. Corpus de Extractos de Textos Electrônicos NILC/Folha de São Paulo (CETEN-Folha). http://acdc.linguateca.pt/cetenfolha/
2. Speech Assessment Methods Phonetic Alphabet (SAMPA). http://www.phon.ucl.ac.uk/home/sampa


3. Goldberg, A.: Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press (1995)
4. Langacker, R.: Foundations of Cognitive Grammar, vol. 1: Theoretical Prerequisites. Stanford, California: Stanford University Press (1987)
5. Langacker, R.: Foundations of Cognitive Grammar, vol. 2: Descriptive Application. Stanford, California: Stanford University Press (1991)

Automatic Discovery of Brazilian Portuguese Letter to Phoneme Conversion Rules through Genetic Programming

Evandro Franzen (1) and Dante Augusto Couto Barone (2)

(1) UNISC – Universidade de Santa Cruz do Sul, Av. Independência, 2293 – Bairro Universitário, CEP 96815-900, Santa Cruz do Sul – RS, [email protected]
(2) UFRGS – Universidade Federal do Rio Grande do Sul, Instituto de Informática, Av. Bento Gonçalves, 9500 – Campus do Vale – Bloco IV, Bairro Agronomia, Porto Alegre – RS, Brasil, CEP 91501-970, Caixa Postal 15064, [email protected]

Abstract. Letter-to-phoneme conversion is a basic step in speech synthesis. Traditionally, this activity involves the implementation of rules that define the mapping of letters into sounds. This paper presents results of the application of an evolutionary computation technique (Genetic Programming) to Brazilian Portuguese synthesis, aiming to automatically discover programs that implement specific synthesis rules.

1 Introduction

Spoken language is the most used form of communication between humans, being simultaneously powerful and simple. The interaction between humans and machines remains a hard problem, and the application of Artificial Intelligence techniques to such tasks is not straightforward [5]. Automatic speech processing by computers mainly involves two different kinds of problems: speech recognition and speech synthesis [5]. The first aims to convert an acoustic signal, captured by a microphone or a telephone, into a set of intelligible words or phrases [8]. The second consists in the automatic generation of voice waveforms, commonly from a written or stored text [8]. One of the most common approaches to speech synthesis is the Text-To-Speech (TTS) technique, in which a text is converted into a set of phonemes that are gathered to produce synthetic "voice" signals [4]. In most of the world's spoken languages, a written text does not correspond directly to its pronunciation; thus, to describe the correct pronunciation, a set of symbolic representations becomes necessary. Each language possesses intrinsic characteristics, such as a different phonetic alphabet and set of possible phonemes and their combinations. Each language possesses a specific set of phonemes, which can be defined as "elementary" sounds, used as bricks to construct any sound found in speech in that language [8]. In many languages there is not an exact consistency


between phonemes and the corresponding letters (graphemes) that can produce them [11]. The present work is part of the Spoltech Project [2], which aims to create, develop and provide speech processing technologies (speech recognition and synthesis) for Brazilian Portuguese. As the synthesis procedure, we use concatenation of diphones; letter-to-phoneme conversion is done through the rules described in the LTS (letter-to-sound) module. One of the major goals of the present work is to provide the tool used by the Spoltech synthesizer (the Festival environment [12]) with additional advanced technologies, based on Genetic Programming, to accomplish the processes that compose speech synthesis.

2 Modeling the Problem Using Genetic Programming

The rules to convert letters into phonemes can be represented through computer programs. If they were implemented by human programmers, they would probably have this form: IF (current letter is x) THEN the phoneme is y ELSE... In accordance with [6], there are five major steps in preparing the utilization of genetic programming to solve a specific problem: i) determining the set of terminals; ii) determining the set of functions; iii) determining the fitness measure and the fitness cases; iv) determining the parameters and variables for controlling the run; and v) determining the method of designating a result and the criterion for stopping a run.

The activity of converting letters into phonemes can be summarized as the application of rules to words or letters to discover the corresponding phonemes. Thus, the initial step in determining one or more strategies to find solutions through Genetic Programming is to define whether the fitness cases will consist of a set of letters or of words. In our case, we use words to measure fitness. The set of fitness cases is defined as a word list, each word composed of a list of letters: ((p a t o) (t i p o)). The correct answers for each case are specified as lists of phonemes in the same way: ((p a t o) (ts i p o)). The definition of cases and respective answers in list form was chosen because of the ease with which the LISP language handles list processing; however, other data structures could be used without compromising the solution search process.

To evaluate the answer produced by each individual, the Genetic Programming system [3] calculates the raw fitness using the following rules (a minimal sketch follows this list):

• if the produced phoneme is equal to the expected one in the specific position, three points are credited to the solution;
• if the produced phoneme differs in the considered position but represents an expected phoneme in any other position of the word, one point is credited to the solution.

Standardized fitness must always indicate better solutions with lower values, tending to zero. In this problem we start with the largest expected fitness value, diminishing it as solutions evolve.
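A minimal sketch of this fitness computation, our reconstruction of the crediting rules written in Python rather than the LISP of the actual system (with individual as a hypothetical callable), could be:

```python
def raw_fitness(produced, expected):
    """Score one fitness case (word) with the crediting rules above:
    +3 for the expected phoneme in the right position, +1 when the
    produced phoneme occurs elsewhere in the expected word."""
    score = 0
    for pos, phone in enumerate(produced):
        if pos < len(expected) and phone == expected[pos]:
            score += 3
        elif phone in expected:
            score += 1
    return score

def standardized_fitness(cases, individual):
    """Lower is better (tends to zero): the maximum attainable score
    minus what the individual actually earns over all fitness cases."""
    best = sum(3 * len(expected) for _, expected in cases)
    got = sum(raw_fitness(individual(word), expected) for word, expected in cases)
    return best - got

# Fitness cases as (letters, phonemes), mirroring the list form in the text:
cases = [(["p", "a", "t", "o"], ["p", "a", "t", "o"]),
         (["t", "i", "p", "o"], ["ts", "i", "p", "o"])]
```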


3 Experimental Results

The set of rules to convert letters into phonemes in Portuguese is extremely large. Silva [11] describes a set of more than eighty rules covering the diverse contexts in which letters are used. The main rules considered in our Genetic Programming system are: i) the direct relation between letters and phones, watching for the special occurrence of the letters "t" and "d" before "i"; ii) the letter "c", when used before "e" or "i", is represented by the phoneme [s], and in the other cases by the phoneme [k]; iii) the letter "s" represents the phoneme [z] in two situations: when it occurs between two vowels or when it is used before voiced consonants ("d", "b", "m", "g", "n"); iv) the use of "ss" results in the production of only one phoneme [s]; v) the letter "z" corresponds to the phoneme [s] when it occurs after a vowel at the end of the word, while in the cases where it is followed by a vowel, the phoneme corresponds to the letter itself.
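For illustration, rules (i)–(v) can be hand-coded as a small context-sensitive function; this is the kind of target program the genetic search is expected to discover, not the evolved solution itself. The [dz] output for "d" before "i" and the reading of rule (v) as word-final devoicing are our assumptions.

```python
VOWELS = set("aeiou")
VOICED = set("dbmgn")  # voiced consonants listed in rule (iii)

def letter_to_phone(word, i):
    """Toy transcription of one letter following rules (i)-(v) above; a
    real system needs the full rule set (80+ rules) plus stress handling."""
    c = word[i]
    prev = word[i - 1] if i > 0 else ""
    nxt = word[i + 1] if i + 1 < len(word) else ""
    if c in "td" and nxt == "i":                     # (i) "t"/"d" before "i"
        return "ts" if c == "t" else "dz"            # [dz] is our assumption
    if c == "c":                                     # (ii)
        return "s" if nxt in ("e", "i") else "k"
    if c == "s":
        if prev == "s":                              # (iv) "ss" -> one [s]
            return ""
        if prev in VOWELS and (nxt in VOWELS or nxt in VOICED):
            return "z"                               # (iii)
        return "s"
    if c == "z":
        return "s" if nxt == "" else "z"             # (v) word-final "z"
    return c                                         # fallback: identity

print("".join(letter_to_phone("massa", i) for i in range(5)))  # -> "masa"
print("".join(letter_to_phone("casa", i) for i in range(4)))   # -> "kaza"
```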

Fig. 1. Fitness evolution across generations (curves: population fitness average, best generation fitness, best execution fitness, worst generation fitness)

The standardized fitness of the best individual was found at generation 60, corresponding to the value 30. Optimized solutions tend to lower values; a "perfect" solution corresponds to 0, since in our modeling the fitness measure is the difference between the maximum expected score for the pronunciation of a string of words (correct by definition) and the score obtained through the application of the rules described above. Figure 1 shows the fitness evolution for one of the cases tested in this work.

4 Final Conclusions

This article has presented a technique for the automatic discovery of programs and has shown the results of its application to the automatic derivation of letter-to-phoneme conversion rules for Brazilian Portuguese. The experiments


have demonstrated that it is possible to construct systems able to discover rules through a supervised learning technique using Genetic Programming. The research was developed in the context of the SPOLTECH project (an international cooperation between Brazil (UFRGS) and the USA (University of Colorado)), adding a specific tool for Portuguese phonetic rules based on a Genetic Programming approach. The technique's representation, built from common programming-language instructions, offers flexibility for approaching different problems in speech synthesis. One of the major difficulties found in the analysis of the results consists in properly describing the activity implemented by each solution; this follows directly from the increasing complexity of the solutions and from the number of instructions that constitute the solution individuals. Another important issue is the definition of a set of fitness cases that properly represents the rules to be discovered, without compromising other contexts in which some letters can occur.

References
1. Banzhaf, W., Nordin, P., Keller, R., Francone, F.D.: Genetic Programming – An Introduction. On the Automatic Evolution of Computer Programs and Its Applications. San Francisco: Morgan Kaufmann, 1998. 470 p.
2. Spoltech. Advancing Human Language Technology in Brazil and the United States Through Collaborative Research on Portuguese Spoken Language Systems. In: PROTEM-CC, 4., 2001, Rio de Janeiro. Projects Evaluation Workshop: international cooperation: proceedings. Brasília: CNPq, 2001. p. 118–142.
3. Soucek, B.; Iris Group: Dynamic, Genetic and Chaotic Programming. New York: John Wiley & Sons, 1992.
4. Dutoit, T.: An Introduction to Text-To-Speech Synthesis. Dordrecht: Kluwer Academic, 1996. 280 p.
5. Hausser, R.: Foundations of Computational Linguistics. Man-Machine Communication in Natural Language. Berlin: Springer-Verlag, 1999. 534 p.
6. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge: The MIT Press, 1992. 819 p.
7. Koza, J.R.: Genetic Programming II: Automatic Discovery of Reusable Programs. Cambridge: The MIT Press, 1998. 746 p.
8. Lemmetty, S.: Review of Speech Synthesis Technology. Available at: . Accessed 20 Dec. 2000.
9. Mitchell, M.: An Introduction to Genetic Algorithms. Cambridge: The MIT Press, 1992. 205 p.
10. Silva, T.C.: Fonética e Fonologia do Português, roteiro de estudos e guia de exercícios. São Paulo: Contexto, 1999. 254 p.
11. Silva, M.B.: Ensaios. Leitura, Ortografia e Fonologia. São Paulo: Ática, 1993. 110 p.
12. Festival. The Festival Speech System. System Documentation. Available at: . Accessed 6 Feb. 2003.

Experimental Phonetics Contributions to the Portuguese Articulatory Synthesizer Development*

António Teixeira, Lurdes Castro Moutinho, and Rosa Lídia Coimbra

Universidade de Aveiro, 3810-193 Aveiro, Portugal
[email protected]

Abstract. In this paper we present current work and results from two Experimental Phonetics projects motivated by our ongoing development of an articulatory synthesizer for Portuguese. Examples of analyses and results are presented regarding glottal source parameters and, from EMMA and acoustic analyses, tongue position. In our studies, contextual and regional variation are considered.

1 Introduction

Our articulatory synthesizer for Portuguese currently consists of an application that runs in the Windows environment and can synthesize, among others, nasal sounds, vowels and nasal consonants with acceptable quality [1]. It is, however, our intention to integrate other knowledge, obtained through the integrated and multidisciplinary contribution of researchers from different areas. The aim of this communication is to give an account of ongoing projects that will contribute to the improvement of the synthesizer.

2 Phonetics Applied to Speech Processing

The research accomplished prior to this project showed that the variation of the velum, and even of some other articulators, influences the production and perception of nasality. However, detailed production and acoustic studies do not exist. Information concerning the behavior of the glottal source during the production of these vowels is also necessary for the continuation of this work, as is information concerning regional variation.

EMMA Corpus. In a previous project, information concerning the position of the tongue, lips and velum during the production of words and phrases containing nasal sounds had already been collected, using a system of ElectroMagnetic Midsagittal Articulography (EMMA). This technique, however, is not viable for a wide number of speakers, nor does it supply information concerning the phonation process.

* Funded by FCT projects POSI/36427/PLP/2000 and PRAXIS P/PLP/11222/1998.


Fig. 1. Plot of the tongue dorsum sensor at 3 different points (10%, 50% and 90% of duration) of [6~], [a] and [6] pronunciations (top) and [u~] and [u] (bottom)

Analysis of this corpus has already contemplated the study of velum movement between stops and after nasal consonants [2] and is now addressing new questions. When trying to perform articulatory synthesis of Portuguese nasal vowels, we were faced with the lack of accurate information regarding the oral articulator and velum positions used in their production. The situation is worse for the vowels [6~], [e~] and [o~], each "corresponding" to two oral vowels. Figure 1 presents information regarding tongue position for the set [6~], [a] and [6] and for [u~]/[u]. For [6~], tongue configurations cover both oral [a] and [6], which is more noticeable in configurations near the beginning. Observing the ellipses, representing points at 1 standard deviation from the mean, [6~] assumes mostly an [6] configuration. At the end, the tongue also seems to use configurations somewhat different from [a] and [6], possibly due to coarticulation with the following segment. Configurations for [u~] are very similar to the ones used for [u], especially at the beginning of the nasal vowel. Our data and analysis method allow similar studies with the other nasal vowels, factoring in context and accent.

New Acoustic Corpus. With the objective of continuing this study, a new corpus was defined and organized so as to contemplate the different phonetic contexts [3]. The recordings already include several regional variants: Minho, Douro Litoral, Beiras (Litoral and Interior), Alentejo and Algarve. Speakers aged 35 and over were chosen; they were born and living in the selected areas, and did not have more than compulsory schooling. The recordings were always done locally, with the signal registered directly onto the hard disk. During the recordings,

Fig. 2. Open quotient for EP vowels: values as a function of vowel and nasality, in separate panels for male and female speakers. Gray is used for nasals

visual stimuli were used whenever possible; pictures led the informant to produce the intended words, thus avoiding reading. Two repetitions of the corpus were requested from each speaker. One area where information for developing the articulatory synthesizer is scarce is source-related parameters. Having recorded the EGG signal simultaneously with speech, our corpus is well suited to extracting such parameters. A detailed analysis was already performed using 6 male speakers from 3 regions; the results have been presented at a conference and are submitted [6]. In Fig. 2, the boxplot of the open quotient by vowel and nasality is presented. The figure shows no significant differences in average values or dispersion across the vowels, oral or nasal; this parameter proved to be idiolectal. To complement the EMMA corpus information, we are now starting the extraction and analysis of tract-related parameters from our new acoustic corpus. As part of a study of EP nasal vowel height [4], we are analyzing the first two formants at the very beginning of nasal vowels after stop consonants. Using F1 as a measure of vowel height, for the average of all regions, speakers and contexts, the height of [6~] is between [a] and [6], the height of [o~] is between [O] and [o], and the height of [e~] is between [E] and [e]. The other two, [u~] and [i~], have heights similar to the corresponding oral vowels. Looking at the F1 results for the different regions in Fig. 3, the overall tendency is not observed in some situations: for the Beira Litoral informants, [o~] is more like [o] regarding height; for speakers native to Beira Interior, [o~] is as high as [O] and [e~] as high as [E]; and for Minho speakers the raising of [6~] seems not to occur, [6~] being more like [a].
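One common way to estimate the open quotient from an EGG cycle, shown below, is a threshold method: the fraction of the period during which the normalized contact signal stays below a contact threshold is taken as the open phase. This is a generic sketch, threshold value included, and not necessarily the extraction procedure used in [6].

```python
import numpy as np

def open_quotient(egg_cycle, threshold=0.35):
    """Threshold-based open quotient estimate for one EGG cycle, with
    vocal-fold contact increasing upward: OQ is the fraction of the
    period during which the normalized signal stays below the contact
    threshold (glottis mostly open)."""
    x = np.asarray(egg_cycle, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # normalize to [0, 1]
    return float(np.mean(x < threshold))
```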

3 Multimedia Prosodic Atlas for the Romance Languages

This project is part of a research effort supervised by the Centre de Dialectologie, Université Stendhal, involving several European universities; its main goal is to study the prosodic configuration of the spoken linguistic varieties in the Romance dialectological space. The study focuses on vocalic segments, since it is considered that they are the ones that carry most of the relevant prosodic information, and also because this is the methodology used by all other European AMPER teams. The parameters analysed are the duration, pitch and intensity of vowels [5].

Fig. 3. First formant for EP oral and nasal vowels for 4 different regions (Alentejo, Beira Litoral, Beira Interior and Minho)

4 Conclusion

The results presented allowed us to fill some gaps in the information needed for the articulatory synthesis of EP vowels, especially the nasals. Information was obtained about the open quotient and F0 for the different nasal vowels, and about the oral tract configuration employed in the production of nasal vowels. At this moment, synthesis experiments may be done using the new data for European Portuguese. Another important result of these two projects is the creation of new corpora, including data available for the first time for EP, with great potential for further studies.

References
1. Teixeira, A., et al.: SAPWindows – Towards a versatile modular articulatory synthesizer. IEEE-SP Workshop on Speech Synthesis (2002)
2. Teixeira, A., Vaz, F.: European Portuguese nasal vowels: An EMMA study. Proc. Eurospeech (2001)
3. Moutinho, L., et al.: Contributos para o Estudo da Variação Contextual e Regional do Português Europeu. Encontro Comemorativo dos 25 Anos do CLUP (2002) 5–17
4. Teixeira, A., et al.: Production, Acoustic and Perceptual Studies on European Portuguese Nasal Vowels Height. ICPhS (2003) (accepted)
5. Contini, M., et al.: Un projet d'Atlas Multimédia Prosodique de l'Espace Roman. Speech Prosody (2002) 227–230
6. Teixeira, A.: Para a melhoria da síntese articulatória das vogais nasais do Português Europeu: Estudo da duração e de características relacionadas com a fonte glotal. 1º Cong. Int. de Fonética e Fonologia, Belo Horizonte (2002)

A Study on the Reliability of Two Discourse Segmentation Models
Eva Arim, Francisco Costa, and Tiago Freitas
ILTEC, Rua do Conde de Redondo 74, 5º, 1100-109 Lisboa, Portugal {earim,fcosta,taf}@iltec.pt

Abstract. This paper describes an experiment we conducted in order to test the reliability of two discourse segmentation models which have been widely used in computational linguistics. The main purpose of the test is to pick one of them for our future research, which aims to assess the role of prosody in structuring discourse in European Portuguese. We compared the models of Grosz and Sidner (1986) and Passonneau and Litman (1997) using spontaneous speech. The latter displayed a higher level of consensus among coders. We also observed that listening to the original speech influenced the level of agreement among coders.

1 Introduction

The present study describes one of the initial tasks of a project currently being developed with the purpose of investigating how certain prosodic features are used to mark the information structure of spoken discourse and which cues are most relevant for listeners to identify this structure. There have been no such studies concerning the Portuguese language so far. This project can thus contribute to a better understanding of the role of prosody in natural language, providing valuable information for computational linguistics. Additionally, it will enable us to compare Portuguese with other languages regarding macro-level prosody. Following the claims of several authors, we assume that there is a relationship between discourse structure and prosodic features. Crucially, our long-term goal is to explain how exactly that relationship holds in European Portuguese. There have been some studies showing that prosody is constrained by discourse structure in several respects, and this structure has been characterized in terms of paragraphs, discourse segments, topic units or theme shifts, all of which can be regarded as essentially the same type of discourse constituent. It has been stated that if we want to identify the role of prosody in the structuring of information, we must compare it with an independently obtained discourse structure, in order to minimize the risks of circularity [1–5]. Previous work on other languages has shown that there is no direct match between syntactic structure and prosodic constituency – see [6] and [7]. Instead, prosody seems to be constrained by semantic and pragmatic aspects. Therefore, we should not rely on syntax for that matter, which would otherwise be the most immediate choice.


In order to have some sort of information structure against which prosody can be confronted, some authors elicit instruction monologues, a method which yields speech with a discourse structure determined a priori [2–3; 5]. Others rely on discourse segmentations resulting from discourse analysis [8–17], whereas still others ask subjects to segment texts according to their idea of paragraph [4]. All these approaches thus assume that spoken discourse exhibits a structure somewhat similar to that of written texts, as concerns the grouping of sentences into larger units like paragraphs, for instance. We opted for the second method, which has the advantage of making it possible to study different speech styles. This would be impossible if we were to follow the instruction monologues approach, since it generates a very specific kind of data. The problem with using the discourse analysis approach is that a priori we do not know whether it will yield more than an individual's intuition of discourse structure. If we are to depend on a discourse segmentation method, we must ensure that we are employing one that is reproducible, because the more replicable a discourse segmentation model is, the stronger the evidence that discourse structure does exist. This paper reports an experiment we have conducted in order to compare two discourse segmentation models. We chose the models of Grosz and Sidner [9] and Passonneau and Litman [17]. These have been widely used and there is extensive research on them, which allows us to compare our results with those obtained in work done for other languages. Both models produce intention-based segmentations. The difference is that while the former generates a hierarchical structure, the latter generates only a linear kind of segmentation; it actually comes very close to asking subjects to segment texts based on an intuitive notion of paragraph, a method that has been used by some authors (e.g. [4]) to elicit discourse structure. Grosz and Sidner's model, on the other hand, produces not only a segmentation, but also a hierarchical organization among discourse units similar to that of chapters and subchapters (but the units involved are obviously much smaller). The experiment consisted of asking naïve coders to segment two texts following one of the two models and then measuring the level of consistency among annotators. In Examples 1 and 2 we present examples of segmentation produced with the two models, taken from two subjects' responses.

Example 1. Discourse segmentation produced with Grosz and Sidner's model (the I label precedes a segment's description; indentation denotes a segment's embeddedness)
I: Interview about the referendum
    I: The interviewer begins the interview
        I: The interviewer presents the interviewee
            Inês Serra Lopes, eh, directora do jornal independente,
        I: The interviewer asks the interviewee's opinion
            destes relatos que ouviste e agora das opiniões do Zé Manuel e do… e do Miguel Portas, há alguma coisa que… que te tenha chamado a atenção?
    I: The interviewee answers the question
        Não… O que me chama de facto mais a atenção, e eles já falaram sobre isso,
        I: The interviewee introduces the problem at hand
            é… o… problema da enorme distância entre a regionalização proposta e o envolvimento das pessoas nela.


Example 2. Examples of discourse segmentation produced with Passonneau and Litman's model (the I label precedes a segment's description)
I: Presentation of the newspaper's director
Inês Serra Lopes, eh, directora do jornal independente,
I: Contextualization and question
destes relatos que ouviste e agora das opiniões do Zé Manuel e do… e do Miguel Portas, há alguma coisa que… que te tenha chamado a atenção?
I: Answer
Não…
I: Tells what strikes her most
O que me chama de facto mais a atenção, e eles já falaram sobre isso, é… o… problema da enorme distância entre a regionalização proposta e o envolvimento das pessoas nela.

We will eventually choose one of these models for our future research on prosody. This choice will be based on a test we carried out with the purpose of evaluating inter-coder agreement. Several works have correlated discourse structure with prosodic variables. For instance, it appears that the relevant domain for F0 declination is the discourse segment [2]. In fact, some authors have discovered that segment-initial utterances correlate with changes in pitch range [8] and display higher average and maximum F0, whereas segment-final phrases present lower values for both maximum and average F0 [11]; low F0 is associated with listeners' perception of both sentence and paragraph boundaries [26]. Low-ending contours seem to convey finality, and high-ending ones convey continuation [3, 4]. Pause has also been identified as a marker of discourse organization, coinciding with the boundaries of discourse segments [2, 3, 4, 8, 11, 26] or narrative boundaries [25], and the final word in the final utterance of these units also tends to be lengthened [5].

2 Method

The data used had previously been collected for REDIP [18], a project that aims at collecting and studying the language of the Portuguese media, dealing mostly with radio and TV broadcasts. One of the reasons we are using this corpus is that it contains a large amount of spontaneous speech. The importance of using spontaneous speech in this kind of work has to do with the fact that spontaneous discourse can be prosodically different from prepared or read speech. One of the applications of this type of work is to make speech technology more natural sounding and more efficient in recognizing natural speech. For this test, we selected two excerpts from the corpus. Both were digitally recorded and total 192 seconds, containing 504 words. They consist of interviews from the radio, involving both male and female speakers. Using dialogues in this kind of work is a novelty, and it will allow comparison with other speech styles. One of our concerns in choosing the dialogue samples was to make sure that they contained speech turns long enough for coders to identify more than one discourse segment within each turn. That way we prevented our participants from placing discourse segment boundaries exclusively at turn boundaries, since our long-term interest is in the prosodic means of signaling discourse structure and not in the prosodic strategies used to signal turn taking. We asked sixteen naïve coders to annotate these two transcripts using the previously mentioned models. The participants were split into two different groups according to the model they were asked to work with. They all received an orthographic transcription of the selected texts, but for each model only four of them listened to the original recordings. Since we hypothesize that there is a relation between discourse structure and prosody, we expected the listening and non-listening groups to display different behavior. Each participant received a set of instructions, basically the explanatory texts of [15] and [17] translated with slight modifications. The most significant change we introduced was that people were not restricted to placing segment boundaries at previously determined prosodic phrase boundaries; they could place them between any two words in the text instead. We believe the results obtained this way are more independent of prosody.

3 Results

In order to measure inter-coder agreement, we employed the kappa coefficient (κ), which has recently been considered the most adequate statistic for that purpose (see [19] and [20]). Kappa values under 0.6 indicate that there is no statistical correlation among coders, whereas results over 0.7 point to replicable coder agreement. Although percent agreement might seem a valid statistic for this purpose, it actually overestimates inter-subject agreement, because it does not take into account the fact that of all the possible boundary sites only a few will be considered a discourse boundary (discourse segments identified by our subjects averaged 26 words in length, and a priori one expects a large number of possible boundary sites where no subject will place a segment boundary). Because of this, percent agreement will report a high consensus among annotators even if they all assign discourse segment boundaries to different places. The kappa coefficient, on the other hand, is not influenced by the fact that, to a large extent, subjects will agree due to the nature of the task, because it subtracts chance agreement from the observed agreement. We computed the pair-wise kappa coefficient between all the possible pairs of coders within the same group. This yielded a total of six pairs for each of the four groupings (four coders each), and twenty-eight pairs for each model (eight coders each). The coefficient is computed as follows (from [20]):


C = number of boundaries agreed upon by both subjects
D = number of boundaries assigned by subject A but not by subject B
I = number of boundaries assigned by subject B but not by subject A
N = number of non-boundaries agreed upon by both subjects

T = C + D + I + N

Po = (C + N) / T

Pc = ((C + D)(C + I) + (N + D)(N + I)) / T²

κ = (Po − Pc) / (1 − Pc)

(Percent agreement corresponds to Po in the above formulas, and Pc stands for chance agreement.) It should be noted that in order to compare these two models we had to discard the hierarchical information that Grosz and Sidner's framework supplies, since Litman and Passonneau's produces linear segmentation. Therefore, the results obtained pertain only to the location of discourse segment boundaries.

As can be seen in the table below, our results show that Passonneau and Litman's discourse segmentation model produces higher inter-coder agreement values (average kappa = 0.73, min. = 0.58, max. = 0.92), outpacing those of Grosz and Sidner's (average kappa = 0.65, min. = 0.39, max. = 0.86) by almost ten points. This is a significant contrast in terms of reproducibility, with Grosz and Sidner's model below the 0.7 mark and Passonneau and Litman's above it. We also found that 67% of the pair-wise comparisons result in a kappa value of at least 0.7 (96% for the 0.6 threshold) for Passonneau and Litman's model, while these numbers are only 33% (respectively 79%) for Grosz and Sidner's. We also present percent agreement, for the sake of comparison with other studies.
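For concreteness, the pair-wise computation above can be sketched in a few lines of Python; the function name and the encoding of each coder's annotation as a set of boundary-site (word-gap) indices are our own illustrative choices:

    def pairwise_kappa(bounds_a, bounds_b, n_sites):
        """Pair-wise kappa between two coders' boundary annotations.

        bounds_a, bounds_b: sets of indices of the possible boundary sites
        that each coder marked as a discourse segment boundary;
        n_sites: total number of possible boundary sites in the text.
        """
        c = len(bounds_a & bounds_b)        # boundaries both coders assigned
        d = len(bounds_a - bounds_b)        # assigned by A only
        i = len(bounds_b - bounds_a)        # assigned by B only
        n = n_sites - (c + d + i)           # non-boundaries both agree on
        t = c + d + i + n                   # equals n_sites
        po = (c + n) / t                    # observed agreement
        pc = ((c + d) * (c + i) + (n + d) * (n + i)) / t ** 2  # chance agreement
        return (po - pc) / (1 - pc)

    # e.g. two coders marking boundaries over 100 word gaps
    print(pairwise_kappa({3, 17, 42, 80}, {3, 17, 45, 80}, 100))  # ~0.74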

Table 1. Observed coder agreement

                      Grosz & Sidner’s Model    Passonneau & Litman’s Model
                      avg.   min.   max.        avg.   min.   max.
kappa coefficient
  Listening           0.59   0.39   0.72        0.74   0.66   0.85
  Non-Listening       0.68   0.55   0.86        0.69   0.58   0.87
  Overall             0.65   0.39   0.86        0.73   0.58   0.92
percent agreement
  Listening           0.96   0.93   0.97        0.98   0.98   0.99
  Non-Listening       0.97   0.95   0.99        0.98   0.97   0.99
  Overall             0.96   0.93   0.99        0.98   0.97   0.996


We think that the poorer results of Grosz and Sidner's model might be ascribed to its inherent complexity. The fact that coders had to identify relations between segments caused higher variation among subjects. Listening to the speech recordings did influence the results, but not quite as we expected, considering that other studies report higher levels of agreement in the listening condition. Our findings show that coders using Grosz and Sidner's model agreed less when listening to the recordings. The different scores between the listening and the non-listening groups corroborate the hypothesis that discourse structure is reflected in prosody. In Litman and Passonneau's model the effect of hearing the speech shows up in a positive way, suggesting that prosody can make discourse structure more explicit. On the contrary, in Grosz and Sidner's model, access to prosodic information might have caused people to look for prosodic means of signaling hierarchy between segments, resulting in a more disparate segmentation. In fact, some authors comment that it has not been proven whether prosody can signal the embeddedness level of discourse segments [4]. It is important to remember that these results were obtained using spontaneous dialogues, whereas other authors have used monologues. The fact that Litman and Passonneau's model scored well demonstrates that it can be applied to dialogues and suggests that an identifiable discourse structure can be found not only in monologues but also in dialogues. We now compare our results with those reported in studies by other authors using Grosz and Sidner's model. [8] arrived at 95.1% consensus among subjects labeling from text alone and 87.7% agreement among subjects who segmented the texts while listening to the speech recordings; [11] presents several figures, according to whether the annotators listened to the sound files or not and whether the speech samples consisted of spontaneous or read speech (38% among subjects who did not listen to the recordings and worked with read speech; 75% among subjects who listened to the recordings and worked with read speech; 46% among subjects who did not listen to the recordings and worked with spontaneous speech; 78% among subjects who listened to the recordings and worked with spontaneous speech), but it should be noted that hierarchical relations among segments were included in the computation of the figures exhibited in [11] (which, at least in part, accounts for why our figures are much higher). The picture that seems to emerge from these data, and which is consistent with our findings, is that coders seem to agree less on boundary location when they have access to sound, but agree more on the hierarchical relations among segments when they label from both speech and text. Passonneau and Litman [17] report the following measures of inter-coder agreement: 0.74 recall, 0.55 precision, 0.09 fallout, and 0.11 error (see [17] or [20] on how to compute these). They were calculated using the majority opinion as reference (i.e., assuming the majority is right). If a subject identifies all 'correct' boundaries, recall is 1; if a subject does not identify more boundaries than the ones that are 'correct', precision is 1; fallout measures how many non-boundaries were identified as boundaries; and error tells how deviant a subject's response was from the majority opinion (ideally, recall and precision should be 1, and fallout and error 0).
We present our corresponding results (for Passonneau and Litman's model): 0.79 recall, 0.9 precision, 0 fallout, 0.01 error for the listening group; 0.82 recall, 0.84 precision, 0.01 fallout and error for the non-listening group; 0.81 recall, 0.87 precision, 0 fallout and 0.01 error overall (we used the segmentation resulting from the boundaries identified by at least 9 subjects out of 16 as reference). Our results show that listening to the speech files increased precision, but also caused a small decrease in recall (meaning that the listening group that used this method of segmentation divided our texts into fewer discourse units than the corresponding non-listening group). Our findings strongly suggest that dialogues do possess some sort of clear information structure. However, we must acknowledge that the consensus level obtained in this experiment (reported with percent agreement and recall, precision, etc.) is much higher than that obtained in other experiments, to a large extent because we allowed subjects to segment texts between any two words, which greatly increased the number of possible boundary sites that were not classified as the boundary of a discourse unit by any subject. Nonetheless, we also present kappa values, which presumably are not affected by this.
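These reference-based measures reduce to a few set operations; the sketch below uses the standard definitions (see [17] or [20] for the exact formulation employed in those studies), with our own function and variable names:

    def against_reference(subject, reference, n_sites):
        """Recall, precision, fallout, and error of one coder's boundaries
        against a reference segmentation (e.g. the boundaries identified
        by the majority of subjects). Annotations are sets of
        boundary-site indices; reference is assumed non-empty.
        """
        hits = len(subject & reference)          # correctly placed boundaries
        false_alarms = len(subject - reference)  # spurious boundaries
        misses = len(reference - subject)        # missed boundaries
        non_boundaries = n_sites - len(reference)
        recall = hits / len(reference)
        precision = hits / len(subject) if subject else 1.0
        fallout = false_alarms / non_boundaries
        error = (misses + false_alarms) / n_sites
        return recall, precision, fallout, error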

4 Conclusions

The two models employed in this experiment use speaker intention as a criterion to segment discourse. When participants were instructed to segment discourse, they were also asked to provide a description of the intentions underlying each segment. We want to use that information in a future analysis to check whether different segmentations were caused by discourse ambiguity, which may lead to different results. The experiment described in this paper was only a preliminary study to enable us to choose the discourse segmentation method that will be used in our work on the relationship between discourse and prosody in European Portuguese; testing the two models is not the end goal of our project. This meant that we did not work with a large corpus, though we are aware that a larger one could have produced a different picture. The results observed so far lead us to choose Passonneau and Litman's model for our future research. As was shown, this method displayed a fair level of inter-coder consensus, well above Grosz and Sidner's. If the level of agreement obtained proves not to be satisfactory for the purposes of our research, we may adapt the chosen model so that it produces results further above the 0.7 mark.

References
1. Swerts, M. and R. Collier: On the Controlled Elicitation of Spontaneous Speech. Speech Communication 11 (4–5) (1992) 463–468
2. Swerts, M. and R. Geluykens: The Prosody of Information Units in Spontaneous Monologue. Phonetica 50 (1993) 189–196
3. Swerts, M. and R. Geluykens: Prosody as a Marker of Information Flow in Spoken Discourse. Language and Speech 37 (1) (1994) 21–43
4. Swerts, M.: Prosodic Features at Discourse Boundaries of Different Strength. Journal of the Acoustical Society of America 101 (1) (1997) 514–521
5. Swerts, M., R. Collier and J. Terken: Prosodic Predictors of Discourse Finality in Spontaneous Monologues. Speech Communication 15 (1994) 79–90
6. Cutler, A., D. Dahan and W. Donselaar: Prosody in the Comprehension of Spoken Language: A Literature Review. Language and Speech 40 (2) (1997) 141–201
7. Pijper, J.R. and A.A. Sanderman: On the Perceptual Strength of Prosodic Boundaries and its Relation to Suprasegmental Cues. Journal of the Acoustical Society of America 96 (4) (1994) 2037–2047
8. Grosz, B. and J. Hirschberg: Some Intentional Characteristics of Discourse Structure. Proceedings of the International Conference on Spoken Language Processing (1992) 429–432
9. Grosz, B.J. and C.L. Sidner: Attention, Intention and the Structure of Discourse. Computational Linguistics 12 (3) (1986) 175–204
10. Hirschberg, J. and B. Grosz: Intonational Features of Local and Global Discourse Structure. Proceedings of the Workshop on Spoken Language Systems (1992) 441–446
11. Hirschberg, J., C.H. Nakatani and B.J. Grosz: Conveying Discourse Structure through Intonation Variation. Proceedings of the ESCA Workshop on Spoken Dialogue Systems: Theories and Applications, Vigsø, Denmark, ESCA (1995)
12. Litman, D.J. and R. Passonneau: Empirical Evidence for Intention-Based Discourse Segmentation. Proceedings of the ACL Workshop on Intentionality and Structure in Discourse Relations (1993)
13. Litman, D.J. and R. Passonneau: Combining Multiple Knowledge Sources for Discourse Segmentation. Proceedings of the 33rd ACL (1995) 108–115
14. Nakatani, C.H., B.J. Grosz and J. Hirschberg: Discourse Structure in Spoken Language: Studies on Speech Corpora. Proceedings of the AAAI Symposium Series: Empirical Methods in Discourse Interpretation and Generation (1995)
15. Nakatani, C.H., B.J. Grosz, D.D. Ahn and J. Hirschberg: Instructions for Annotating Discourses. Technical Report TR-21-95, Center for Research in Computing Technology, Harvard University, Cambridge, MA (1995)
16. Passonneau, R.J. and D.J. Litman: Intention-Based Segmentation: Human Reliability and Correlation with Linguistic Cues. Proceedings of the ACL (1993)
17. Passonneau, R.J. and D.J. Litman: Discourse Segmentation by Human and Automated Means. Computational Linguistics (1997)
18. Ramilo, M.C. and T. Freitas: A Linguística e a Linguagem dos Média em Portugal: descrição do Projecto REDIP. Paper presented at the XIII International Congress of ALFAL, San José, Costa Rica (2002)
19. Carletta, J.: Assessing Agreement on Classification Tasks: The Kappa Statistic. Computational Linguistics 22 (2) (1996) 249–254
20. Flammia, G.: Discourse Segmentation of Spoken Dialogue: An Empirical Approach. Ph.D. thesis, MIT (1998)
21. Beckman, M.E.: A Typology of Spontaneous Speech. In: Y. Sagisaka, N. Campbell and N. Higuchi (eds.): Computing Prosody: Computational Models for Processing Spontaneous Speech. Springer, New York (1997) 7–26
22. Collier, R.: On the Communicative Function of Prosody: Some Experiments. IPO Annual Progress Report 28 (1993) 67–75
23. Oliveira, M.: Pausing Strategies as Means of Information Processing in Spontaneous Narratives. In: B. Bel and I. Marlien (eds.): Proceedings of the 1st International Conference on Speech Prosody, Aix-en-Provence, France (2002) 539–542
24. Oliveira, M.: Prosodic Features in Spontaneous Narratives. Ph.D. thesis, Simon Fraser University (2000)
25. Oliveira, M.: The Role of Pause Occurrence and Pause Duration in the Signalling of Narrative Structure. In: E. Ranchhod and N. Mamede (eds.): Advances in Natural Language Processing. Third International Conference, PorTAL 2002, Faro, Portugal (2002) 43–51
26. Lehiste, I.: Some Phonetic Characteristics of Discourse. Studia Linguistica 36 (2) (1982)

Reusability of Dictionaries in the Compilation of NLP Lexicons∗
Bento C. Dias-da-Silva, Mirna F. de Oliveira, and Helio R. de Moraes
Faculdade de Ciências e Letras, Universidade Estadual Paulista, Rodovia Araraquara-Jau Km 1, 14800-901 Araraquara, São Paulo, Brazil
[email protected], [email protected], [email protected]

Abstract: This paper discusses particular linguistic challenges in the task of reusing published dictionaries, conceived as structured sources of lexical information, in the compilation process of a machine-tractable thesaurus-like lexical database for Brazilian Portuguese. After delimiting the scope of the polysemous term thesaurus, the paper focuses on the improvement of the resulting object by a small team, in a form compatible with and inspired by WordNet guidelines, comments on the dictionary entries, addresses selected problems found in the process of extracting the relevant lexical information from the selected dictionaries, and provides some strategies to overcome them.

1 Introduction

In their most ordinary use, published dictionaries are restricted to supplying the general public with the "correct" spelling and the "attested" senses of unknown words. But for Human Language Technology researchers, published dictionaries are an important resource to mine for many different sorts of lexical information. It must be recognized that most of them offer much more information than just spelling and word sense records. They are "fruits of the cumulative wisdom of generations of lexicographers", and "the sheer breadth of coverage makes them indispensable" for natural language processing [11, page 365]. Dictionary entries, in fact, specify not only etymological, phonological, syntactic, definitional, collocational, variational, and register information about words, but sense relations such as synonymy and antonymy as well. It is also a fact that lexicographers are aware that compiling dictionary entries involves making very hard decisions on how to deal with polysemy and homonymy. In other words, they have to decide whether to lump or split word senses, or whether to create fresh new entries for the same word form. Such decisions, however, are arbitrary, for lexicographers draw on their own personal experience and expertise to make their decisions; and probably that is the only way they manage to compile their unique store of words. Thus, reusing lexicographical information requires caution.

∗ This research is sponsored by CNPq and FAPESP-São Paulo, Brazil.


As a rule, if we want to use dictionary lexicographical information in natural language processing projects, it must be mined and filtered carefully.1 Accordingly, the purpose of this paper is to discuss real decision problems we had to face during the task of extracting and inferring either the explicit or the implicit synonymy and antonymy relations from five Brazilian Portuguese published dictionaries, our reference corpus (henceforth RC), in the compilation process of a machine-tractable Thesaurus-like Lexical Database for Brazilian Portuguese, henceforth TeP (see [6] for a complete description of the database itself). The TeP was developed in a two-year span (2000–2001) by a small team of four linguists and a computer scientist. Resorting to Dias-da-Silva's [5] methodology for developing natural language processing projects, the team split up the task into three complementary phases: Linguistic, Representational, and Computational. This paper focuses on the discussion of selected problems that emerged from a specific task that was part of the Linguistic domain: the extraction of lexical information from the RC. In the next sections and subsections, we delimit the scope of the term thesaurus, present the RC and comment on the key features of the published dictionary entries that make it up, describe the mining procedure, address selected problems we encountered in the process of extracting the relevant lexical information from the RC, and, when possible, provide strategies to overcome them. Kilgarriff's classification scheme of the word sense distinctions the lexicographer attempts to capture will serve us well in our discussion (see [11] for details).2

2 Preliminaries

2.1 The Thesaurus Denotations

Instead of searching for an answer to Kilgarriff's query "What's in a Thesaurus?" (see [13] for the relevant discussion), we list below the denotations the term thesaurus has in Brazilian Portuguese (henceforth BP), and single out the one we had in mind when we embarked on the compilation of the TeP: Object 6. Dias-da-Silva, Oliveira, and Moraes [7, page 190] surveyed six different types of objects that are referred to by the term thesaurus:
1. An inventory of the vocabulary items in use in a particular language (Object 1);
2. A thematically based dictionary, that is, an onomasiologic dictionary (Object 2);
3. A dictionary containing a store of synonyms and antonyms (Object 3);
4. An index to information stored in a computer, consisting of a comprehensive list of subjects concerning which information may be retrieved by using the proper key terms (Object 4);
5. A file containing a store of synonyms that are displayed to the user during the automatic proofreading process (Object 5);
6. A dictionary of synonyms and antonyms stored in memory for use in word processing (Object 6).

1 Acquiring such information is a hard problem and has usually been approached by reusing, merging, and tuning existing lexical material. This initiative has been frequently reported in the literature (see [11, 12] and the papers cited therein).
2 The authors gratefully thank and acknowledge the anonymous reviewers for their comments and suggestions.

2.2 The Reference Corpus

The compilation of a dictionary is a time-consuming activity and requires a team of more than fifty lexicographers, each responsible for (i) selecting the headwords which will head the dictionary entries, (ii) defining the number of senses for each headword, and (iii) exemplifying the senses with sentences and expressions from their corpora. The advent of computers has allowed lexicographers to use machine-readable large-scale corpora in their work, establishing procedures as follows [12]: (a) gather concordances from the corpus; (b) cluster the concordances around nuclear sense clusters; (c) lump or split nuclear clusters; (d) encode the relevant lexical information by means of the highly constrained language of dictionary definitions. Given our small team and the two-year time frame stipulated for the project, we bypassed those procedures and decided to reuse the five published dictionaries, which were chosen for the following reasons: (i) their being "fruits of the cumulative wisdom of generations of lexicographers", and their "sheer breadth of coverage"; (ii) the relevant sense relations one of the five dictionaries registers can be complemented by similar pieces of information found in the other four dictionaries; (iii) instead of using the Aristotelian analytical definition (i.e., genus and differentiae) to define word senses, they extensively use synonym and antonym word forms in their defining procedure, a feature that sped up the process of collecting large numbers of synonym and antonym word forms. Two of them [10, 15] are the most traditional and bulkiest BP dictionaries. Their electronic versions sped up the process of synonym and antonym mining. Barbosa [1] and Fernandes [9] are specific dictionaries of synonyms and antonyms. The fifth dictionary is a dictionary of verbs [2] that uses Chafe's semantic classification of verbs [3]. For each verb headword, the dictionary registers the relevant Chafe categories ("state", "action", "process", and "action-process"), its sense definitions and/or synonyms, its grammatical features, its potential argument structures, its selectional restrictions, and sample sentences extracted from corpora.

2.3 The TeP

The RC, the Thesaurus Editor (i.e., the graphical authoring tool created to feed and manage the TeP; see [6] for details), and the strategy of "mining" lexical information from published dictionaries that we present in this paper made it possible to compile 44678 word forms distributed throughout 19868 synonym sets [7].

3 The "Mining" Strategy and Pitfalls

3.1 "Mining"

First, it is necessary to define the synonymy concept we have adopted. The TeP compilers had to agree upon a specific notion of synonymy throughout the compilation process so as to ensure the consistency of the synonym sets. Considering that absolute synonyms are rare in language, if they exist at all, Cruse's [4, page 88] synonymy definition was adopted: "X is a cognitive synonym of Y if (i) X and Y are syntactically identical, and (ii) any grammatical declarative sentence S containing X has equivalent truth conditions to another sentence S1, which is identical to S, except that X is replaced by Y." The best way to understand how the compilers "mined" the RC for synonyms is to follow a real example. Let us take, as the starting point of the process, the BP verb lembrar (English: "to remember"). Weiszflog [15] distinguishes seven senses. After collecting the synonyms, and disregarding their definitions, the following synonym sets can be compiled:
1. {lembrar, recordar} (English: {"to remember", "to recall"})
2. {lembrar, advertir, notar} (English: {"to remember", "to warn", "to notify"})
3. {lembrar, sugerir} (English: {"to suggest", "to evoke", "to hint"})
4. {lembrar, recomendar} (English: {"to remember", "to commend"})
After that preliminary analysis, the linguist checks the consistency of the four synonym sets by looking up the dictionary synonym entries for the remaining five verbs: recordar, advertir, notar, sugerir, and recomendar. Accordingly, the linguist, for example, looks up the dictionary entry for the verb recordar. Its first sense is given by the paraphrase trazer à memória (English: "to call back to memory"), and its fourth sense by the synonym lembrar. As these two senses are very close, and the examples confirm the similarity between the two, synonym set 1 is said to be consistent. The very same process is repeated for every verb listed above until the list is exhausted. The analytical cycle then begins again by collecting the synonyms from the next dictionary entry in alphabetical order. It should be pointed out that, when the linguist analyzes the verb esquecer (English: "to forget"), the canonical BP antonym of lembrar, he finds only one synonym for it, the verb olvidar (Vulgar Latin: "oblitare"; English: "to efface"), so, after the consistency analysis, the following synonym set is compiled:
5. {esquecer, olvidar}
The dictionary also registers this antonymy indirectly: lembrar and esquecer are defined by means of the paraphrases trazer à memória and perder a memória de (English: "to stop remembering"), respectively. Thus, the information checked through cross-reference of entries confirms the antonymic pair (lembrar, esquecer), which stresses the importance of examining paraphrases carefully. Just for the record, synonym set 1 and its antonym synonym set are transcribed below:
6. {amentar2, comemorar, ementar, escordar1, lembrar, memorar, reconstituir, recordar, relembrar, rememorar, rever1, revisitar, reviver, revivescer, ver}
7. {deslembrar, desmemoriar, esquecer, olvidar}
In the next section, some real problems are presented. The examples are occurrences of specific kinds of problems, and they reveal the necessity of data checking during the reuse process.
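The symmetry check that drives this cross-referencing of entries can be pictured with a small Python sketch; the function and the toy entries below are purely illustrative and do not reproduce the TeP's actual tooling (described in [6]):

    def asymmetric_pairs(entries):
        """List synonym pairs registered in one direction only.

        entries maps each headword to the set of synonyms its dictionary
        entry lists; since synonymy is symmetric, a pair (A, B) without
        the converse (B, A) flags an entry the compiler must check.
        """
        flagged = []
        for word, synonyms in entries.items():
            for syn in synonyms:
                if word not in entries.get(syn, set()):
                    flagged.append((word, syn))
        return flagged

    # Toy entries modeled on the lembrar/recordar example above
    entries = {
        "lembrar": {"recordar", "advertir"},
        "recordar": {"lembrar"},
        "advertir": set(),       # fails to list lembrar back
    }
    print(asymmetric_pairs(entries))  # [('lembrar', 'advertir')]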

3.2 Pitfalls

At the heart of the task of compiling dictionaries for the general public is the specification of word sense distinctions. Kilgarriff [11, pages 372–374], on analyzing the LDOCE entries [14], proposed a classification scheme for categorizing widespread word sense distinctions made by lexicographers: "Generalizing Metaphors", i.e., a sense that is the generalization of a specific sense (e.g. martelar, "to hammer": sense 1 "to hit with a hammer", sense 2 "to insist"); "Must-be-theres", i.e., one sense is a logical consequence of the other sense (e.g. casar, "to marry": sense 1 "to unite by marriage", sense 2 "to ally"); "Domain Shift", i.e., one sense is the extension of the application of the other sense to another situation (e.g. leve, "light": sense 1 "not heavy, with little weight", sense 2 "nimble, agile"); and "Natural and social kinds", i.e., owing to a non-linguistic fact, the entities or situations identified by the different word senses have distinct denotata, and although the denotata have many attributes in common, they will always remain distinct classes of things (e.g. asa, "wing": sense 1 "feathered bird's member used to fly", sense 2 "one of the horizontal airfoils on either side of the fuselage of an airplane"). This typology aided the TeP team of linguists both in (i) identifying the kind of distinctions the lexicographers had in mind when they had to take their decisions during the compilation process of their dictionaries, and in (ii) avoiding carrying over published dictionary flaws to the TeP.

3.2.1 Three Classes of Problems
In the process of extracting information from the dictionary entries, three categories of problems first identified by Kilgarriff [11] were detected: (a) "necessity"; (b) "consistency"; (c) "centrality". Compiling the TeP synonym sets required reflection on whether a particular semantic feature or grammatical specification was a "necessary" feature of a lexical item in a particular sense. Checking the RC entries' "consistency" implied observing symmetry, an important characteristic of synonymy which, in general, was not observed in the RC: this relation establishes that if A is a synonym of B, B is necessarily a synonym of A. The issue of "centrality" concerned the sense variation of a particular sense, i.e., how wide the sense variation can be before a second sense should be posited instead of only one. With respect to the compilation of the TeP, this problem was pervasive and hard to solve, because synonymy is not a transitive relation: if A is a synonym of B, and B is a synonym of C, C is not necessarily a synonym of A [8].

3.2.2 Selected Strategies
As the majority of the problems dealt with in this section are tokens of specific types, one example of each will be presented, using (a) the sense distinctions and (b) the problem types sketched in the previous sections. As the structure of the lexicon is complex, (a) and (b) alone may not be enough to solve the problems. Although the linguists focused on the specification of synonymy and antonymy, they had to be aware of logical-conceptual relations such as hyponymy, for lexicographers often consider superordinate terms (hypernyms) synonyms. The first problem to be addressed is the "Generalizing Metaphors". The BP verbs acarar, encarar, arrostar ("to stare") mean ficar face a face ("to be face to face with"), and they also mean enfrentar ("to face"). At first glance, one is tempted to merge the two senses into the same synonym set: {acarar, encarar, arrostar, confrontar, enfrentar}. This sense lumping is mistaken, though. Although acarar may denote a less specific sense than the other members of its original set, a TeP user would not be able to identify its most specific sense. This example demonstrates how useful the identification of generalizing metaphors is in the resolution of meaning centrality problems.
The cases related to generalizing metaphors, which generate two synonym sets with common elements, can be easily solved by the insertion of glosses for each sense, a future work. Splitting into two similar senses was adopted: {acarar, encarar, arrostar} (English: {"to stare", "to gaze"}) and {acarar, encarar, arrostar, confrontar, enfrentar} (English: {"to face", "to confront"}). The second category of problems has to do with the "Must-be-theres". Borba [2] distinguishes only one sense for the verb visualizar ("to visualize"): perceber pela visão, conceber (sem ver) uma imagem mental de ("to perceive through vision; to conceive, without seeing, a mental image of"). The first part of the definition (perceber pela visão, "to perceive through vision") is clearly a paraphrase of ver ("to see"), as can be confirmed by the example Assustei-me ao visualizar à minha frente a imagem de dois homens de clã ("I got scared when I visualized the image of two clansmen before me"). In this example, we can replace visualizar by ver without any change to the sentence sense. But if visualizar is replaced by imaginar ("to imagine"), which is a synonym of the second part of the definition ("to conceive, without seeing, a mental image of"), as illustrated by the sentence podemos talvez alimentar a esperança de visualizar/imaginar todas as novas dimensões da realidade ("perhaps we can hope to visualize/imagine all the new dimensions of reality"), a different sense can be distinguished. Borba [2] precisely identified both senses and illustrated them with clear examples, but the two senses were not split over two different definitions. Maybe the lumping together of the two senses is the result of the lexicographer's personal judgment that the first sense ("to perceive through vision") is predictable from the sense "vision", explicit in the verb stem. The other RC dictionaries present only the sense "to imagine". Once the occurrence of two distinct senses was clearly identified, two different synonym sets were inserted in the TeP: {ver, visualizar, enxergar, ...} (English: {"to see 1"}) and {ver, visualizar, imaginar} (English: {"to visualize", "to envision", "to project", "to fancy", "to see", "to figure", ...}). The third problem has to do with "necessity" and is a "Domain Shift", illustrated by exalar ("to exhale"), which is defined as emitir ou lançar de si emanações odoríficas ou fétidas ("to give off odoriferous or fetid emanations"). According to this definition, the verb exalar should be inserted in two different synonym sets related to each other by antonymy: {exalar, feder, catingar} (English: {"to stink", "to reek"}) and {exalar, recender} (i.e., exalar cheiro bom, "to exhale a good smell"), an inconsistent pair. To solve the problem, we point out that exalar needs a specific complement to define its sense. Something similar occurs with the verb cheirar ("to smell"): compare O cadáver já está cheirando ("The corpse is already smelling") with O assado já está cheirando ("The roast is already smelling"). As a solution, an underspecified synonym set was inserted into the TeP: {cheirar, exalar, trescalar, ...} (English: {"to exhale", "to give forth", "to emanate"}), with the sense of "to exhale a strong (either good or bad) smell". The last problem has to do with "centrality" and "consistency". Borba [2] considers the verbs urgir, forçar, obrigar, impelir ("to urge", "to force", "to obligate", "to exhort") synonyms because they are interchangeable in the following context: Urgiam-nos de todos os lados para que caminhássemos ("They urged us in all possible manners for us to walk").
Weiszflog [15] also registers the same lexical items as synonyms, but exemplifies them with an example whose sense is specified by the verbs empurrar ("to push", "to force") and compelir ("to impel", "to force"). The information checking process (see 3.1), though, showed that the synonym set {urgir, compelir, forçar, obrigar, impelir, ...} could be created, even though the dictionaries did not register empurrar ("to push") with this, or with any other sense of urgir ("to urge") whatsoever. Thus, although Weiszflog discriminated two different senses, the compilers agreed to establish only one. Two kinds of problems are illustrated by this example: (i) "centrality", because the central problem is to define whether empurrar ("to push") should be inserted in that synonym set; (ii) "consistency", because Weiszflog established two senses where the compilers expected only one. In this case, where the dictionaries registered two different senses while the compilers identified only one, only one sense was inserted: {urgir, compelir, forçar, obrigar, impelir, ...}. The lexical item empurrar ("to push") was not inserted in the synonym set, for no relevant contextual occurrence was found in the RC.

4 Final Remarks

As discussed in the introduction, reusing published dictionaries for Human Language Technology purposes is a very productive work strategy. As the paper showed, however, care must be taken not to carry over their flaws into machine-tractable lexicons. A sample of inconsistencies from BP dictionaries was presented, and some ways to overcome them were sketched. Despite their imperfections, the dictionaries we selected as our RC proved to be valuable resources of lexical-semantic information. Thanks to them, and to the systematic "mining" process and filtering strategies, the TeP, with its 20000 synonym sets, can be refined and upgraded into the Wordnet.Br. Accordingly, further steps will involve the specification of glosses for each sense, of example sentences and expressions for each word form, and of the logical-conceptual relations of meronymy/holonymy and hyponymy/hypernymy.

References
1. Barbosa, O.: Grande Dicionário de Sinônimos e Antônimos. Ediouro, Rio de Janeiro (1999)
2. Borba, F.S. (coord.): Dicionário Gramatical de Verbos do Português Contemporâneo do Brasil. Editora da Unesp, São Paulo (1990)
3. Chafe, W.: Meaning and the Structure of Language. The University of Chicago Press, Chicago (1970)
4. Cruse, D.A.: Lexical Semantics. Cambridge University Press, New York (1986)
5. Dias-da-Silva, B.C.: Bridging the gap between linguistic theory and natural language processing. In: Caron, B. (ed.): 16th International Congress of Linguists. Pergamon-Elsevier Science, Oxford (1998) 10 p.
6. Dias-da-Silva, B.C., Oliveira, M.F., Hasegawa, R., Moraes, H.R., Amorim, D., Paschoalino, C., Nascimento, A.C.: A construção de um thesaurus eletrônico para o português do Brasil. In: Proceedings of the 5th PROPOR – Encontro para o processamento computacional da língua portuguesa escrita e falada, Atibaia, Brazil (2000) 01–10
7. Dias-da-Silva, B.C., Oliveira, M.F., Moraes, H.R.: Groundwork for the Development of the Brazilian Portuguese Wordnet. In: Ranchhod, E.M., Mamede, N.J. (eds.): Advances in Natural Language Processing. Springer-Verlag, Berlin (2002) 189–196
8. Fellbaum, C. (ed.): WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA (1998)
9. Fernandes, F.: Dicionário de Sinônimos e Antônimos da Língua Portuguesa. Globo, São Paulo (1997)
10. Ferreira, A.B.H.: Dicionário Aurélio Eletrônico Século XXI (versão 3.0). Lexicon, São Paulo (1999)
11. Kilgarriff, A.: Dictionary word sense distinctions: an inquiry into their nature. Computers and the Humanities 26 (1993) 365–387
12. Kilgarriff, A.: I don't Believe in Word Senses. Computers and the Humanities 31 (1997) 91–113
13. Kilgarriff, A., Yallop, C.: "What's in a Thesaurus?". In: Proceedings of the 2nd Conference on Language Resources and Evaluation, Athens, Greece (2000) 8 p.
14. Summers, D. (ed.): Longman Dictionary of Contemporary English. Longman, Essex (1995)
15. Weiszflog, W. (ed.): Michaelis português – moderno dicionário da língua portuguesa (versão 1.1). DTS Software Brasil Ltda, São Paulo (1998)

Homonymy in Natural Language Processes: A Representation Using Pustejovsky’s Qualia Structure and Ontological Information∗
Claudia Zavaglia1 and Juliana Galvani Greghi2
1 Universidade Estadual Paulista, UNESP/IBILCE, São José do Rio Preto, SP, Brazil [email protected]
2 Núcleo Interinstitucional de Lingüística Computacional NILC, USP/São Carlos, Brazil [email protected]

Abstract. This paper presents a proposal for the semantic treatment of ambiguous homographic forms in Brazilian Portuguese and offers linguistic strategies for its computational implementation in Systems of Natural Language Processing (SNLP). Pustejovsky's Generative Lexicon was used as the theoretical model. From this model, the Qualia Structure – QS (with its Formal, Telic, Agentive and Constitutive roles) was selected as one of the linguistic and semantic expedients for achieving the disambiguation of homonymous forms. So that the analyzed and treated data could be manipulated, we built a Lexical Knowledge Base (LKB) in which lexical items are correlated and interconnected by different kinds of semantic relations in the QS and by ontological information.

1 Introduction

The objective of this paper is to give researchers in computational linguistics, specialists in computational implementation, and computational lexicographers (that is, scientists in all the fields that work with and are interested in Natural Language Processing, NLP) procedures and strategies of a linguistic nature to be used in the elaboration of lexical repertoires for computational treatment and in the construction of Linguistic Resources for Brazilian Portuguese. Our attention is turned to one of the linguistic phenomena present in natural language that becomes a real obstacle to the efficient elaboration and treatment of this type of lexicon, namely the ambiguity brought about by homonyms. Taking this as the starting point, we propose a type of computational-linguistic treatment for homonyms in Brazilian Portuguese, specifically for homographs. For the proposed lexical structure, one aspect of James Pustejovsky's [1] Generative Lexicon (GL) model, namely the Qualia Structure and its Formal, Constitutive, Telic and Agentive roles, was used as the theoretical framework. Based on these aspects we suggest a structural-semantic approach for the homographic forms studied; furthermore, we suggest the use of an ontology of concepts to categorize these forms. By suggesting these tactics for the description of homographic items, our goal is to provide resources to recover the amplitude and multiplicity of meanings, with a view to the disambiguation of the meanings contained in each one of these forms.

∗ Work partially financed by the CNPq – Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil.

2 The Qualia Structure

To Pustejovsky [1], different word meanings are associated with distinct lexical items. In his decompositional view, lexical items are minimally decomposed into templates of set features. Thus, the emergence of a generative structure to compose lexical meanings is possible, defining the format of the conditions for the semantic expression of a language. This same author proposes a new path for the decompositional view, focused on the generative or compositional aspect of semantics rather than on decomposition into a specific set of primitives. In this way, a generative lexicon is characterized as a computational system that involves at least four levels of representation: (1) Argument Structure, which specifies the number and type of logical arguments and how they are syntactically expressed; (2) Event Structure, defining the event type of a lexical item and a sentence, and including event types such as STATE, PROCESS and TRANSITION that may have a subevent structure; (3) Qualia Structure, which includes modes of explanation distributed among four roles: FORMAL, CONSTITUTIVE, TELIC, and AGENTIVE; (4) Lexical Inheritance Structure, identifying how a lexical structure is related to other structures and its contribution to the global organization of the lexicon. The Qualia Structure specifies four essential roles of a word meaning (or Qualia): (i) Constitutive, i.e., that which expresses the relation between an object and its constitutive parts; (ii) Formal, i.e., that which distinguishes the object within a larger domain; (iii) Telic, i.e., that which expresses the objective/scope and function of the object; (iv) Agentive, i.e., that which considers the factors involved in the origin of the object. The Qualia structure is, in fact, much closer to the structural description of a sentence in a syntactic analysis, inasmuch as it admits something like the transformational operations used to capture or retrieve both the polymorphic behavior and the meaning of a lexical item in the phenomenon of novel word creation. For Pustejovsky, Qualia is, in every way, like a set of property events associated with the lexical item that best explains what that word means. For example, to understand what lexical items like cookie and beer mean, one should recognize that they are, respectively, a type of food and a type of drink. While cookie is a term that describes a specific type of object in the world, the expression "foodstuff" denotes a functional reference to what is "done with" something, i.e., how this same thing is used. In this case, the term is partly defined by the fact that food is something to be eaten. Similar observations can be made for beer. The Telic quale for the noun food encodes the functional aspect of the meaning, represented as [TELIC = to eat]. In the same way, the distinction between semantically related nouns such as novel and dictionary is derived from "what is done with" these objects, which is different. Thus, although these two objects may be "books" in the general sense, the use made of each one of them is different: while a "novel" is for "reading", a "dictionary" is for "consultation". Consequently, the Qualia values encode the functional information for "novel" and "dictionary" in a distinct form: [TELIC = to read] for "novel" and [TELIC = to consult] for "dictionary". Obviously, the distinction between these two objects is not made only by means of these different roles in the Qualia telic structure. The type of textual structure of each one of them is recovered in the "constitutive" role of the Qualia Structure. Whereas "novel" is characterized as a narrative or story, "dictionary" is defined as a list of words. Thus, we have the representation [CONST1 = narrative] for "novel" and [CONST = list of words] for "dictionary". These two objects are characterized in an identical form in the formal role: [FORMAL = book] for "novel" and [FORMAL = book] for "dictionary". On the other hand, they also differ in the agentive role of the Qualia Structure, that is, in how their "existence" came about: while a "novel" is written, a "dictionary" is compiled, that is, organized: [AGENT2 = written] for "novel" and [AGENT = organized] for "dictionary".

3 The Linguistic Phenomenon of Homonymy

By homonymy we understand a linguistic phenomenon in which two words are identical at the expression level, that is, perfectly identical forms distinguished semantically (one signifier for two meanings, at the content level), or in which two grammatical constructions are identical and lead to ambiguity. The first case is lexical homonymy and the second structural homonymy. Our specific interest in this paper is in lexical homonymy, as defined in detail by Zavaglia [3]: lexical homonyms possess equal graphic or phonetic forms. In the first case, the words retain their graphic identities (homographs) and, in the second, their sound identities (homophones). Thus, we have homographic words that:
(i) have distinct meanings and are graphically or orally identical, which is then called Semantical Homonymy, as in: banco1 "object made for sitting" X banco2 "place where we make money deposits"; ponto1 "portion of space designated with precision" X ponto2 "degree determined on a scale of values" X ponto3 "each part of a speech, text, or list of topics of a program" X ponto4 "every extension of the thread between two holes made by needles3"; importar1 "to bring something from a foreign country" X importar2 "to be significant, to amount to";
(ii) are distinct because they belong to different grammatical classes and are identical in pronunciation, in this case called Categorial Homonymy, as, for example: abandono1 (noun) X abandono2 (verb); ameaça1 (noun) X ameaça2 (verb);
(iii) are distinct in their etyma and phonetically and graphically identical, in this case named Etymological Homonymy, as, for example: manga1 "fruit" [From Malayalam manga.] X manga2 "part of clothing" [From Lat. manica, 'tunic sleeve'.];
(iv) are phonetically distinct, and are thus named Heterophonous Homonymy4, where, for the vowel "e", the noun is pronounced with [e] and the verb with [ε], as in the following examples: sossego1 (noun) X sossego2 (verb); aperto1 (noun) X aperto2 (verb).

1 Constitutive
2 Agentive
3 [4]
4 Forms possessing identical spelling but different pronunciation.

4 Homonymy in Qualia Structure

In homonymous forms, the Qualia Structure plays a decisive part in the verification and distinction of meanings. Let us look at an example of representation based on Homonymous Single-Category Monosemous Forms – HSMF, that is, homographic forms of identical grammatical category where each one carries only one sense, here the two senses of “banco”: {banco$0_1 CONST = furniture; FORMAL = object; TELIC = to sit; AGENT = material} and {banco$0_2 CONST = company; FORMAL = institution; TELIC = to negotiate; AGENT = place}.
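Entries of this kind map naturally onto simple record structures. The following sketch is our own illustration in Python (the LKB itself was implemented with Delphi and a relational database, as described in Section 7.1 below); the field values are taken from the banco example above.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Qualia:
        """Qualia structure of one sense of a homographic form [1]."""
        constitutive: str  # relation between the object and its parts
        formal: str        # what distinguishes it within a larger domain
        telic: str         # its purpose or function
        agentive: str      # factors involved in its origin

    # The two Homonymous Single-Category Monosemous Forms of "banco"
    LEXICON = {
        "banco$0_1": Qualia(constitutive="furniture", formal="object",
                            telic="to sit", agentive="material"),
        "banco$0_2": Qualia(constitutive="company", formal="institution",
                            telic="to negotiate", agentive="place"),
    }

    def distinguishing_roles(a, b):
        """Return the qualia roles whose values differ between two senses."""
        qa, qb = LEXICON[a], LEXICON[b]
        return [r for r in ("constitutive", "formal", "telic", "agentive")
                if getattr(qa, r) != getattr(qb, r)]

    print(distinguishing_roles("banco$0_1", "banco$0_2"))
    # ['constitutive', 'formal', 'telic', 'agentive']

For these two HSMF all four roles differ, which is precisely what makes the qualia representation sufficient to distinguish the senses.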

5 Ontology of Concepts for Brazilian Portuguese

For Gruber [5], ontologies allow the knowledge of the world to be shared and reused. In fact, according to the author, “the term ontology means a specification of concepts, that is, an ontology is a formal description of concepts and of the relations existing between them in a determined domain” [5] apud [6]. According to Ortiz [7], p. 2, ontology-based semantics in Natural Language Processing (NLP) serves: (a) as support for the translation of lexical gaps; (b) as support for disambiguation, both lexical and structural; and (c) to give adequate treatment to the phenomenon of synonymy. At the same time, Tiscornia [8], p. 1, says that the development of computational applications requires treating the models of human cognitive mechanisms and the process of knowledge formation individually, and that formal ontology, one of the most recent approaches to knowledge modeling, is, in reality, a revisitation of philosophical and linguistic theories. In this sense, ontological categories are “subdivisions of a classification system used to catalog knowledge, for example, based on data” [8], p. 4. The most common taxonomy of an ontology is of the hereditary type, where classes and sub-classes maintain hierarchical relationships in the shape of trees. The hierarchical taxonomy can be verified once we have axioms of the type: (1) every land animal is an animal, and therefore a living entity, a concrete entity and an entity: a dog is an animal, a living being and a concrete entity. The members of the same category or sub-category have some properties in common: in the sub-category “land animal”, for example, the members “bull”, “dog” and “rabbit” have paws, walk and do not speak; their common properties are, therefore, inherited through the insertion of a word into one category or another. Zavaglia [3] presents the continuation of the above-mentioned ontology for Brazilian Portuguese.
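Property inheritance along such a tree-shaped taxonomy is straightforward to operationalize. The sketch below is a minimal Python illustration of our own (not code from the cited works), assuming a hypothetical parent-link encoding of the categories; it walks the hypernym chain of axiom (1) and collects inherited properties.

    # Parent of each category (None marks the root of the taxonomy).
    PARENT = {
        "entity": None,
        "concrete entity": "entity",
        "living entity": "concrete entity",
        "animal": "living entity",
        "land animal": "animal",
    }

    # Properties introduced at each level; members inherit all of them.
    PROPERTIES = {
        "living entity": {"is alive"},
        "land animal": {"has paws", "walks", "does not speak"},
    }

    def hypernym_chain(category):
        """Walk the parent links up to the root of the taxonomy."""
        chain = []
        while category is not None:
            chain.append(category)
            category = PARENT[category]
        return chain

    def inherited_properties(category):
        """Union of the properties of the category and all its ancestors."""
        props = set()
        for c in hypernym_chain(category):
            props |= PROPERTIES.get(c, set())
        return props

    # "dog" is a land animal, hence an animal, a living entity, a concrete entity:
    print(hypernym_chain("land animal"))
    print(inherited_properties("land animal"))

The same chain-walking operation is what allows the LKB, in Section 7.2 below, to retrieve all categories correlated with a homonymous form up to its topmost category.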

6 Homonymy in Ontological Structuring

We would like to point out that, at the moment, in the NLP field, especially when dealing with knowledge-based lexical systems, it is agreed that the inclusion of this type of semantic repository, i.e., one of the ontological type for meaning representation, is essential: there is a need to offer, in a structured and organized form, a common lexicon used in conformity by a determined community. Ontologies have been widely used in knowledge representation for restricted domains, especially in document indexing and information search systems, where their application can be more efficient because it deals precisely with finite lexical sets. In a Lexical Knowledge Base – LKB, for example, an ontology can serve as a support resource for the information contained in the lexical repository of the base, making it possible to retrieve the meaning of a lexical item in an unambiguous form. Indeed, the linguistic-classification resources that the use of an ontology offers the linguist and/or lexicographer allow him or her to single out uniformly, among the various meanings attributable to the same lexical item, the pertinent one within the array of polysemic meanings the word may carry, in this way neutralizing the polysemy that characterizes these homonymous forms.

7 LKB Representation Modules

The proposed Lexical Knowledge Base – LKB contains five modules of representation. In this paper, we present only the ones pertinent to the Qualia Structure and to ontological information. All the modules are correlated in a way that allows the information contained in them to be linked and interconnected, depending on the type of search the user intends to make in the system. Each module presents the word, that is, the Semantic Unit [SemU] being searched for, along with its characterization, i.e., what type of homonym it is. All the terms used in the modules were designed to be explanatory links, that is, the user receives information, definitions and explanations about the linguistic phenomenon of homonymy, with an instructive objective: the user is expected to “learn” what “homonymy” is, what a “Homonymous Single-Category Monosemous Form” is, and what a [SemU] or [HomoU] is. Furthermore, homonymy was not the only linguistic phenomenon studied: the LKB also contains information on polysemy and monosemy. Information about Pustejovsky’s Qualia Structure [1] was included as well; consequently, the user may learn the meaning of a “formal role”, a “constitutive role”, a “telic role” or an “agentive role”. Thus every term, acronym or abbreviation designated as a link is underlined: [SemU], Homonymous Single-Category Monosemous Form, Semantic Homonymy, etc.

7.1 The LKB

With the objective of demonstrating some advantages of having information of a diversified nature stored in an electronic database, we created and propose an interface for accessing the Lexical Knowledge Base – LKB data. See the prototype of the LKB in [3]. The LKB development process can be divided into two distinct steps: (i) modeling and implementation of the database and (ii) implementation of the data access interface. To build the LKB it was necessary to model the database according to the Relational Data Model. This model was presented for the first time in 1970, by Codd, and has been widely used over the years in the development of applications that use databases [9]. It uses the relation as its fundamental data structure, represented by tables that store the data. The real-world categories that must be analyzed and stored are called entities, and each entity is stored in a record (or row) of a table.


The fundamental singularity of this model is that it avoids data redundancy through the normalization of the tables. In this way, data pertaining to the same entity is stored in different tables, and, when the data is accessed, these tables are joined and correlated. In the beginning, the LKB data was stored in text files. So that this data could be transferred automatically to the base, it was essential to develop a computational tool to perform the necessary conversions. This tool reads the entry file line by line, separates the data adequately and inserts it into the proper tables, relating the records to one another. The computational language used for the implementation of this tool was Delphi. After the insertion of the data, the next step was to develop the interface to access the data. It should be noted that this project is in progress and the interface prototypes are gradually being modified, as well as being fed with new linguistic information. One of the objectives of developing an application of this nature is to make it available to the greatest possible number of users; to this end, we chose to develop an interface with Web (World Wide Web) access. A search is triggered by a word of the Portuguese language, and to access the stored data the user must choose one of the five available modules. As previously mentioned, this paper details the ontological and qualia modules.
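The actual table layout of the LKB and its entry-file format are not given in the paper, and the real conversion tool was written in Delphi; the sketch below re-creates the general idea in Python with SQLite, assuming a hypothetical two-table schema (semantic units in one table, their qualia roles in another) and a semicolon-separated entry format.

    import sqlite3

    # Hypothetical entry format, one unit per line:
    # "lemma;constitutive;formal;telic;agentive"
    SAMPLE = """banco$0_1;furniture;object;to sit;material
    banco$0_2;company;institution;to negotiate;place"""

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE semantic_unit (id INTEGER PRIMARY KEY, lemma TEXT UNIQUE);
        CREATE TABLE qualia_role (
            unit_id INTEGER REFERENCES semantic_unit(id),
            role TEXT,            -- constitutive | formal | telic | agentive
            value TEXT);
    """)

    # Read the entry data line by line, separate the fields and insert them
    # into the proper tables, relating the records to one another.
    for line in SAMPLE.splitlines():
        lemma, *values = line.strip().split(";")
        cur = db.execute("INSERT INTO semantic_unit(lemma) VALUES (?)", (lemma,))
        unit_id = cur.lastrowid
        for role, value in zip(("constitutive", "formal", "telic", "agentive"),
                               values):
            db.execute("INSERT INTO qualia_role VALUES (?, ?, ?)",
                       (unit_id, role, value))

    # A search is triggered by a word: the normalized tables are joined back.
    rows = db.execute("""
        SELECT u.lemma, q.role, q.value
        FROM semantic_unit u JOIN qualia_role q ON q.unit_id = u.id
        WHERE u.lemma LIKE 'banco%' ORDER BY u.lemma
    """).fetchall()
    for row in rows:
        print(row)

The split into two normalized tables is what the text means by avoiding redundancy: each qualia value is stored once and re-associated with its unit at query time.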

7.2 Ontological Module

The Ontological Module contains information about the fundamental classes and the domain, that is, the distribution of the homonymous forms into conceptual categories5. The conceptual organization of a homonymous form thus begins with relations of hyponymy [is a kind of...] and hypernymy [is a superkind of...], the form also being included in a specific world domain [belongs to domain...]. Besides this, with the LKB it is possible to retrieve all categories with which the homonymous form is correlated, up to its topmost category, as for the word CABO(1a): {→ living entity → concrete entity → entity}.

7.3 Qualia Structural Module

The Qualia Structural Module contains information about the semantic relations existing between two Semantic Units, according to their roles (formal, telic, constitutive, agentive) in the Qualia Structure. These semantic relations retrieve the multiple dimensions of meaning of a homonymous form. The Qualia roles were designed as links that provide the user with their meanings.

5 Ontological distribution is, essentially, a manual and human activity, at least in the present state of the art.

8 Final Considerations and Future Perspectives

The scope of this paper, that is, its computational version, was initially prompted by two motives: (i) the possibility of demonstrating that the result of our proposal could be real, and not destined to remain in the “virtual” world; consequently, we established the validity of the linguistic analyses made to build the linguistic framework using homonymous items as entry words, since they were capable of supporting computational implementation; and (ii) the possibility of demonstrating the advantages of having information of a diversified nature stored in an electronic database. Among these advantages we can highlight: (i) the quick retrieval of varied linguistic information about homonymous items; (ii) specialized search for certain linguistic information, by means of automatically generated lists that may be used in several types of research; (iii) the potential of applying the linguistic data contained in the LKB lexical repertoire to Natural Language Processing systems, such as search engines, semantic parsers, disambiguators, automatic translation and taggers. In effect, the fact that we included a varied range of linguistic information of a pluridimensional nature (lexical, morphosyntactic, ontological, qualia, disambiguating) permits us to foresee diverse applications. Concurrently, the first perspective for future study that causes tremendous enthusiasm is the possibility of enriching and expanding the Lexical Knowledge Base with other types of lexical items that go beyond homonyms; in fact, studies should be made with monosemic and polysemic words. Within the homonymy phenomenon there is still a great deal to be done, since we dealt with only one type, namely Semantic Homonymy. We should also work with Categorial, Heterophonous and Etymological Homonyms, even though we have already considered some cases where homonyms are distinct in regard to their etyma, such as the case of “manga”. Finally, there is still the possibility of systematically including new semantic relations, syntactic information, information on argument structure, and synonyms and antonyms for the lexical items, to name a few.

References

1. Pustejovsky, J. The Generative Lexicon. Cambridge: The MIT Press (1995)
2. Moravcsik, J.M.E. Sameness and Individuation. Journal of Philosophy, 70:513–526 (1973)
3. Zavaglia, C. Análise da homonímia no português: tratamento semântico com vistas a procedimentos computacionais. Tese de Doutorado. Araraquara: Universidade Estadual Paulista (2002)
4. Biderman, M.T.C. Dicionário didático de português. 2 ed. São Paulo: Ática (1998)
5. Gruber, T.R. Toward principles for the design of ontologies used for knowledge sharing. Presented at the Padua workshop on Formal Ontology, March 1993; to appear in an edited collection by Nicola Guarino. Accessed 6 January 2002

6. Braga, J.L., Torres, K.S., Botelho, F.C. Reengenharia e Visualização de Conceitos no WordNet. Universidade Federal de Viçosa. Accessed 28 October 2002
7. Ortiz, A.M. Diseño e implementación de un Lexicón Computacional para lexicografía y Traducción Automática. Estudios de Lingüística Española, Vol. 9 (2000). Accessed 14 June 2002
8. Tiscornia, D. Una metodologia per la rappresentazione della conoscenza giuridica: l'ontologia formale applicata al diritto. Articolo per conferenza di filosofia del diritto. Bologna (1995)
9. Elmasri, R., Navathe, S.B. Fundamentals of Database Systems. 3rd ed. Addison Wesley (2000)

Using Adaptive Formalisms to Describe Context-Dependencies in Natural Language

João José Neto and Miryam de Moraes

Lab. de Ling. e Tecnologias Adaptativas, Esc. Politécnica da Univ. de S. Paulo
Av. Prof. Luciano Gualberto, tr. 3, n. 158, Cid. Universitária, 05508-900 S. Paulo, Brazil
{joao.jose,miryam.moraes}@poli.usp.br
http://www.pcs.usp.br/~lta

Abstract. This text sketches a method based on adaptive technology for representing context-dependencies in NL processing. Building on previous work [4] dedicated to syntactic ambiguities and non-determinisms in NL handling, we extend it to consider context-dependencies not previously addressed. Although based on the powerful adaptive formalism [3], our method relies on adaptive structured pushdown automata [1] and grammars [2], resulting in simplicity, low cost and efficiency.

1 Introduction

Since low-complexity language formalisms are too weak to handle NL, stronger formalisms are required, most of them resource-demanding, hard to use or impractical. Structured pushdown automata are excellent for representing the regular and context-free aspects of NLs by allowing them to be split into a regular layer (implemented as finite-state machines) and a context-free one (represented by a pushdown store). Such a device accepts deterministic context-free languages in linear time, and is suitable as an underlying mechanism for adaptive automata, allowing the handling – without loss of simplicity and efficiency – of languages more complex than context-free ones. Classical grammars may describe non-trivial interdependencies between and inside sentences: attribute, two-level, evolving and adaptive grammars. Here, context dependency is handled with adaptive grammars (which may be converted [2] into structured pushdown adaptive automata [3]) by executing adaptive actions attached to the rule being used (stating self-modifications – rule additions and deletions – to be imposed). When a context-dependency is detected, one of such rules is applied, and the attached adaptive action learns to handle the context-dependency by adequately changing the underlying grammar. Starting from an initial grammar, the adaptive device follows its rules until some new context-dependency is handled. Thereafter, its operation follows the modified underlying grammar until either the sentence is fully derived or no matching rule is found. Complex languages, e.g. NLs, may be handled in this way, since adaptive grammars have type-0 power [1], [2].


By converting them into adaptive structured pushdown automata, simplicity and efficiency are achieved through piecewise-regular handling of the language, validating adaptive devices as practical and efficient for NL handling [5].
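Before the formal example of the next section, the following Python sketch (our own illustration, not the authors' formalism) shows the bare skeleton of an adaptive rule-driven device: applying a rule may trigger an attached adaptive action that adds or deletes rules, so the device behaves differently after a context-dependency has been seen.

    class AdaptiveRuleSet:
        """Minimal adaptive device: rules may carry actions that edit the rule set."""

        def __init__(self):
            self.rules = {}  # lhs -> (rhs, adaptive_action or None)

        def add(self, lhs, rhs, action=None):
            self.rules[lhs] = (rhs, action)

        def remove(self, lhs):
            self.rules.pop(lhs, None)

        def apply(self, lhs):
            rhs, action = self.rules[lhs]
            if action is not None:
                action(self)   # self-modification: learn the context-dependency
            return rhs

    device = AdaptiveRuleSet()

    # Toy stand-in for agreement: once a plural-feminine determiner is seen,
    # restrict the noun rule so that only plural-feminine nouns can follow.
    def enforce_pf_agreement(rs):
        rs.remove("N")
        rs.add("N[pf]", ["mansões", "praças"])

    device.add("D[pf]", ["as"], action=enforce_pf_agreement)
    device.add("N", ["parque", "mansões", "praças"])

    print(device.apply("D[pf]"))  # ['as'], and the rule set has changed
    print(sorted(device.rules))   # ['D[pf]', 'N[pf]']

The grammar of the next section uses exactly this pattern, with adaptive functions such as A1c deleting the transitions carrying improper context symbols and inserting the ones consistent with the attributes seen so far.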

2 Illustrating Example

This example illustrates nominal agreement in Portuguese using an adaptive grammar [2] and considers: attractive agreement for adjectives placed before nouns coordinated with the preposition “e” or a comma (e.g. As antigas mansões e parques) and grammatical agreement for adjectives placed after such nouns (e.g. Os parques e mansões restaurados). Our adaptive grammar is defined as a 3-tuple (G0, T, R0) where:

T = finite set of adaptive functions
G0 = (VN0, VT0, VC, PL0, PD0, S) is the initial grammar
VN0 = non-empty finite set of non-terminals
VT = non-empty finite set of terminals, with VN0 ∩ VT = Ø
VC = finite set of context symbols
V0 = VN0 ∪ VT ∪ VC, where VN0, VT, VC are disjoint sets
S ∈ VN0 is the start symbol of the grammar
PL0 = rules used in context-free (CF) situations

PD0 = rules used in context-dependent (CD) situations
R0 = PL0 ∪ PD0

The example refers to G = (G0, T, R0):
G0 = (VN0, VT0, VC, PL0, PD0, S)
VN0 = {C, C1, C2, C3, C4, C5, C6, C7, D, A, S, C8a, C8l, ESM, ESF, EPM, EPF}
VC = {sm, sf, pm, pf}
VT = {as, e, antigas, mansões, parques, restaurados, praças, “,”}

Context symbols sm, sf, pm, pf denote the attributes singular/plural masculine/feminine. D, A and S denote determinants, adjectives and nouns, respectively. The starting symbol is C. Adaptive functions dynamically handle optional elements: further nouns, a determinant, an adjective placed before/after the noun. A context-dependency is handled by an adaptive function when the noun is processed: it checks its agreement with the previous determinant and adjective. Another adaptive function enforces agreement between the adjective and multiple nouns. PL0 and PD0 are:

C → {A1(C, C1)} D C1
sf C1 → {A1c(C1, C, ES, EF)} ESF
pf C1 → {A1c(C1, C, EP, EF)} EPF
sm C1 → {A1c(C1, C, ES, EM)} ESM
pm C1 → {A1c(C1, C, EP, EM)} EPM
C → {A2(C, C2, C1)} A C2
sf C2 → {A1c(C2, C, ES, EF)} ESF
pf C2 → {A1c(C2, C, EP, EF)} EPF
sm C2 → {A1c(C2, C, ES, EM)} ESM
pm C2 → {A1c(C2, C, EP, EM)} EPM
C → S C3
sf C3 → {A3c(C6, C1, C2, C, ES, EF)} ESF
sm C3 → {A3c(C6, C1, C2, C, ES, EM)} ESM
pf C3 → {A3c(C6, C1, C2, C, EP, EF)} EPF
pm C3 → {A3c(C6, C1, C2, C, EP, EM)} EPM
ESM → C    ESF → C    EPM → C    EPF → C
C3 → ε    C3 → C6    C3 → “e” C4    C3 → “,” C4    C7 → Ø
C4 → S C5
sm C5 → {A4(C6, ES, EM)} ESM
sf C5 → {A4(C6, ES, EF)} ESF
pm C5 → {A4(C6, EP, EM)} EPM
pf C5 → {A4(C6, EP, EF)} EPF
C3 → {A5(C3, C6, C7, C8a, C8l)} A C7
ESM → C3    ESF → C3    EPF → C3    EPM → C3


Adaptive Actions:

A1(XC, XC1) = /* removes the extra determinant */
  { −[XC → {A1(XC, XC1)} D XC1] }

A1c(x1, x2, xn, xg) = /* deletes transitions with improper context symbols */
  { −[sm x1 → {A1c(x1, x2, xn, xg)} ESM]
    −[sf x1 → {A1c(x1, x2, xn, xg)} ESF]
    −[pm x1 → {A1c(x1, x2, xn, xg)} EPM]
    −[pf x1 → {A1c(x1, x2, xn, xg)} EPF]
    /* ATK: dummy adaptive action; it memorizes determinant attributes */
    +[x1 → {ATK(xn, xg)} x2] }

A2(XC, XC2, XC1) = /* disables further determiners or adjectives */
  { −[XC → {A1(XC, XC1)} D XC1]
    −[XC → {A2(XC, XC2, XC1)} A XC2] }

A3c(xc6, xc1, xc2, xc, xn, xg) =
  { dn, dg, an, ag :
    A4(xc6, xn, xg)  /* memorizes noun attributes */
    /* fixes the inflection of determinant and adjective before the noun */
    −[xc1 → {ATK(dn, dg)} xc]    −[xc2 → {ATK(an, ag)} xc]
    +[xc1 → {ATK(xn, xg)} xc]    +[xc2 → {ATK(xn, xg)} xc] }

A4(xc6, xn, xg) =
  { x, s* :
    /* ATKS: dummy adaptive action; it memorizes noun attributes */
    −[x → xc6]    +[x → {ATKS(xn, xg)} s]    +[s → xc6] }

A5(xc3, xc6, xc7, xc8a, xc8l) = /* imposes attractive agreement */
  { x, xsm, xsf, xpm, xpf, xn1, xn2, xn3, xn4, xg1, xg2, xf1, xf2, x1, x2, xaux1*, xaux2* :
    ?[x → xc6]
    ?[xsm → {ATKS(ES, EM)} x]    ?[xsf → {ATKS(ES, EF)} x]
    ?[xpf → {ATKS(EP, EF)} x]    ?[xpm → {ATKS(EP, EM)} x]
    +[sm xc7 → {AT(xsm, xc7, xaux1)} xaux1]
    +[sf xc7 → {AT(xsf, xc7, xaux1)} xaux1]
    +[pm xc7 → {AT(xpm, xc7, xaux1)} xaux1]
    +[pf xc7 → {AT(xpf, xc7, xaux1)} xaux1]
    /* initializes logical agreement */
    ?[xc3 → {ATKS(xn1, EF)} xf1]    ?[xc3 → {ATKS(xn2, EM)} xm1]
    ?[xf1 → {ATKS(xn3, xg1)} x1]    ?[xm1 → {ATKS(xn4, xg2)} x2]
    +[pf xc7 → {Z(xf1, x1, xaux2, xc8l)} xaux2]
    +[pm xc7 → {Z(xm1, x2, xaux2, xc8l)} xaux2]
    CL(xf1, xaux2, xc8l, xc7) }

CL(xf1, xaux2, xc8l, xc7) = /* completes logical agreement */
  { xn1, xn2, xm, xf :
    ?[xf1 → {ATKS(xn1, EM)} xm]    ?[xf1 → {ATK(xn2, EF)} xf]
    +[pm xc7 → {Z(xm, xm, xaux2, xc8l)} xaux2]
    EliminaPF(xm, xc7, xaux2)
    CL(xf, xaux2, xc8l, xc7) }

EliminaPF(xm, xc7, xaux2) = /* removes the plural feminine agreement */
  { x, y, z : −[pf xc7 → {Z(x, y, xaux2, z)} xaux2] }

Z(x, y, z, xc8l) = /* inserts a transition to a final state */
  { +[z → xc8l] }

AT(x, xc7, xaux1) = /* performs transition self-removal */
  { xsmx, xsfx, xpmx, xpfx :
    −[sm xc7 → {AT(x, xc7, xaux1)} xsmx]
    −[sf xc7 → {AT(x, xc7, xaux1)} xsfx]
    −[pm xc7 → {AT(x, xc7, xaux1)} xpmx]
    −[pf xc7 → {AT(x, xc7, xaux1)} xpfx]
    /* inserts an adequate transition */
    +[sm xc7 → xsmx]    +[sf xc7 → xsfx]    +[pm xc7 → xpmx]    +[pf xc7 → xpfx] }

ATK(xn, xg) = { }    ATKS(xn, xg) = { }

This simple example illustrates NL processing through adaptive formalisms. The following is a simplified derivation of the sentence “as antigas praças, parques e mansões restaurados” in our grammar.


C ⇒0 {A1(C, C1)} D C1 ⇒1 as pf C1 ⇒1 as {A1c(C1, C, EP, EF)} EPF
⇒2 as C ⇒2 as {A2(C, C2, C1)} A C2
⇒3 as antigas pf C2 ⇒3 as antigas {A1c(C2, C, EP, EF)} EPF
⇒4 as antigas C ⇒4 as antigas S C3 ⇒4 as antigas praças pf C3 ⇒4 as antigas praças {A3c(C6, C1, C2, C, EP, EF)} EPF
⇒5 as antigas praças C3 ⇒5 as antigas praças, C4 ⇒5 as antigas praças, S C5 ⇒5 as antigas praças, parques pm C5 ⇒5 as antigas praças, parques {A4(C6, EP, EM)} EPM
⇒6 as antigas praças, parques C3 ⇒6 as antigas praças, parques e C4 ⇒6 as antigas praças, parques e mansões pf C5 ⇒6 as antigas praças, parques e mansões {A4(C6, EP, EF)} EPF
⇒7 as antigas praças, parques e mansões C3 ⇒7 as antigas praças, parques e mansões {A5(C3, C6, C7, C8a, C8l)} A C7
⇒8 as antigas praças, parques e mansões restaurados pm C7 ⇒8 as antigas praças, parques e mansões restaurados {Z(S2, S2, xaux2, C8l)} xaux2
⇒9 as antigas praças, parques e mansões restaurados C8l ⇒9 as antigas praças, parques e mansões restaurados.

Remark. “⇒i”, i ∈ ℕ, denotes derivation over the rules PLi ∪ PDi, which are available after the execution of the adaptive actions.

3 Conclusion

Many formalisms for the representation of NLs and their processing by computer are reported in the literature [5]. Extending the results achieved in our previous works, this paper proposes a method for modeling, representing and handling context-dependencies in NLs by means of adaptive devices [3]. The incremental, dynamic nature of our device turns it into an attractive and low-cost option with good static and dynamic time and space performance.

References

[1] Neto, J.J.: Contribuição à metodologia de construção de compiladores. São Paulo, 1993, 272p. Thesis (Livre-Docência), Escola Politécnica, Universidade de São Paulo. [In Portuguese]
[2] Iwai, M.K.: Um formalismo gramatical adaptativo para linguagens dependentes de contexto. São Paulo, 2000, 191p. Doctoral Thesis, Escola Politécnica, Universidade de São Paulo.
[3] Neto, J.J.: Adaptive rule-driven devices – general formulation and case study. CIAA 2001 – Sixth International Conference on Implementation and Application of Automata. Lecture Notes in Computer Science, Springer-Verlag, Pretoria (2001)
[4] Neto, J.J., Moraes, M.: Formalismo adaptativo aplicado ao reconhecimento de linguagem natural. Anais de la Conferencia Iberoamericana en Sistemas, Cibernética e Informática, 19–21 July 2002, Orlando, Florida (2002)
[5] Taniwaki, C.Y.O.: Formalismos Adaptativos na Análise Sintática de Linguagem Natural. MSc Dissertation, Escola Politécnica da Universidade de São Paulo (2002)

Some Regularities of Frozen Expressions in Brazilian Portuguese

Oto Araújo Vale

Faculdade de Letras, Universidade Federal de Goiás
Caixa Postal 131, 74001-970 Goiânia, GO, Brazil
[email protected]

Abstract. In this paper we examine a class of 125 frozen expressions in Brazilian Portuguese. This class is one of ten classes established in a typology of 3,550 verbal expressions, according to the distribution of the fixed and free components of each expression. Some regularities could be observed in this class, which resulted in the construction of a graph for identifying the expressions of this class in large corpora.

1 Introduction

Frozen expressions constitute a great problem in natural language processing. Gross [1] has shown that, for French, the number of verbal frozen expressions (VFE) is much larger than that of simple verbs. In this paper we present a class of 125 VFE, which belongs to a typology of Brazilian Portuguese VFE we established [2]. This typology was constructed in Lexicon-Grammar tables [3], binary matrices with the expressions in the lines and the syntactic and semantic properties in the columns. This kind of matrix is useful for visualizing the most frequent properties and is also easily used in Natural Language Processing: some software packages – Intex [4] and Unitex [5] – use these matrices to construct graphs and finite-state automata for searching large corpora [6]. In Vale [2] a typology of ten classes containing 3,550 verbal frozen expressions was established. This typology was set up according to the distribution of the frozen and free elements in each expression; only expressions with a free subject were classified. It can be observed in Table 1 that the simplest constructions have the largest numbers of VFE. The PB-C1 class, with only one frozen element and no preposition, has a significant number of expressions, whereas the PB-CPP class, with two frozen elements, each introduced by a preposition, presents a reduced number of VFE. This classification allows us to observe many regularities in the constitution of VFE. These regularities are interesting both for the theoretical approach and for NLP.



Table 1. Ten classes established by Vale [2]1

Class       Structure                      Example                            Quantity
PB-C1       N0 V C1                        Rui bateu as botas                 1206
PB-CP1      N0 V Prep C1                   Rui entrou pelo cano               660
PB-CDH      N0 V (C de Nhum)1              O filme encheu o saco de Rui       157
PB-CDN      N0 V (C de N)1                 O aviso acendeu o pavio da crise   100
PB-C1PN     N0 V C1 Prep N2                Ana arrasta uma asa por Rui        321
PB-CP1PN    N0 V Prep C1 Prep N            Rui pisou no calo de Ana           127
PB-CNP2     N0 V N Prep C2                 Rui colocou Ana para escanteio     341
PB-C1P2     N0 V C1 Prep C2                O governo pôs as cartas na mesa    423
PB-CPP      N0 V Prep C1 Prep C2           Rui mudou da água para o vinho     90
PB-C1P2DN   N0 V C1 Prep (C de N)2         Rui pôs água na fervura da CPI     125

2 The Regularities of a Class

The class chosen to be presented here has some regularities that illustrate some of the procedures we consider necessary for the treatment of VFE in NLP. The class, named PB-C1P2DN, was constructed after the observation that, for a set of expressions from the PB-C1P2 class, the last frozen element can be unfolded into an NP constituted of a frozen element and a free element:

1. As explicações de Rui puseram lenha na fogueira
2. A descoberta dos documentos pôs lenha na fogueira da CPI

It was observed that most expressions like (1) could be considered a substructure of (2) with the omission of the free element on the right. We considered this kind of regularity sufficient to constitute a new class of expressions. In constructing this new class, it was noted that 80% of its expressions were built with the following verbs: pôr, botar, colocar, enfiar, jogar, and meter, which belong to the same semantic field. In fact many expressions may present an alternation of these verbs:

3. Rui (pôs+botou+colocou+enfiou+jogou+meteu+deitou) lenha na fogueira da CPI

In spite of this, it cannot be concluded that all the expressions of this class have this property: many expressions accept the alternation of some of these verbs, but reject others:

4. FHC (pôs+botou+colocou+enfiou+meteu) a mão no bolso do contribuinte
5. * FHC (jogou+deitou) a mão no bolso do contribuinte

Thus, even in a relatively homogeneous class like this one, it is necessary to verify each property of each expression case by case. This means that, for all classes of VFE, an exhaustive study of the properties of each expression must be carried out, and quick generalizations must be avoided.1

1 The current notation of Lexicon-Grammar theory is used here: Ni (i = 0, 1, 2, 3) is a free NP (the zero index means the NP in subject position); Nhum is a human NP; V is the verb; C is the frozen nominal element.


Fig. 1. Graph describing the class PB-C1P2DN

In the specific case of the PB-C1P2DN class, it could be seen, a posteriori, that the verbs jogar and deitar cannot be used when the first frozen element is a part of the body. The regularity of this class allows the creation of an FS graph without the procedure proposed by Senellart [7]. In fact, that procedure is useful for processing large classes, but it has the inconvenience of creating one FS graph for each expression. The graph editor of Unitex was used to build a graph that assembles most expressions of the PB-C1P2DN class. This procedure makes it possible to visualize the distribution of the elements, and it is feasible here thanks to the small number of verbs in this class and the alternation shown in (3). The graph in Fig. 1 is constructed to present all the details of these expressions; neither the passive forms of the expressions nor the possibility of modifier insertion is shown. It can be seen, for example, that an expression like meter a mão em vespeiro has some constraints on the determiner: the indefinite determiner can only be used without the free NP on the right.

6. Rui meteu a mão num vespeiro
7. Rui meteu a mão no vespeiro da partilha de bens
8. * Rui meteu a mão num vespeiro da partilha de bens

Unitex was used to locate the graph's expressions in the whole 1997 text of Folha de S. Paulo (about 94 million words), and a concordance was obtained. It is interesting to observe, in that concordance, which of the identified strings actually constitute a VFE; in a first approach it is important to separate the compositional occurrences from the frozen ones. Five of the nine occurrences of the string o pé do acelerador are not frozen.


It is also curious to observe that in 12 of the 33 occurrences of the VFE (+++) na massa there is a kind of “attempt to recover” its compositionality (a sort of “unfreezing”). All the other occurrences in these concordances are frozen. The problem here is the identification of the frozen strings and the procedure to distinguish them from the compositional strings. At this stage of the research, we think it necessary to identify these strings by examining each occurrence: the intervention of a linguist is needed to separate the compositional from the frozen occurrences. In this example, among the 67 strings found in the corpus, there were five compositional occurrences. Even in the cases showing an “attempt to recover” the compositional sense, the presence of a frozen expression was evident.
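The Unitex graph itself cannot be reproduced here, but its effect on this expression family can be approximated with an ordinary regular expression encoding the verb alternation of (3), the frozen elements and the optional free NP on the right. The Python sketch below is our own approximation (the inflected verb forms are a small hypothetical sample), and its output illustrates why a linguist must still separate frozen from compositional occurrences.

    import re

    # Verb alternation of (3), sampled in a few preterite forms only;
    # the real graph enumerates all inflected forms of the seven verbs.
    VERBS = r"(?:pôs|puseram|botou|colocou|enfiou|jogou|meteu|deitou)"

    # V + lenha na fogueira + optional free NP introduced by de/da/do(s).
    PATTERN = re.compile(
        VERBS + r"\s+lenha\s+na\s+fogueira(?:\s+d[aeo]s?\s+\w+)?",
        re.IGNORECASE)

    corpus = ("As explicações de Rui puseram lenha na fogueira. "
              "A descoberta dos documentos pôs lenha na fogueira da CPI. "
              "O caseiro colocou lenha na fogueira do acampamento.")

    for m in PATTERN.finditer(corpus):
        print(m.group())
    # The third hit is compositional (real firewood, a real bonfire):
    # the pattern finds candidate strings, not frozen readings.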

3 Conclusion

The VFE phenomenon needs a truly detailed approach. It must be kept in mind that we have approached a small class of VFE, one that presents a certain homogeneity in its distribution. In approaching the other classes, it will be interesting to observe the number of strings that appear only with the frozen sense, and those that have both compositional and frozen occurrences. For the latter kind of expression, it will be necessary to build a set of local grammars [7] to eliminate the ambiguity.

References

1. Gross, M. Une classification des phrases "figées" du français. Revue québécoise de linguistique, Vol. 11, n. 2 (1982) 151–185
2. Vale, O.A. Expressões cristalizadas do português do Brasil: uma proposta de tipologia. Tese (Doutorado) – Universidade Estadual Paulista, Araraquara (2001)
3. Gross, M. Méthodes en syntaxe. Hermann, Paris (1975)
4. Silberztein, M. Dictionnaires électroniques et analyse automatique de textes: le système INTEX. Masson, Paris (1993)
5. Paumier, S. Manuel Unitex. http://www-igm.univ-mlv.fr/~unitex/manuelunitex.pdf (2002)
6. Senellart, J. Reconnaissance automatique des entrées du lexique-grammaire des phrases figées. Travaux de linguistique, n. 37 (1998) 109–125
7. Laporte, E., Monceaux, A. Elimination of lexical ambiguities by grammars: the ELAG system. Lingvisticae Investigationes XXII (1998–1999) 341–367

Selva: A New Syntactic Parser for Portuguese

Sheila de Almeida, Ariadne Carvalho, Lucien Fantin, and Jorge Stolfi

Institute of Computing, State University of Campinas
Cx. Postal 6176, 13084-971 Campinas (SP), Brazil
{sheila.almeida,ariadne,lucien.fantin,stolfi}@ic.unicamp.br

Abstract. We describe an ongoing project whose aim is to build a parser for Brazilian Portuguese, Selva, which can be used as a basis for subsequent research in natural language processing, such as automatic translation and ellipsis and anaphora resolution. The parser is meant to handle arbitrary prose texts in unrestricted domains, including the full range of coordination and subordination constructs. The parser operates separately on each sentence, and outputs all its syntactically valid derivation trees.

1 Introduction

Here we describe Selva, a new syntactic parser for Brazilian Portuguese, designed to deal with arbitrary prose text, without domain or context restrictions, and allowing the full range of coordination and subordination constructs. The parser operates separately on each sentence (as delimited by periods or other full stops) and outputs all its syntactically valid parsings. We consider only plain running text, excluding styles with special structures like poetry and headlines. The main obstacles to the robust parsing of real-world text are the enormous number of linguistic constructions that must be handled and the numerous syntactic ambiguities. The latter can only be resolved by semantic analysis, which requires an intelligent understanding of the whole text and of its origins – which in turn requires an impossibly vast and complex world model, and powerful logical/probabilistic inference methods. This difficulty is especially serious for unrestricted text, where even green ideas may sleep furiously on occasion. A parser that uses only syntactic criteria, such as word categories and word order, will be unable to choose the correct parsing among all possible parsings for the same sentence; it will have to guess, based on some heuristic or statistical rules. Considering that a typical sentence of moderate length may require dozens of such choices, the chances of making the right guess at every one will be very small. The only way a parser can approach a 100% success rate is by listing all, or nearly all, syntactically valid parsings for each sentence. Even though an n-word input may admit thousands of such parsings, they are only a tiny fraction of all possible trees with n leaves. We concluded that a tool that found all valid parsings would be very useful for language processing, e.g. as a front-end syntactic filter for a restricted-domain semantic analyzer.


Having chosen to generate all possible parsings, we found it best to avoid many traditional rules that are inspired by semantic criteria – such as transitivity – as being too unreliable. We also decided against statistically-based filters, since rule usage probabilities are strongly dependent on semantic context. The Selva parser assumes that the input is syntactically correct, and makes no effort to reject ungrammatical sentences. Finally, we do not try to flag meta-syntactic constructs such as passive voice or cleft predicatives (Ela é quem fez o bolo). On the other hand, in order to keep the size of the output within tolerable limits, and to avoid generating invalid parsings for correct sentences, we found it necessary to enforce certain syntactic restrictions, such as person and gender agreement, and to exclude some phrase structures which, although formally valid, are too rare to be worth considering. For instance, we do not recognize clauses with untopicalized object-subject-verb order (as in O carro Maria comprou) – even though they occasionally occur, even in prose.

2 Related Work

There are surprisingly few projects involving syntactic analysis of Portuguese [10]. Moreover, some of them are commercial projects unavailable for research, while others were developed for very limited domains. We are aware of only two accessible parsers that can be compared to ours: Curupira and VISL. The Curupira parser of Martins, Hasegawa, and Nunes [4] was developed as part of ReGra, a commercial grammar checker [5]. Like our parser, Curupira assumes that the sentence is correct and generates all possible parse trees. The parser is still under development, and its source has not been released. The VISL parser was developed by a team led by Eckhard Bick within the Visual Interactive Syntax Learning project [1]. Apparently the source code is not available, but the parser itself can be used for demonstrations through the Internet. The parser is very robust and produces generally good results; however, it provides only one parsing for each sentence.

3 The Grammar

We encode the syntax by a context-free grammar with markers. The syntax is loosely based on standard Portuguese grammars [7,9,6,2], but we were forced to deviate from them in many points, chiefly due to the absolute lack of semantic information. (Even grammarians who take pains to separate syntax from semantics, like Perini, occasionally define syntactic categories by semantic tests.) Another reason was the need to handle complex coordination phenomena which are ignored by most grammars.

Categories and Functions. As usual, we assign two labels to each phrase in a sentence: its syntactic category, and its syntactic role or function within the immediately containing unit. The most important syntactic categories are sentence, clause, verb, noun, adverb, adjective, and prepositive (prepositional phrase).


(Other categories like preposition and article occur as constituents of these.) These categories are not disjoint; for example, in eu quero correr, the word correr is classified as a verb at a lower level of the parse tree, as a clause at a higher level, and as a noun further up. The top-level syntactic category is the sentence, which we define as a sequence of words delimited by strong punctuation (full stop, colon, semicolon, question, or exclamation marks). The clause category applies to sentences, or parts thereof, that consist of a verb and all its syntactically bound operands and modifiers, including subordinate clauses. (Most sentences are clauses, but occasionally one finds verb-less sentences such as in Ele demorou. Bastante.)

Markers. Each category is further subdivided into sub-categories, characterized by certain parameters (markers) which can assume a finite range of values. Thus, for example, the noun class comprises twelve main sub-categories, characterized by three markers: gender (mas or fem), number (sin or plu), and person (1, 2, or 3). These markers are used to implement agreement constraints, and are strictly syntactical, not semantical; many phrases, such as lápis or que saiu, are ambiguous with respect to them. Nouns generally have person 3, except for pronoun-based ones such as tu or apenas eu e você. Adjective phrases have the same sub-categories as nouns; here the person marker is significant only for adjectives built from subordinate clauses: que fomos contratados has person 1, que foram contratados has person 3. Adverbs and prepositives do not have gender, person, or number markers. Verb phrases are sub-classified by four main markers: mood, person, number, and gender. The mood can have seven possible values, such as indicative (ind), subjunctive (sub), past participle (par), etc. Other markers identify verb phrases which include oblique (clitic) pronouns. We found no compelling reason to mark verb forms for tense (past, present, etc.), or to distinguish copular verbs like ser from ordinary ones. We also did not find it useful to mark verbs for transitivity, since every transitive verb may be used with an elided object, and many supposedly intransitive verbs have special meanings which are transitive. Clauses have several markers, such as the mood of the clause's main verb. An important one is incompleteness, which characterizes clauses that have an elided noun constituent, like Maria comprou [] or eu disse que [] saíram. Such clauses are used to build adjective phrases (see below).
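Agreement over such markers amounts to a small unification-style check. The fragment below is our own Python illustration (Selva itself is written in Prolog, and these names are invented): an ambiguous form such as lápis simply leaves a marker unspecified.

    # None encodes a marker left unspecified by an ambiguous form
    # (e.g. "lápis" is mas, but either sin or plu).
    def agree(a, b, markers=("gender", "number", "person")):
        """Two phrases agree if no marker has conflicting specified values."""
        return all(a.get(m) is None or b.get(m) is None or a[m] == b[m]
                   for m in markers)

    lapis   = {"gender": "mas", "number": None,  "person": 3}  # "lápis"
    o       = {"gender": "mas", "number": "sin", "person": 3}  # article "o"
    antigas = {"gender": "fem", "number": "plu", "person": 3}  # "antigas"

    print(agree(o, lapis))        # True:  "o lápis" is well formed
    print(agree(antigas, lapis))  # False: gender clash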

3.1 Clause Structure

A normal clause consists of a single verb phrase, surrounded by optional noun, adverb, adjective, or prepositive phrases, possibly with some punctuation. Each of these constituents has a function in the clause, which can be topic (T), subject (S), object (O), object complement (C), or clause modifier (M). There may be one or more topics T at the beginning of the clause. Each topic is a noun, adverb, adjective, or prepositive phrase, or an expletive (which includes vocatives and interjections), always followed by a comma, as in O gato, ninguém o viu, or Branco, vermelho, qualquer cor serve.


Other than topics and appositives (see below), at most three constituents of a clause can be noun phrases; these are assigned the functions S, O and C. The object complement C may also be a non-clausal adjective. The complement occurs, for example, in the sentences ele nomeou Pedro ministro (noun) and ele deixou os eleitores frustrados (adjective). There may be any number of clause modifiers M, inserted before, between, or after the S, V, O, and C constituents. Each M may be either an adverb, an adjective, or a prepositive. The subject S must agree with the verb in person and number, but there is no such constraint between them and the object O. When the complement C is an adjective, it must agree with the object O in gender and number. Any clause modifiers M which are adjectives must agree with the subject. (Note that these constraints often allow us to distinguish an adjectival C from an adjectival M, as for the word frustrados in the previous example.) If we ignore T and M phrases, we have 24 potential orders for the constituents S, V, O, and C, which become 49 considering that S, O, and C may be absent. Although all these combinations are theoretically valid and occur in special contexts, we found that almost all sentences in our corpus used only the following alternatives: SVOC, SVCO, SOVC, SVO, SVC, SOV, VSO, VS – and their variants without the subject S, e.g. Achei o livro chato (VOC). Moreover, some of these orders are possible only under certain constraints; for instance, SVCO is not allowed if the complement C is a noun: *o presidente nomeou ministro Pedro.

Noun Phrases. The typical noun phrase has the structure M* D Q* H Q* M*. All parts may be omitted (under some conditions) except the head word H, which is either a noun, adjective, prepositive, or pronoun. The qualifiers Q may be adjectives or (after the head) prepositives. The determiner D may be an article or demonstrative pronoun; the modifiers M may be adverbs or prepositives. Cf. the example [somente]M [o]D [maior]Q [livro]H [de exercícios]Q [verde]Q [que comprei]Q. A noun phrase may also be formed from a subordinate clause, in a number of ways, e.g. [dançar samba] é bom, vi [os cavalos bebendo água]. A major source of structural ambiguity is the fact that a single adverb, adjective, or prepositive may often be parsed as a constituent of several categories at several levels. The noun phrase a caixa de madeira sem a alça da tampa admits many different tree structures besides the semantically correct one [[a]D [caixa]H [de madeira]Q [sem [[a]D [alça]H [da tampa]Q]]Q].

Adjectives, Adverbs, and Prepositives. An adjective is either a single word (the headword, H) or one of several constructs with subordinate clauses, such as que+clause (que Maria comprou) or cujo+noun+clause (cujo carro Maria comprou). A prepositive consists of a preposition (P) followed by a noun (the body B) – which may be a subordinate clause, e.g. máquina [de [fazer macarrão]], viaje [sem [sair de casa]].


Non-clausal adjectives and prepositives may be modified by adverbials (M), as in bem branco or muito de confiança. The subordinate clause of que-adjectives must be an incomplete clause (see below), as in o carro [que [Maria comprou []]] and as pessoas [que [eu disse que [] saíram]]. The second example, where pessoas must agree in person and number with saíram, shows that markers of clause incompleteness and noun type must sometimes be propagated through several levels of the parse tree. Adverbs can be either a single word or one of several subordinate constructions, such as quando eu nasci, se não chover; or arbitrary phrases isolated from the sentence by parentheses, dashes, or paired commas – which include vocatives, expletives, etc. Adverbs too can be modified by adverbs and prepositives. Many simple adjectives, such as rápido, can also be classified as adverbs, and there seems to be no reliable syntactic criterion to distinguish a prepositive used as an adjective (qualifying nouns) from one used as an adverb, except, occasionally, by agreement constraints or similar contextual information.

Indirect Object. We found it impossible to distinguish between a clause modifier and an “indirect object” (except a pronominal one). The distinction traditionally depends on the verb being tagged in the lexicon as “indirect transitive”; however, this approach fails too often to be of much use. In fact, there are examples which are inherently ambiguous, such as O menino gosta de verdade – de verdade may be either what the boy likes or how much the boy likes it. We also did not find it helpful to mark verbs with their “usual” prepositions (regência), since it seems that any preposition can be used to modify any verb. We were forced to conclude that the concept of “indirect object” is largely a matter of semantics, not syntax. Therefore, we parse prepositives like de verdade, and the verb-attached weak “indirect” pronouns like lhe and se, as clause modifiers.

Compound Verbs. The traditional parsing of clauses like ele vai fazer isso (or ela tinha feito isso) classifies vai fazer (resp. tinha feito) as a “compound verb”, and labels isso as the object of the top-level clause. We found this approach problematic in view of coordinations like ele vai fazer isto e pensar naquilo, or inserted terms like ele vai apenas me encontrar. Therefore, we chose to parse the main verb in such constructions, together with any direct object and modifiers, as a subordinate noun phrase which is the direct object of the auxiliary verb. Under this view, we must interpret the auxiliary ir as a special sense of the verb, which is transitive – the object being the action that is going to happen. (However, we still have special treatment for clauses where a weak pronoun – direct object or clause modifier – belonging to the main verb is displaced in front of the auxiliary, as in ele me quer ver.)

3.2 Coordination

Coordination is a major difficulty in syntactic analysis.


In typical text, besides coordination of whole clauses or major phrase categories (nouns, adjectives, verbs, and adverbs), one finds problematic examples such as Tenho camisas com e sem gola and Quero este mas não aquele livro. Coordination may also occur between multi-phrase sequences, as in ele está proibido de entrar em ou sair de casa, Maria comprou o carro e vendeu a casa simultaneamente, Ele nomeou Pedro diretor e Maria assistente, etc. A popular solution is to view such constructs as coordination between clauses, with many elided parts: for instance, Maria compra e vende imóveis is parsed as [Maria compra []] e [[] vende imóveis], where [] stands for an omitted constituent. However, this interpretation is problematic because it requires forward anaphoric references (cataphora) and the elision of parts which should not be elided, and it also breaks the semantics of adverbials like mutuamente and simultaneamente. Therefore, coordination must be handled as a general phenomenon that can operate on two or more phrases of almost any category X to produce a phrase of the same category X [8], or even on groups of phrases which do not form a single syntactic unit, e.g. between two subject-verb pairs as in João vendeu e Maria comprou o carro.

4 The Pre-processor

Before each input clause is submitted to the parser, it goes through a pre-processor, which breaks the text into words, obtains the word categories (from a large dictionary, which can be supplemented by the user, and from some simple heuristics for numbers and proper names) and finally turns the clause into a list of Prolog clauses. Contractions like dele and lhos are flagged as such in the main dictionary and are expanded by the pre-processor. Since some contractions are ambiguous, the result is no longer a sequence of tagged words but rather a branching directed graph. For instance, the clause vamos nos encontrar becomes:

    •0 → vamos → •1 → nos → •3 → encontrar → •4
                 •1 → em → •2 → os → •3

where the pronoun reading nos goes directly from •1 to •3 and the contraction reading em + os passes through •2.
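A branching graph of this kind is cheap to build. The sketch below is our own reconstruction in Python (the actual pre-processor emits Prolog clauses); node numbers follow the diagram above, and the small contraction table is an illustrative assumption.

    # Ambiguous contractions and their alternative expansions (illustrative
    # table; the real list comes from the main dictionary, where contractions
    # are flagged).
    CONTRACTIONS = {"nos": [["nos"], ["em", "os"]],
                    "dele": [["de", "ele"]]}

    def build_graph(words):
        """Return the edge list (from_node, word, to_node) of the graph."""
        edges, start, nxt = [], 0, 1
        for w in words:
            expansions = CONTRACTIONS.get(w, [[w]])
            end = nxt + sum(len(e) - 1 for e in expansions)  # reserve inner nodes
            cursor = nxt
            for exp in expansions:
                frm = start
                for i, token in enumerate(exp):
                    last = (i == len(exp) - 1)
                    to = end if last else cursor
                    edges.append((frm, token, to))
                    if not last:
                        cursor += 1
                    frm = to
            start, nxt = end, end + 1
        return edges

    for edge in build_graph("vamos nos encontrar".split()):
        print(edge)
    # (0, 'vamos', 1) (1, 'nos', 3) (1, 'em', 2) (2, 'os', 3) (3, 'encontrar', 4)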

5 The Parser

The Selva parser is implemented in Prolog [3]. Rather than using the DCG parsing facilities built into Prolog, we use a separate translator to map the source grammar into plain Prolog rules. The translator adds an interval constraint to each term of each rule, and reorders the terms so as to avoid infinite recursion. Even though the parser depends on Prolog's top-down search with backtracking, the extra parameters and rule reordering give it some of the robustness and efficiency expected from bottom-up parsers. A typical rule in the source grammar

    sentence(MOOD, ...) →
        subject(PERSON, _, NUMBER),
        *verb(MOOD, PERSON, NUMBER),
        object(_, _, _).                                  (1)


gets translated into the following Prolog code:

    sentence(NI, NS, INF, SUP, MOOD, ..., T) :-
        verb(N1, N2, INF, SUP, MOOD, PERSON, NUMBER, T2),
        subject(NI, N1, INF, SUP, PERSON, _, NUMBER, T1),
        object(N2, NS, INF, SUP, _, _, _, T3),
        buildtree('sentence 1', T, [T1, T2, T3]).

Each predicate matches directed paths in the input graph with certain properties. The added parameters INF and SUP will be instantiated with node numbers, and specify lower and upper bounds for the nodes in the matched path. Parameters NI and NS specify the actual initial and final node numbers of the path, and are elsewhere required to satisfy INF ≤ NI ≤ NS ≤ SUP. The predicate sentence is satisfied by every path in the input graph that begins at node NI, ends at node NS, and consists of three sub-paths matched by subject, verb, and object, in that order. If the match succeeds, the predicate buildtree defines T as a tree node whose label 'sentence 1' identifies the syntax rule, and whose subtrees are the parse trees T1, T2, T3 of the constituent phrases. Note that the translator moved the verb term – the starred item in (1) – to the beginning of the Prolog rule. (The proper order of the sentence's constituents is still ensured by the NI/NS arguments.) This feature was introduced to avoid infinite loops in syntax rules that begin with recursive optional terms – such as subject, which may be elided, and may be a subordinate clause beginning with a subject. Each recursive attempt to match a subject will have to go through the verb term, which cannot be elided and therefore will consume at least one word. Therefore the subject's NI and N1 arguments will be constrained to a strictly smaller range of indices at each recursive call.

6 Evaluation and Comparison

In order to evaluate Selva's performance on real-world texts, we created a small corpus of 80 inputs by taking the first sentence of the second paragraph of newspaper and magazine articles. The entries averaged 14.7 words each (minimum 5, maximum 37); most of them used subordination, and many had nontrivial coordinations. These clauses were run through a preliminary version of our parser and grammar. In 28 cases, the parser failed to terminate or did not find the correct parsing (the structure that a human parser would give, using all syntactic and semantic information available). For the remaining 52 cases where the correct structure was found, it produced 51.4 parsings per sentence on average (minimum 2, maximum 480). Since this test was performed, we have introduced the interval-constraint mechanism and rewritten large parts of the grammar. Therefore, the above results should be viewed only as an indication that the basic premise – that it is feasible to generate all syntactically valid parsings – is quite realistic. For comparison, we ran the same 80 clauses through the VISL parser. In almost all cases, the single derivation tree that was returned was at least syntactically possible. However, in about half of the cases, it did not match the correct tree, as it would be defined by a human parser – typically because of incorrect guesses about the nesting of prepositional phrases and subordinate clauses.


We also tried our corpus on the Curupira parser (version 1.0). That version seems configured to return only the first 4 parse trees found, in the order implied by the ordering of the rules in the grammar. According to the few tests we have run so far, its coverage seems to be still incomplete. However, the parser is still under development, and we expect its performance to improve considerably in future releases.

7 Future Work

We plan to continue improving the grammar in light of systematic tests. In particular, we plan to tune it, by adding or excluding rules, so as to minimize the number of spurious derivation trees returned while improving the success rate. We also intend to use a more compact representation for the output, namely a single tree with OR nodes to encode the multiple choices allowed for each syntactic node. Such an encoding would allow us to generate and represent exponentially many parse trees for any n-word sentence at polynomial cost.

Acknowledgments. We wish to thank Graça V. Nunes and the team at NILC – Núcleo Interdepartamental de Linguística Computacional of the University of São Paulo, in São Carlos – for kindly providing us a pre-release of the Curupira parser and allowing us to use the ReGra tagged dictionary. This work was supported in part by CAPES and CNPq.

References

1. E. Bick. The Parsing System "Palavras": Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework. PhD thesis, Aarhus Univ., 2000.
2. P. Cipro Neto. Gramática da Língua Portuguesa. Scipione, 1997.
3. W.F. Clocksin and C.S. Mellish. Programming in Prolog. Springer, 1994.
4. R.T. Martins, R. Hasegawa, and M.G.V. Nunes. CURUPIRA: Um parser funcional para a língua portuguesa. Technical Report NILC-TR-02-26, NILC-ICMC, Universidade Estadual de São Paulo, December 2002.
5. R.T. Martins, R. Hasegawa, M.G.V. Nunes, G. Montilha, and O.N. Oliveira Jr. Linguistic issues in the development of ReGra: A grammar checker for Brazilian Portuguese. Natural Language Engineering, 4(4):287–307, December 1998.
6. R.M. Mesquita. Gramática da Língua Portuguesa. Saraiva, São Paulo, 1999.
7. A.M. Perini. Gramática Descritiva do Português. Ática, 1996.
8. R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. A Comprehensive Grammar of the English Language. Longman, 1985.
9. L.A. Sacconi. Nossa Gramática. Atual Editora, São Paulo, 1984.
10. D. Santos. Um centro de recursos para o processamento computacional do português. Datagrama Zero – Revista de Ciência da Informação, 3(1), February 2002.

An Account of the Challenge of Tagging a Reference Corpus for Brazilian Portuguese∗

Sandra Aluísio1,2, Jorge Pelizzoni2, Ana Raquel Marchi2, Lucélia de Oliveira2, Regiana Manenti2, and Vanessa Marquiafável2

1 ICMC – DCCE, University of São Paulo, CP 668, 13560-970 São Carlos, SP, Brazil
{sandra,jorgemp}@icmc.usp.br
2 Núcleo Interinstitucional de Lingüística Computacional (NILC), ICMC-USP, CP 668, 13560-970 São Carlos, SP, Brazil
{raquel,lucelia,regiana,vanessam}@nilc.icmc.usp.br

Abstract. This article identifies and addresses the major linguistic/conceptual, as opposed to logistic, issues faced in the morphosyntactic tagging of MAC-Morpho, a 1.1 million-word Brazilian Portuguese corpus of newspaper articles developed in the Lacio-Web Project. Rather than simply presenting the annotated corpus and describing its tagset, we elaborate on the criteria for establishing the tagset and analyze some interesting cases amongst the linguistic problems we faced in this work.

1 Introduction

Annotated reference corpora, such as Susanne, the Penn Treebank or the BNC, have helped both the development of English computational linguistics tools and English corpus linguistics. Manually-annotated corpora with part-of-speech (POS) and syntactic annotation are costly, but they allow one to build and improve sizeable linguistic resources, such as lexicons or grammars, and also to develop and evaluate most computational analyzers. Usually, such treebank projects follow the Penn Treebank (http://www.ldc.upenn.edu/Catalog/docs/treebank2/cl93.html) approach, which distinguishes a POS tagging and a parsing phase, each comprising an automatic annotation step followed by manual revision. Recently, there have been several efforts to build gold-standard annotated corpora for languages other than English, such as French, German, Italian, Spanish and Slavic languages (http://treebank.linguist.jussieu.fr). For Brazilian Portuguese (BP), however, the picture is not so bright. With regard to manual morphosyntactic annotation, to the best of our knowledge, there are only two small Brazilian corpora which were used to train statistical taggers: (i) the 20,982-word Radiobras Corpus [1, 2], and (ii) the 104,966-word corpus built from NILC's corrected text base spanning 3 genres (news, literature and textbooks) [3]. There are, however, several (Brazilian and European) Portuguese corpora automatically annotated by Bick's [4] syntactic parser PALAVRAS (http://visl.hum.sdu.dk), which are part of the AC/DC project (http://www.linguateca.pt).

* This project is partially funded by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq). We are grateful to E. Bick for parsing MAC-Morpho.



In order to make freely available both corpora and computational linguistic tools which learn from raw and annotated corpora, such as POS taggers, parsers and term extractors, we have started the Lacio-Web project. Lacio-Web (LW), a two-year project launched at the beginning of 2002, tries to fill the gap with regard to linguistic resources and tools for BP. In this paper we present the rationale for building a 1.1 million-word corpus with manually validated morphosyntactic annotation (the results of the inter-annotator agreement evaluation and further logistic/historical detail have been published in [5]), including the criteria for establishing the tagset (Section 2), some linguistic problems we faced in this work (Section 3) and directions for further work (Section 4). This corpus was taken from a text collection from Folha de São Paulo (http://www.folha.uol.com.br/folha), which gives us high-quality contemporary Brazilian Portuguese from different authors and domains. The resulting annotated corpus (named MAC-Morpho) will be available in two versions: in annotators' format (one word per line, followed by its tag) and in the XML-compliant format proposed by EAGLES [6] (www.cs.vassar.edu/XCES).

2 Designing the Tagset

We analyzed the EAGLES recommendations for the Morphosyntactic Annotation of Corpora (http://www.ilc.pi.cnr.it), two of the most important tagsets designed for English (the 36-tag Penn Treebank tagset and the BNC project's¹ 61-tag C5 and 140-tag C7) and three other tagsets for Portuguese (NILC², PALAVRAS and Tycho Brahe [7], with 36, 14 and 48 tags respectively). Although there are already two tagsets for Portuguese (PALAVRAS and NILC) whose purpose is similar to ours, neither fulfills all the criteria we consider essential to our project. These criteria have been employed by and large by the Penn Treebank and Tycho Brahe projects. Even though the latter project also tackles Portuguese, it has been specifically designed to support diachronic research and, perhaps due to this, ends up with a conceptually different tagset from ours.

¹ http://www.hcu.ox.ac.uk/BNC/what/ucrel.html
² http://www.nilc.icmc.usp.br/nilc/TagSet/ManualEtiquetagem.htm

2.1 Criteria, Features, and Previous Work

Recoverability. Exploiting recoverability refers to avoiding the tagging of (morphological) details that can otherwise be easily recovered by querying a lexicon on the basis of the word and its tag alone. For example, the decision to have a unified "article" tag – instead of two or more, such as "definite/indefinite singular masculine article" – takes advantage of the automatic recoverability of any further features of interest, provided articles are not ambiguous with each other. This criterion ultimately leads to minimal tagsets with the sole purpose of disambiguation, i.e., a tagset suffices as long as every possible pair (word, tag) resolves to at most one single lexical entry (whatever an entry may be) or set of morphologically equivalent entries. The NILC tagset fails to exploit, for instance, the recoverability of the traditional Portuguese pronoun classes, ending up with 10 distinct pronoun tags. Were we to satisfy recoverability alone, 2 simple tags ("relative" and "non-relative" pronoun) would have exactly the same effect.

Syntactic Function (and Actuality). Notwithstanding, recoverability and its related morphological disambiguation efficiency are not enough, since we strictly understand that the ideal tagset should be optimal for supporting a subsequent full syntactic parsing step. In other words, it should entail as much syntactic inference as possible while not requiring its tagger to be a full-fledged parser, paradoxical though it may seem. Thus, recoverability is but a lower-bound measure, ever second to syntactic function, an eminently tag-multiplying factor. This paradox is not trivial, and the pitfall of reaching a fully syntactic, or simply overcrowded, tagset may seem unavoidable at first sight. Fortunately, we believe we managed to develop a sound twofold compromise criterion, namely:
• intra-word syntactic Distinctness preservation (or D-preservation): any two syntactically distinct occurrences of a word should never receive the same tag;
• inter-word syntactic Likeness preservation (or L-preservation): reciprocally, any two syntactically equal occurrences of different words should receive the same tag as long as morphological recoverability is left unharmed.
The application of D-preservation to our former two-tag treatment of pronouns ("relative" vs. "non-relative") leads to the LW Tagset's five pronoun tags, namely PROPESS (personal pronoun, of whatever grammatical case), PRO-KS-REL (relative subordinating pronoun), PRO-KS (non-relative subordinating pronoun, introducing noun clauses, such as "who" in "Please identify who the murderer is."), PROSUB (non-subordinating, non-personal pronoun as a nucleus, such as "who/this" in "Who/This is the murderer?") and PROADJ (non-subordinating, non-personal pronoun as a modifier, such as "this" in "This man is the murderer."). In these examples, and in accordance with the stated criterion, two syntactically distinct occurrences of "who/this" receive accordingly distinct tags. It is worth noticing that, by properly exploiting recoverability and syntactic encoding, our five-tag treatment of pronouns is more informative than that of the NILC tagset, despite the latter having twice as many pronoun tags. Moreover, syntactic function implies syntactic actuality, i.e., tags should clearly reflect the syntactic function of words in the clauses and phrases they belong to, which sometimes means departing from traditional (usually untenable) treatment. One such example is the introduction of the tag ADV-KS-REL (relative subordinating adverb) to account for relative "(P) onde // (En) where", "quando // when" and "como // how" (the latter is never relative in English, but arguably so in Portuguese), traditionally regarded as pronouns. That is not an unheard-of position, since PALAVRAS also treats these words as adverbs. But maybe a bit too eagerly: according to its POS tagset, e.g. "quando // when" is always an adverb, whereas we understand it may fall into four categories, namely KS (subordinating conjunction, in adverbial clauses), ADV-KS-REL (relative subordinating adverb, in relative clauses), ADV-KS (non-relative subordinating adverb, e.g. in indirect interrogative sentences) and plain ADV (non-subordinating adverb, e.g. in direct interrogative sentences).
To do PALAVRAS justice, however, we should notice that it is a parsing system, not a POS tagger, and its performance seems not at all hindered by such simplifications, which is the case exactly because (i) it is not based on the more common tagger-parser pipeline architecture and (ii) it avails itself of a host of secondary morphosyntactic tags.


The application of L-preservation is exemplified while discussing the immediately following criteria.

Consistency and Indeterminacy. A tagset is worth nothing if it does not provide for consistency, i.e. if its users (not only corpus annotators) are not likely to agree (including with themselves!) on how and when to use each tag. Even if we employed one single all-consistent, all-efficient annotator, users must be able to evaluate, understand and ultimately replicate their work. The pursuit of consistency is paramount, even to the detriment of other requirements. Specifically, consistency is not usually very partial to refinement, which here means syntactic or morphological detail. One such example is the contrast between past participles in adjectival position (e.g. "(P) a casa pintada // (En) the house (that has been) painted") and adjectives proper zero-derived from past participles (e.g. "(PBr) uma moça muito falada // (En) a young woman very much gossiped about"³), whose annotation was intended by the Lacio-Web team at first, but eventually had to be abandoned due to low inter-annotator consistency. The solution here was to resort to indeterminacy, introducing the (indeterminate) PCP tag, standing for "past participle or adjective zero-derived therefrom". Indeterminate tags are created by collapsing inconsistency-mongering tags, thus leading to smaller tagsets. Nonetheless, indeterminate tags are not always the best solution for inconsistency problems. Sometimes, the sound application of other criteria might come to one's rescue. One everlasting source of debate and inconsistency in Portuguese has been the contrast between nouns and adjectives. Unlike their English counterparts, most Portuguese nouns and adjectives can be used interchangeably, making it hard to determine the actual morphological specification of these words and whether nominalization is really taking place, so used to this operation are we native speakers. By simply prioritizing syntactic function, or rather, by upholding L-preservation, we were able to circumvent this delicate problem, the result being thus: every open-/closed-class occurrence happening to be the nucleus of a noun phrase is tagged N/PROSUB; and every open-/closed-class occurrence happening to modify a noun, ADJ/PROADJ or ART (article, whether definite or not). Even the words traditionally called "numerals" usually fall into either N or ADJ, again according to the syntactic function of each occurrence. Only cardinal numerals and all inflections of the word "(P) meio // (En) half" may receive the tag NUM (numeral), and do so only when occurring as noun modifiers, due to their remarkably distinct syntactic behavior in such cases. Therefore, those "numerals" never happening to be real noun modifiers (e.g. "bilhão/milhão // billion/million", "dezena // ten", "terço // third", "quarto // quarter") will never be tagged NUM.

Learnability. Finally, we cannot fail to mention that a most limiting factor on how syntactic the LW Tagset could get was, at all times, the assumption that machine learning technology would be applied to (a version of) the annotated corpus, namely the kind usual in POS taggers, which is blind to all but a very few words contiguously surrounding the current target word. Therefore, it seemed only fair to avoid all refinement that was really not likely to be learnt, such as the NILC tagset's annotation of verb transitivity.

³ Notice that, unlike English "gossiped", Portuguese "falada" cannot be accounted for by productive passive voice processes. That is exactly why the latter is regarded as a zero-derived adjective proper.


It is worth noticing at this point that it has never been our aim to deliver a ready-to-use training corpus, but rather one providing for (i) rapid (i.e. automatic) deployment of variously tagged (e.g. for various levels of refinement) training versions of itself and thus (ii) extensive and comprehensive experimentation. Just by way of illustration of how not-ready-to-use our corpus is, it should suffice to mention that some of its tokens are actually groupings of contiguous tokens in the original, resulting in what we call "compounds" (morphosyntactic units made up of two or more words, such as "(P) devido=a // (En) due=to"), which are tagged regularly, as if they were but one single word. Rather more training-friendly, in contrast, the NILC tagset also employs multiword morphosyntactic units, but tags each of their tokens separately with the same tag. Naturally, contiguous multiword units having the same tag will pose a segmentation problem to the NILC tagset's users.

2.2 The Current Tagset

Since the beginning of its development, in July 2002, the LW Tagset (Tables 1 and 2) has undergone cyclic revisions, being currently in its ninth version.

Table 1. Regular tags

Tag          Definition
ADJ          open-class noun modifier
ADV-KS-REL   relative subordinating adverb
ADV-KS       non-relative subordinating adverb
ADV          non-subordinating adverb
ART          article
KC           coordinating conjunction
KS           subordinating conjunction
IN           interjection
N            open-class noun phrase nucleus
NPROP        proper noun
NUM          numeral as a noun modifier
PCP          past participle or adjective zero-derived therefrom
PDEN         emphasis/focus
PREP         preposition
PROPESS      personal pronoun
PRO-KS-REL   relative subordinating pronoun
PRO-KS       non-relative subordinating pronoun
PROSUB       non-subordinating pronoun as a noun phrase nucleus
PROADJ       non-subordinating pronoun as a modifier
VAUX         auxiliary verb
V            non-auxiliary verb
CUR          currency symbol

Table 2. Complementary tags

Compl. Tag     Definition
|EST           foreign
|AP            apposition
|+             contraction/enclitic
|!             mesoclitic
|[  |...  |]   beginning, middle part, and end of discontinuous compound (further discussed in Section 3)
|TEL           phone number
|DAT           date
|HOR           time
|DAD           formatted data not falling into the above categories

At present it comprises 22 regular POS tags along with nine orthogonal complementary tags. The latter are thus called because they add to the information of the POS tags, to which they are optionally appended by means of the “|” symbol.
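To illustrate this notation (the following is our own sketch, not part of the Lacio-Web tools), a token in the annotators' one-word-per-line format, such as "de_PREP|]|+", can be decomposed into its word, POS tag and complementary tags:

```python
# Minimal sketch (not a Lacio-Web tool): split an annotated token like
# "de_PREP|]|+" into word, POS tag and complementary tags.

def parse_token(token):
    word, _, tags = token.rpartition("_")    # word and tag joined by "_"
    parts = tags.split("|")                  # "|" introduces complementary tags
    pos, compl = parts[0], ["|" + t for t in parts[1:]]
    return word, pos, compl

# Example: parse_token("de_PREP|]|+") -> ("de", "PREP", ["|]", "|+"])
# Compounds such as "devido=a_PREP" are handled too, since "=" joins the
# words before the single "_TAG" suffix.
```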


3 Some Emblematic Linguistic Challenges

NPROP – Proper Noun. In most respects, proper nouns are but nouns, especially in the relation they bear to noun phrases. What sets them apart is the prerogative of referring to one single entity of the real world: if X is a proper noun, X might even be shared by more than one entity (e.g. homonymous people), but that would imply no common properties whatsoever among sharers. Consequently, we should tag NPROP all those words that would otherwise be tagged N but happen to have strictly unitary extensions/indeterminate intensions. Such is our criterion for identifying proper nouns, which, clear though it may seem, leaves plenty of room for inconsistency. Problematic cases usually fall into the following categories:
• motivated NPROPs, or rather, those obtained by zero-derivation, e.g. "(PBr) Nordeste (Brazilian geopolitical unit) // (En) the Northeast", "Congresso // the Congress";
• metonymical NPROPs, e.g. "(PBr) gillette // (En) (brand of) razor blade", "bandaid", "danone // (brand of) yogurt", "fusca // a specific make of car or car of this make";
• NPROPs with context-dependent cardinality extensions, e.g. "(P) sol // (En) sun", "lua // moon" (cf. "A lua está bonita! // The moon is beautiful!" and "Quantas luas tem Júpiter? // How many moons does Jupiter have?"), "Congresso // Congress";
• NPROPs with apparently (and arguably) unitary extensions, e.g. "(P) xadrez // (En) chess", "HIV", "gripe // flu".

Compounds. The treatment of groups of words as morphosyntactic units (resulting in compounds, marked by replacing the spaces between their elements with the "=" symbol) is at once imperative and dangerous. It is imperative because, otherwise, how could one tag e.g. "apesar/acerca/cerca" apart from the preposition "de", as in "apesar/acerca/cerca de"? It is also dangerous because it is always difficult to establish clear criteria to decide whether to treat a given group as a compound. We chose the following ones:
• non-analyzability, which has already been implied, applying to "(P) apesar=de // (En) in=spite=of", "devido=a // due=to" and suchlike, and sanctioning compounds (i) whose part-wise tagging is impossible or much too artificial, generating syntactically exceptional sequences of tags, or (ii) whose (semantic) value seems not to be computable from the individual values of their elements;
• trade-off, recommending e.g. the consideration of many compound prepositions ("(P) antes=de // (En) prior=to", "depois=de // after", "perto=de // close=to", "longe=de // away=from", etc.) which could even be tagged as pairs of adverb plus preposition (introducing a complement of the corresponding adverb). However, we believe the latter possibility imposes an unnecessary cost on a subsequent syntactic analysis, since those are highly co-occurring items, expressing basic semantic relations (of time/space, among others) and generally behaving like any other one-word preposition;
• non-productivity, strongly correlating with non-analyzability and avoiding groups that, in fact, contain a currently productive syntactic-semantic structure, or rather, that are actually open-class. This criterion, for example, sanctions "(P) a=cavalo // (En) on=horseback" and "a=pé // on=foot" while banning "de carro/ônibus/trem/etc. // by car/bus/train/etc.".
As one can see, our criteria are tenable, though a bit fuzzy, resulting in some of our highest inter-annotator inconsistency rates [5], in spite of some consistency-assurance devices we have devised (such as a central repository of compounds and candidates thereof). It is worth noticing that nearly half the inconsistency is related to the creation of compound proper nouns, which is small wonder if one considers (i) how frequent proper nouns are in journalistic texts and (ii) how difficult it is to determine how many proper nouns (only one or more) should be found in e.g. the following phrases: "(P) Departamento de Computação do Instituto Tecnológico da Aeronáutica // (En) Department of Computation of the Airforce Technology Institute"; "Safári do Quênia // Kenya Safari"; "GP da Austrália de F1 // Australia's Formula One Grand Prix"; "o SESC de São Carlos // São Carlos SESC".

Discontinuity. One important, perhaps novel feature of the LW Tagset is the possibility of expressing discontinuity of morphosyntactic units, or rather, handling discontinuous occurrences of compounds, whether occasionally or necessarily so. That is realised by means of the complementary tags "[", "…" and "]" (respectively denoting the beginning, inner part and end of a discontinuous unit) and seemed to be a good solution for two serious problems, namely:
• "o mais ADJ/ADV possível": in Portuguese, structures like "(P) o(a) mais rápido(a) possível // (En) as soon as possible", "o mais eficiente(s) possível // as efficient as possible", "o mais à vontade possível // as at one's ease as possible" are hardly susceptible, if at all, to analysis on a word-by-word basis (it is vital to notice that both "o" and "possível" are invariable, while the inner adjectives are not). Even if we were to group "o mais" into a compound, how should we tag it and "possível" as independent entities? It seemed all the more appropriate to treat the whole "o=mais=possível" as a compound adverb and enable compound discontinuity. Hence the problematic structure can now be tagged thus: "o=mais_ADV|[ ADJ/ADV possível_ADV|]";
• Compound Disruption: perfectly eligible compounds sometimes have their usual continuity disrupted by extraneous elements inserted for emphasis or to prevent repetition of terms. Take e.g. the compounds "(P) apesar/antes=de_PREP // (En) in=spite=of/prior=to". They may well happen to occur as "apesar/antes até mesmo de // even in spite of/prior to", which can now be tagged thus: "apesar/antes_PREP|[ até=mesmo_PDEN de_PREP|]". One interesting example coming from our corpus is the following: "(P) ...atingem níveis internacionais devido tanto à valorização interna quanto à valorização... // (En) ...reach international levels due not only to internal valorization but also to...", tagged thus: "...atingem níveis internacionais devido_PREP|[ tanto_KC|[ a_PREP|]|+ a_ART valorização interna quanto_KC|] a_PREP|]|+ a_ART valorização... // ...reach international levels due_PREP|[ not=only_KC|[ to_PREP|] internal valorization but=also_KC|] to_PREP|]...".
It is worth noticing that this device seems to be quite suitable to represent diverse binary coordinating structures ("(P) tanto ... quanto/não só ... mas também // (En) not only ... but also", "nem/já/ora ... nem/já/ora // either ... or/now ... now ...", among others). A sketch of how such discontinuous units could be reassembled from the tags is given below.
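As an illustration only (this sketch is ours, not a Lacio-Web tool), the "|[", "|..." and "|]" complementary tags allow discontinuous compounds to be reassembled from a tagged token stream; matching each closer to the most recent open unit of the same POS copes, roughly, with interleaved units such as the devido/tanto example above:

```python
# Minimal sketch: group tokens carrying |[ , |... and |] into
# discontinuous compound units. Real corpus data (e.g. one opener resumed
# twice, as in "devido ... a ... a") would need additional care.

def collect_units(tagged_tokens):
    """tagged_tokens: (word, pos, compl_tags) triples, in text order.
    Returns lists of token indices forming discontinuous units."""
    units, open_units = [], []
    for i, (word, pos, compl) in enumerate(tagged_tokens):
        if "|[" in compl:                      # a unit opens here
            open_units.append((pos, [i]))
        elif "|..." in compl and open_units:   # inner part of latest unit
            open_units[-1][1].append(i)
        elif "|]" in compl:
            # close the most recent open unit with the same POS, so that
            # e.g. quanto_KC|] pairs with tanto_KC|[ and a_PREP|] with
            # devido_PREP|[ even when the units interleave
            for j in range(len(open_units) - 1, -1, -1):
                if open_units[j][0] == pos:
                    _, members = open_units.pop(j)
                    units.append(members + [i])
                    break
    return units

# Example: "o=mais_ADV|[ rápido_ADJ possível_ADV|]" yields the unit [0, 2].
```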


4 Current and Future Work

We have developed MAC-Morpho, a 1.1-million-word Brazilian Portuguese reference corpus which shall be freely available on the Lacio-Web Project page (http://www.nilc.icmc.usp.br/nilc/projects/lacio-web.htm). The total effort of tagging this huge corpus – including research on tagsets and tagging projects, corpus creation, writing the tagset manual, training the annotators, converting from Bick's tagset to our tagset, weekly meetings with the annotators, and revision – took 11 months and involved 7 person-months, 4 of them spent annotating the corpus. We ran two experiments to estimate inter-annotator agreement, which presented kappa values in the .81–1.00 interval, namely 0.944 and 0.955, showing almost perfect agreement. The next steps will be a finer-grained correction phase of MAC-Morpho, tackling the problems observed in the experiments, and a tagset evaluation following [8].

References

1. Marques, N.C., Lopes, J.G.P.: A Neural Network Approach to Portuguese Part-of-Speech Tagging. Anais do II Encontro para o Processamento Computacional de Português Escrito e Falado (1996) 1–9
2. Villavicencio, A., Viccari, R.M., Villavicencio, F.: Evaluating Part-of-Speech Taggers for the Portuguese Language. Anais do II Encontro para o Processamento Computacional de Português Escrito e Falado (1996) 159–167
3. Aires, R.V.X., Aluísio, S.M., Kuhn, D.C.S., Andreeta, M.L.B., Oliveira Jr., O.N.: Combining Multiple Classifiers to Improve Part of Speech Tagging: A Case Study for Brazilian Portuguese. Proceedings of SBIA'2000 (2000) 20–22
4. Bick, E.: The Parsing System "Palavras": Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework. Aarhus: Aarhus University Press (2000)
5. Aluísio, S. et al.: An account of the challenge of tagging a reference corpus of Brazilian Portuguese. Technical Report 188, ICMC-USP (2003). Also available at http://www.nilc.icmc.usp.br/~lacio_web/
6. Macleod, C., Ide, N., Grishman, R.: The American National Corpus: Standardized Resources for American English. Proceedings of the Second Language Resources and Evaluation Conference (LREC) (2000) 831–36
7. Galves, C., Britto, H.: A Construção do Corpus Anotado do Português Histórico Tycho Brahe: O sistema de anotação morfológica. Proceedings of PROPOR 99 (1999) 81–92
8. Déjean, H.: How to Evaluate and Compare Tagsets? A Proposal. Proceedings of the Second Language Resources and Evaluation Conference (LREC) (2000). Also available at http://www.sfb441.uni-tuebingen.de/~dejean/

Multi-level NER for Portuguese in a CG Framework

Eckhard Bick
Institute of Language and Communication, Southern Denmark University
[email protected]
http://visl.sdu.dk

Abstract. This paper describes and evaluates a linguistically based NER system for Portuguese, based on lexico-semantical information, pattern matching and morphosyntactic, context-driven Constraint Grammar rules. Preliminary F-scores for cross-domain news texts, when distinguishing six different name types, were 91.85 (raw) and 93.6 (subtyping of ready-chunked proper nouns).

1 Introduction

Named entity recognition (NER) in running text is a complex task with a number of obvious applications – semantic corpus annotation, summarisation and text indexing, to name but a few. This work focuses on Portuguese, and strives to distinguish between 6 main name type categories by linguistic rather than statistical means.

1.1 Previous Work

In recent years, a number of different approaches have been carried out and evaluated by the NLP community. Thus, at the MUC-7 conference (1998), competing systems were tested on broadcast news. The best performing system (LTG, Mikheev et al. 1998) used sgml-manipulating hand-crafted symbolic rules, a Hidden Markov Modeling (HMM) POS tagger, name lists, partial probabilistic matching and semantic suffix classes, achieving an overall F-measure¹ of 93.39, with recall/precision rates of 95/97, 91/95 and 95/93 for person, organisation and location, respectively. HMM results in isolation ("learned-statistical") can be regarded as a kind of baseline against which more sophisticated systems should be measured. A well performing example is the Nymble system (Bikel et al. 1997), which achieved F-scores of 93 and 90 for English and Spanish news text, respectively. Also, results for English were shown to be fairly stable down to a training corpus size of 100,000 words, indicating the cost/performance efficiency of the approach. Another automatic learning method, based on maximum entropy training (MENE), is described by Borthwick et al. (1998). This system showed a clear increase in performance with growing training corpora, with in-domain F-scores of 89.17 for 100,000 tokens and 92.20 for 350,000

¹ Defined as 2 × Recall × Precision / (Recall + Precision)



tokens. The authors also stress the potential of hybrid systems, with a maximum F-score of 97.12 when feeding information from other MUC-7 systems into MENE. One possible weakness of trained systems is indicated by the fact that in MUC's cross-domain formal test, F-scores dropped to 84.22 and 92 for pure and hybrid MENE, respectively. Another interesting baseline is human annotators' F-score, which at MUC-7 was reported as 96.95 and 97.60 (Marsh & Perzanowski, 1998).

1.2 Methodological and Data Framework

In this paper, I shall present a mostly linguistic approach to NER, combining different lexical, pattern and rule based strategies in a multi-level Constraint Grammar (CG) framework. This approach has previously been successfully applied to Danish (Bick, 2002) within the Scandinavian NER project Nomen Nescio. For Portuguese, the system builds on a pre-existing general Constraint Grammar parser (PALAVRAS, Bick 2000) with a high degree of robustness and a comparatively low percentage of errors (less than 1% for word class). Tag types are word based and cover part of speech, inflexion and syntactic function, as well as dependency markers. The language data used in this article are drawn from the CETEM Público news text corpus (Rocha & Santos, 2000), which has been grammatically annotated in a joint venture between the VISL project at Southern Denmark University and the Linguateca-AC/DC project at SINTEF, Norway (Santos & Bick, 2000).

1.3 Discussion of Name Categories

For this project, proper nouns are defined as upper-case non-inflecting words or word chains with upper case in the first and last parts. Simplex names in lower case (e.g. pharmaceuticals) are treated as nouns, as are nouns with an upper-case initial in mid-sentence, though the latter may be marked with a secondary tag for later filtering by corpus users. In agreement with the general Nomen Nescio strategy, 6 core categories were used (human, place, organisation, event, title, and brand/object):

Human Personal Names. This category covers (chains of) personal names (Christian, middle and surnames), possibly interspaced with structural lower-case particles (de, da, von, ten, ...) or prefixes (McNamara). Human name chains can be complements of title nouns, as in Dr. Mota, and of profession nouns (o jornalista Nelson Aguiar). Since the latter allow interfering modifiers (o jornalista português Nelson Aguiar), only the former were fused into the name chain. Lumped into the category are other animate names, both scientific species names (Mus musculus), pet names, gods and mythical beasts.

Place Names. As a prototype, this category has a clear syntactic and semantic context. However, human settlements (countries, towns, villages etc.) may also function as +HUM subjects of cognitive verbs, blurring the distinctional line between human and place names. For semantic reasons, and to allow for CG rules using +HUM and –HUM syntactic contexts, I have therefore introduced the civitas category for these names: Dinamarca, EUA, Coreia do Norte.


Buildings which, metaphorically, can offer, invite or earn receive a separate subcategory, institution, a kind of hybrid between place and organisation.

Organisations. This is another +HUM category, covering companies, sports clubs, movements and a further subcategory whose members often occur as abbreviations (ONU, FIFA). A special case is the media category (newspapers, radio channels and TV stations), which is systematically ambiguous between the organisation reading and the product/title reading.

Events and Occasions. This category is used for both natural and organized events, and includes the experimental weather subcategory (El Niño, hurricane names). There is a certain metaphoric transfer from sites to events (desde Maastricht), and some event names contain ordinal or year markers (Expo 98).

Semantic Products, Book, Film, and Music Titles. Semantically and formally, a distinction can be made between literary "running text" titles on the one hand ("O Grito Silencioso"), and "classifiers" on the other. The latter category is rarely quote-marked, does not exceed np-structure, and can usually be recognized by a classifying key nominal element: a Lei Áurea. A related, though more generic – "nominal" – (sub)category is that of disciplines and domains (Anatomia, Islã, Judo, Funk); languages, among others, form an experimental subcategory of it.

Object and Brand Names. This is the default object and waste-bin category, containing, besides brand names (Coca Cola, Linux), also the subclass of vehicles, covering both brand and individual names (Columbia, Santa Maria) and names ambiguous with or derived from company names (Peugeot, Konica E240).

2 System Architecture and Strategies

The system treats NER as a distributed task, matching the progressive-level architecture of the parser itself and applying different techniques at different levels.


2.1 Preprocessing

Besides more "ordinary" preprocessing tasks like sentence separation, this first module creates '='-linked name chains based on pattern matching (Edvard=Munch). Upper case is the obvious trigger (with some sentence-initial ambiguity), but certain lower-case particles (da, di, von, ibn) are allowed in the chain, as are numericals in certain patterns (version numbers, car names). A particular problem is the recognition of in-name punctuation (initials, Sta., H.M.S., jr., & Co., d'Ávila, web-urls, e-mails). Though the preprocessor does check its chunking against full name entries in the lexicon, proper nouns are a very productive class, and heuristic patterns may lead to overchunking (diretor de marketing para a Europa da Logitech). Here, a second lexicon lookup checks for recognizable parts of a name chain candidate, and re-splits it. Palmer & Day (1997) compared the coverage of inter-corpus name vocabulary transfer in 6 languages and found the second highest transfer rate for NEs in Portuguese (61.3%), after Chinese (73.2%) and way above English (21.2%), suggesting the importance of a lexicon module and gazetteer lists in Portuguese NER.

2.2 The Name Type Predictor

Some frequent names receive a semantic type tag already from the lexicon-based morphological analyzer module (which otherwise handles lemmatizing, inflexion and derivation). However, most proper nouns have to be typed later. The name type predictor is a semi-heuristic module, which has its own lexicon (ca. 16,000 entries), enabling it to match parts of names, for instance recognizing person names by looking up Christian names as first parts. Similarly, Shell=Portuguesa is typed as an organisation, because Shell is recognized as a company. Hyphen-bound pairs of recognized place names will be typed as event/path candidates (o corredor mediterráneo Valencia-Barcelona), an ambiguity that will be tackled later by contextual CG rules. In other cases, the type predictor tries to instantiate morphological and pattern-based type clues for the different categories, using both regular expressions and lists of key words (the category labels below follow Section 1.3; a rough code sketch is given after the list):
– titles/semantic products: quotes, in-name function words (articles, pronouns etc.), "semantic things" ("Voo das Borboletas", II Lei da Programação Militar, o Pacote Delors);
– media: e.g. Diário=, Revista=, =Times, Voz=, Channel= ...;
– events: e.g. Conferência=, Expo=, Guerra=, Rali=, =\'[0-9][0-9] ...;
– vehicles: e.g. Boeing/Mercedes/Toyota=, =Combi, =Sedan, HMS=, USS ...;
– brands/objects: e.g. Macintosh/Sanyo=[0-9], quality markers: =Extra, =de=Luxe;
– human names: e.g. suffixes: -sen, -sson, -sky, -owa; infixes: ibn, van, ter, y, zu, di; abbreviated and part-of-name titles: Sra., Madame, Mlle., sr., Mc=, Al=, =Khan;
– civitas names: e.g. República=, Puerto=, suffixes: -town, -ville, -polis (a number of these will receive both civitas and place tags for later disambiguation);
– place names: e.g. Cabo=, Praça=, Rua=, =Island, =de=Leste, =do=Sul;
– organisations: e.g. in-word capitals: [a-z][A-Z] (MediaSoft), "suffixes": Cia., & Co ..., type indicators: =Holding, =Clube, FC=, morphological indicators: -com, -tech, -soft;
– institutions: e.g. Universidade=, Tribunal=, =Hilton, Aeroporto= ...
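To make the chain-building and pattern-matching ideas concrete, here is a rough sketch (our illustration only, not the actual system's code); the chain regular expression, the suffix lists and the tag names are simplified stand-ins for the real patterns:

```python
import re

# Rough illustration only: build '='-linked name chains and guess a type
# from simplified word-final clues; everything else is left to CG rules.

# Capitalized words, optionally joined by lower-case particles (da, de, von...)
CHAIN = re.compile(
    r"[A-ZÁÉÍÓÚÂÊÔÃÕÇ][\w.]*"
    r"(?: (?:d[aeo]s?|von|di|ibn) [A-ZÁÉÍÓÚÂÊÔÃÕÇ][\w.]*"
    r"| [A-ZÁÉÍÓÚÂÊÔÃÕÇ][\w.]*)*"
)

SUFFIX_CLUES = {                 # simplified stand-ins for the real lists
    "sson": "hum", "sky": "hum",
    "town": "civ", "ville": "civ", "polis": "civ",
    "soft": "org", "tech": "org",
}

def chunk_names(sentence):
    """Return '='-linked name chain candidates found in a sentence."""
    return [m.group(0).replace(" ", "=") for m in CHAIN.finditer(sentence)]

def predict_type(chain):
    """Guess a name type from word-final clues; None = leave to CG."""
    last = chain.split("=")[-1].lower()
    for suffix, tag in SUFFIX_CLUES.items():
        if last.endswith(suffix):
            return tag
    return None
```

Run on "diretor de marketing para a Europa da Logitech", such a heuristic chunker would produce Europa=da=Logitech, illustrating exactly the overchunking problem that the second lexicon lookup described above has to repair.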


Of course, there may be interferences and contradictions between the patterns, so matchings are ordered, conditioned and iterated, and they do allow some ambiguity. Finally, the type predictor uses non-alphabetic characters, paired upper case, function words etc. to assign tags, preventing over-usage of this most common category in the CG-based part of the system.

2.3 The CG Modules

CG adds, selects or removes tags from words, i.e. performs word-based annotation by mapping and resolving ambiguities. Rules use sentence-wide context and have access to word class, inflexion, verbal and nominal valency potential as well as – in the Portuguese system – semantic prototype information for nouns and some verbal selection patterns. The "ordinary" (pre-existing) morphological and syntactic CG levels consist of about 7000 rules. Though only a small part of these tackles proper nouns, it is much safer to disambiguate contextually, say, sentence-initial imperatives from heuristic proper nouns, than at the pattern-matching stages. Of course, proper nouns can also, for their part, form valuable context for the disambiguation of other classes, and besides functioning as subjects and objects like other np's, they can fill certain more specific syntactic slots:
@N< (valency-governed nominal dependents): o presidente americano George Bush
@APP (identifying appositions): uma moradora do palácio, Júlia Duarte, ...
@N

9

⁹ The regular expression is given in the Intex format: <MOT> and <NB> are in-built symbols that stand for any word or any number, respectively; '*' is the Kleene operator.
¹⁰ A sampling procedure could be used instead.


determinants were generated, it would produce 2,663 different combinations. If not, only 7 different strings should be counted¹¹: (e meio + e [1] mês + e [2-11] meses + , [1] mês e [1] dia + , [1] mês e [2-30] dias + , [2-11] meses e [1] dia + , [2-11] meses e [2-30] dias). Probably, these finite sets should not count. Nevertheless, how would one decide where these limits should be imposed? Should even complements such as those above be counted or discarded? Should the same be done to recurrent sets of prepositions, determiners, adverbs and modifiers? It is difficult to give clear answers. Provisionally, only numerals and some limited sets have not been counted. Even so, the (16) haver Det ano family alone generates more than 9,000 different expressions; the (5) durante Det ano family is even more impressive: over 31,000 different expressions. Consider now the issue of linguistic adequacy. A simple measure could be defined as the percentage of correct expressions generated by the FST. For example, the FST for the (9) Prep este ano family generates 424 different expressions. It is necessary to verify each string manually in order to determine whether there is overgeneration of incorrect forms (fortunately, this was not the case here). However, verifying the language generated by some FST may not be so simple, as in the case of families (5) and (16) above. The number of (correct) expressions generated by an FST might be compared with silence, thus giving an approximation to the notion of linguistic coverage, if the corpus is considered to be representative of the phenomena under study. In spite of this, for some FST it may be convenient, for simplicity purposes, to loosen the formal constraints on word combinations, otherwise the set of FST would become too complex to manage and maintain. Over-generation may not necessarily affect the success rate, since clearly incorrect strings will probably not occur in texts. However, linguistic adequacy may be reduced.

4 Final Remarks

The size and complexity of complex multiword temporal adverbs should not be underestimated. They constitute a non-trivial challenge to computational processing (consider, for instance, the resolution of temporal references [10]). Such large FST may also significantly slow down the lexical processing of texts. In order to ensure efficient maintenance of cumulative data, the description needs to be structured with clear taxonomical principles, considering the overwhelming size and significant overlap of some families of expressions. The compromise between linguistic adequacy and efficiency in results should be taken into consideration when evaluating linguistic resources such as the lexical transducers described here.

¹¹ Numbers inside square brackets stand for ranges of both numerals and numbers.
References

1. ACL Workshop on Temporal and Spatial Information Processing. Toulouse, France (2001)
2. Baptista, J.: Manhã, tarde, noite. Analysis of temporal adverbs using local grammars. Seminários de Linguística 3 (1999) 5–31
3. Baptista, J., Català-Guitart, D.: Compound Temporal Adverbs in Portuguese and in Spanish. In: Ranchhod, E., Mamede, N. (eds.): Advances in Natural Language Processing. Lecture Notes in Computer Science, Vol. 2389. Springer-Verlag, Berlin Heidelberg New York (2002) 133–136
4. Baptista, J.: Some Families of Compound Temporal Adverbs in Portuguese. In: Laporte, E. (org.): Workshop on Finite-State Methods in Natural Language Processing at EACL'2003 (to appear)
5. Eleutério, S., Ranchhod, E., Freire, H., Baptista, J.: A System of Electronic Dictionaries of Portuguese. Lingvisticae Investigationes 19 (1995) 157–182
6. Gross, M.: Grammaire transformationnelle du français, Vol. 3: Syntaxe de l'adverbe. ASSTRIL, Paris (1986) 670 pp.
7. Gross, M.: The Construction of Local Grammars. In: Schabes, Y., Roche, E. (eds.): Finite State Language Processing. MIT Press/Bradford, Cambridge/London (1997) 329–354
8. Gross, M.: Construção de gramáticas locais e autómatos finitos. In: Ranchhod, E. (org.): Tratamento das Línguas por Computador. Uma Introdução à Linguística Computacional e suas Aplicações. Caminho, Lisboa (2001) 91–131
9. Hirschman, L., Mani, I.: Evaluation. In: Mitkov, R. (ed.): The Oxford Handbook of Computational Linguistics. Oxford University Press, Oxford (2003) 414–429
10. Martinez-Barco, P., Saquete, E., Muñoz, R.: A Grammar-Based System to Solve Temporal Expressions in Spanish. In: Ranchhod, E., Mamede, N. (eds.): Advances in Natural Language Processing. Lecture Notes in Computer Science, Vol. 2389. Springer-Verlag, Berlin Heidelberg New York (2002) 53–62
11. Maurel, D.: Adverbes de date: étude préliminaire à leur traitement automatique. Lingvisticae Investigationes 14–1 (1990) 31–63
12. Maurel, D.: Reconnaissance automatique d'un groupe nominal prépositionnel. Exemple des adverbes de date. Lexique 11 (1992) 147–161
13. Ranchhod, E.: O uso de dicionários e de autómatos finitos na representação lexical das línguas naturais. In: Ranchhod, E. (org.): Tratamento das Línguas por Computador. Uma Introdução à Linguística Computacional e suas Aplicações. Caminho, Lisboa (2001) 13–48
14. Ranchhod, E., Mota, C., Baptista, J.: A Computational Lexicon of Portuguese for Automatic Text Parsing. In: SIGLEX'99: Standardizing Lexical Resources. 37th Annual Meeting of the ACL, College Park, Maryland, USA (2001) 74–81
15. Silberztein, M.: Dictionnaires électroniques et analyse automatique de textes. Le système INTEX. Masson, Paris (1993) 234 pp.
16. Silberztein, M.: INTEX Manual. ASSTRIL, Paris (2000). http://www.bestweb.net/~intex/downloads/Manual.pdf

Evaluating Automatically Computed Word Similarity

Caroline Varaschin Gasperin and Vera Lúcia Strube de Lima

Faculdade de Informática – PPGCC, PUCRS
Av. Ipiranga, 6681, 90619-900 Porto Alegre, RS, Brasil
{caroline,vera}@inf.pucrs.br

Abstract. This paper aims to evaluate the accuracy of lists of semantically similar words generated automatically from corpora using a syntax-based technique. Such evaluation is done in a user-visible way: we select the query expansion task to apply our lists of words to, and then evaluate task performance on information retrieval in terms of precision and recall values. We present our experiments and results.

1 Introduction

This work considers the application of lists of semantically related words as a source of semantic knowledge for the query expansion task. These lists are generated automatically using a syntax-based, knowledge-poor technique for computing word similarity [1,2]. It is a difficult task to evaluate the quality of these lists in a systematic way, so we decided to apply them in a user-visible task and evaluate that task; in our case, query expansion. Query expansion is the technique of adding different words that can be used to represent the same query concept [3], in order to better catch the documents that the query is expected to retrieve. By doing this, we should obtain better results when retrieving information from document bases or from the Internet. One way to automatically expand a query consists in adopting semantic resources to provide words semantically related to those in the original query, which could enrich it. These resources are usually thesauri or lexical databases available for a certain language or application. It is very costly to build such lexical resources manually; it requires considerable time and human effort [4]. But it is possible to build them automatically from corpora by automatic lexical acquisition techniques, like the one we adopt to generate our lists of semantically related words. Our testing strategy consists basically of: (1) scanning the lists for the words inserted by the user in the query, looking for new semantically related words that could be added to the query; (2) submitting both original and expanded queries over our document base; and (3) computing precision and recall measures over the retrieved documents.


In the next section we detail how the lists of semantically related words are generated, that is, how word similarity is computed. In Sect. 3, we present how we use the lists in the query expansion task. In Sect. 4, we report our experiments. In Sect. 5, we make some final remarks concerning the work. Finally, in Sect. 6, we comment on how we intend to continue this work.

2 Computing Word Similarity

The technique used for computing word similarity [1,2] is an extension of Grefenstette's [4] strategy to obtain semantically related words from parsed corpora. This is a knowledge-poor, syntax-based technique, like the ones in [5,6,7,8,9]. There are other kinds of knowledge-poor techniques as well as knowledge-rich ones [10,11,12] to obtain semantic information from corpora. Other knowledge-poor techniques are window-based, like the ones proposed in [13,14]. Our technique includes basically three steps: extracting the syntactic contexts of each word in the corpus, comparing each pair of words using their syntactic contexts through a similarity measure, and building lists of the most similar words for each noun in the corpus.

2.1 Syntactic Contexts

Each syntactic relation between one word and another generates a syntactic context for both words. The following syntactic relations are considered:
– ADJ: an adjective modifies a noun;
– NNPREP: a noun modifies another noun, using a preposition;
– SUBJ: a noun is the subject of a verb;
– DOBJ: a noun is the direct object of a verb;
– IOBJ: a noun is the indirect object of a verb;
– SOBJ: relation between the noun that is the subject of a verb and the noun that is the object of the same verb.

Table 1 shows some examples of syntactic contexts extracted from the corpus. We call R1 (where R means any syntactic relation) the context extracted for the head word in the relation, and R2 the context for the other word. In nominal relations (ADJ, NNPREP, SOBJ) the head word is the (first) noun; in verbal relations (SUBJ, DOBJ, IOBJ) the head word is the verb. In the relations NNPREP and IOBJ we include the preposition as part of the context.

Table 1. Examples of binary syntactic dependencies

Sentence: A cidade inicia a colheita da maior safra de sua história. (The city begins the crop of the largest production of its history.)

Word      Contexts
cidade    <SUBJ2, iniciar>, <SOBJ1, colheita>
iniciar   <SUBJ1, cidade>, <DOBJ1, colheita>
colheita  <DOBJ2, iniciar>, <SOBJ2, cidade>, <NNPREP1 de, safra>
grande    <ADJ2, safra>
safra     <ADJ1, grande>, <NNPREP2 de, colheita>, <NNPREP1 de, história>
história  <NNPREP2 de, safra>

2.2 Word Similarity

To find the semantic similarity between words, we compare them through their syntactic contexts, using as similarity measure a weighted version of the binary Jaccard measure [4]. The weighted Jaccard (WJ) measure considers a global and a local weight for each context, and yields a value between 0 and 1. The global weight (shown in (1), where nwords means the number of different words


in the corpus) takes into account the amount of different words associated with a given syntactic context in the corpus. The local weight (2) is based on the frequency of occurrence of the context with a given word. The WJ formula is shown in (3), where tw corresponds to the total weight of a syntactic context with a given word: the product of its global and local weights.

gw(sc_j) = 1 + \sum_{i} \frac{f(w_i, sc_j)\,\bigl(\log f(w_i, sc_j) - \log f(sc_j)\bigr)}{f(sc_j)\,\log(nwords)}    (1)

lw(w_i, sc_j) = \log f(w_i, sc_j)    (2)

WJ(w_m, w_n) = \frac{\sum_j \min\bigl(tw(w_m, sc_j), tw(w_n, sc_j)\bigr)}{\sum_j \max\bigl(tw(w_m, sc_j), tw(w_n, sc_j)\bigr)}    (3)
By computing the similarity measure for all word pairs in the corpus, it is possible to generate the list of the most similar words for each word in the corpus. For this work, we extracted syntactic contexts only for nouns and adjectives, not for verbs, since we were only interested in generating lists of semantically related nouns and semantically related adjectives.
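For concreteness, here is a minimal sketch (ours, not the authors' code) of formulas (1)–(3), assuming the extracted contexts are given as a mapping from each word to its context frequencies:

```python
from collections import defaultdict
import math

# Minimal sketch of the WJ computation in (1)-(3).
# `contexts` maps each word to {syntactic context: frequency f(w_i, sc_j)}.

def total_weights(contexts):
    ctx_freq = defaultdict(int)            # f(sc_j): total context frequency
    by_ctx = defaultdict(dict)             # f(w_i, sc_j) grouped by context
    for w, cs in contexts.items():
        for c, f in cs.items():
            ctx_freq[c] += f
            by_ctx[c][w] = f
    nwords = len(contexts)
    gw = {}                                # global weight (1), one per context
    for c, wf in by_ctx.items():
        s = sum(f * (math.log(f) - math.log(ctx_freq[c])) for f in wf.values())
        gw[c] = 1 + s / (ctx_freq[c] * math.log(nwords))
    return {w: {c: gw[c] * math.log(f)     # tw = gw * lw, with lw as in (2)
                for c, f in cs.items()}
            for w, cs in contexts.items()}

def wj(tw, wm, wn):                        # weighted Jaccard (3)
    cs = set(tw[wm]) | set(tw[wn])
    num = sum(min(tw[wm].get(c, 0), tw[wn].get(c, 0)) for c in cs)
    den = sum(max(tw[wm].get(c, 0), tw[wn].get(c, 0)) for c in cs)
    return num / den if den else 0.0
```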

2.3 Lists of Words

We record the lists of semantically similar words in XML format. Figure 1 presents the list of nouns related to show. The attribute "value" corresponds to the similarity value calculated by the WJ formula. We decided to keep just the 15 most similar words in each list.

Fig. 1. List of nouns semantically related to "show".

3 Using Similar Words for Expanding Queries

The knowledge about word semantic similarity can be very helpful when submitting a query for document retrieval. This lexical knowledge contributes to expanding the query in order to get a larger number of relevant documents. Here we use the lists of semantically related words, generated automatically by the technique presented in Sect. 2, as our source of semantic knowledge. To evaluate how well such lists represent semantic similarity between words, we applied them to the query expansion task. We used a tool for expanding queries – QET – developed by Pizzato [15]. We load our XML file containing the lists of words, submit a query, and the tool suggests an expanded query. Internally, QET generates a graph where each word is a node linked with the words that form its list and also with the words in whose lists it appears. When a word is submitted in a query, the navigation through the graph starts at this word's node. All nodes linked to this node are visited, and all the neighbour nodes of a visited node are also visited. The links are weighted according to the distance (in levels) from the starting node. This weight is calculated by the formula in (4), where l corresponds to the current level relative to the starting node, and WJ is the similarity value between the words corresponding to the nodes.

wlink(w_m, w_n) = \frac{WJ(w_m, w_n)}{l}    (4)

A threshold value σ to stop the process is specified by the user, to avoid infinite loops. The nodes (words) are also weighted: a node's weight corresponds to the sum of the weights of the links that come into it. The nodes with weight higher than λ (also specified by the user) are included in the query. Note that the weights of nodes and links differ depending on the starting node (word in the original query). If the original query has more than one word, we have more than one starting node.
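A minimal sketch of this expansion walk (ours, not the actual QET implementation); `sim` is assumed to map each word to its list of (related word, WJ value) pairs, and the default σ and λ follow the example values used below:

```python
from collections import defaultdict

# Sketch of a QET-like expansion walk over the similarity graph.
def expand(query_words, sim, sigma=0.01, lambda_=0.20):
    node_weight = defaultdict(float)
    for start in query_words:                 # one walk per starting node
        frontier = [(start, 0)]               # (word, level from start)
        seen = {start}
        while frontier:
            word, level = frontier.pop(0)
            for neighbour, wj in sim.get(word, []):
                wlink = wj / (level + 1)      # formula (4): WJ / level
                if wlink < sigma:             # sigma stops the walk
                    continue
                node_weight[neighbour] += wlink   # weights accumulate over
                if neighbour not in seen:         # multiple incoming links
                    seen.add(neighbour)
                    frontier.append((neighbour, level + 1))
    expansion = [w for w, nw in node_weight.items()
                 if nw > lambda_ and w not in query_words]
    return list(query_words) + expansion
```

Note that a node reached through several paths (like apresentação in the example below) accumulates weight from each incoming link, which is what allows indirectly related words to qualify for the expanded query.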


Fig. 2. QET screen

Take as an example the following query, "show", composed of a single word. For this query, Fig. 2 shows some of the nodes visited and weighted by QET. For σ=0.01 and λ=0.20, the expanded query is "show exposição espetáculo apresentação". To illustrate how expansion works, let us analyse how the word apresentação had its weight calculated and came to be included in the query. The similarity value between show and apresentação is 0.072607, so this is the weight of the link between both words at level 1. But show's list also has the word evento, whose list in turn contains apresentação with similarity value 0.050369, and so on. Figure 3 shows how the 7 references to apresentação occur.

4 Experiments

We used the NILC¹ corpus to generate the word lists and also to evaluate the information retrieval results of the query expansion. This corpus contains 5093 articles published in Brazilian Portuguese in the newspaper "Folha de São Paulo" in the year 1994. The articles come from different sections of the newspaper and therefore cover several subjects. To evaluate how original and expanded queries perform in information retrieval, we applied some examples over our document base, which contains the same

N´ ucleo Interinstitucional de Ling¨ u´ıstica Computacional


Fig. 3. show's graph

documents used to generate the word lists. A human expert marked these documents as relevant or non-relevant to each example query. We submitted original and expanded queries and calculated the corresponding precision and recall values. We considered "OR queries", that is, queries that match documents containing any word of the query. We applied the process to 7 test queries. Results are shown in Table 2.

Table 2. Precision and recall results

Original query      P    R    Expanded query                                                                P    R
viagem+aviao        0.09 0.61 viagem+aviao+voo+pacote+temporada                                             0.06 0.82
acidente+automovel  0.27 0.89 acidente+automovel+defeito+abuso+queda+impacto+caminhao+transito              0.09 0.91
computador          0.14 0.80 computador+micro+versao+máquina+modelo+equipamento                            0.04 0.95
doenca+grave        0.21 0.39 doenca+grave+psiquiatrico+cardiovascular+circulatorio+cardiaco+mortal+cancer  0.26 0.65
fruta+tropical      0.11 0.44 fruta+tropical+meridional+deserto+havaiano+laranja+caribenho+fresco+acerola+suco+goiaba+papaia+polinesio  0.06 0.61
ensino              0.45 0.23 ensino+instrucao+idioma+televisao+conhecimento+aprendizado                    0.06 0.29
musica+brasileira   0.11 0.53 musica+brasileira+disco+obra+arte+banda                                       0.05 0.72


Note that the words included in the expanded queries are semantically related to the original words, as expected. There are examples of synonyms, like "computador"/"micro"; hyponyms, like "fruta"/"laranja" and "doença"/"cancer"; among other semantic relations. For the expanded queries, recall increased and precision decreased in comparison with the original queries. This is because the new words included in the query increase the number of retrieved documents, but also bring ambiguity into the query. According to [3], this is not a severe problem: it is worse to miss a good document than to make a few false matches.
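For reference, a tiny sketch (ours) of how precision and recall can be computed for such OR queries, given the expert's relevance judgements:

```python
# Hypothetical helper: precision/recall of an OR query over a document
# base, where `relevant` is the set of documents marked relevant by the
# human expert for that query.

def or_query(docs, words):
    """docs: {doc_id: set of words}; return ids containing any query word."""
    return {d for d, ws in docs.items() if ws & set(words)}

def precision_recall(retrieved, relevant):
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

Expanding a query enlarges the retrieved set, which is exactly why recall rises while precision falls in Table 2.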

5 Concluding Remarks

This paper presented how semantic knowledge from automatically generated lists of similar words can improve information retrieval through the query expansion task, and evaluated it. As it is difficult to evaluate the quality of the word lists independently of an application, in this work we did so by applying the lists to information retrieval through query expansion. At the same time, we validated the accuracy of the lists of semantically related words and verified that expanding queries with semantically related words increases recall in information retrieval. The recall improvement shows that the lists of similar words really contain the expected semantic knowledge, that is, the syntax-based technique used to extract semantic relations from corpora is quite effective. The use of QET was very appropriate for our evaluation, since it explores similarity at more than one level: it links the different word lists when computing word weights. By doing this, QET takes into account word similarity for the whole query instead of single words.

6 Future Work

As future work, concerning the computation of word similarity, we plan to refine the process of acquiring word syntactic contexts, since the Portuguese parser used to analyse the corpus, PALAVRAS [16], has a newer, improved version. Concerning query expansion, we intend to run more tests using QET to tune the sigma and lambda values. From another point of view, we plan to verify how a user would expand his original query if offered semantically related words, instead of specifying lambda to restrict the words that will be included in the query. Acknowledgements. We would like to thank CNPq/Brazil for the authors' research grants. We are also grateful to Luiz Pizzato for providing QET and for the help on using it, and to NILC and the AC/DC project for providing the corpus.



References

1. Gasperin, C., Gamallo, P., Agustini, A., Lopes, G., Lima, V.: Using syntactic contexts for measuring word similarity. In: Proceedings of the Workshop on Semantic Knowledge Acquisition and Categorisation, Helsinki, Finland (2001)
2. Gasperin, C.V.: Extração automática de relações semânticas a partir de relações sintáticas. Master's thesis, PPGCC – PUCRS, Porto Alegre (2001)
3. Voorhees, E.M.: Using WordNet for text retrieval. In: Fellbaum, C. (ed.): WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, Massachusetts (1998)
4. Grefenstette, G.: Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, USA (1994)
5. Berland, M., Charniak, E.: Finding parts in very large corpora. In: ACL Proceedings, Maryland (1999) 57–64
6. Finkelstein-Landau, M., Morin, E.: Extracting semantic relationships between terms: Supervised vs. unsupervised methods. In: Proceedings of the International Workshop on Ontological Engineering on the Global Information Infrastructure, Dagstuhl Castle, Germany (1999) 71–80
7. Hearst, M.A.: Automatic acquisition of hyponyms from large text corpora. In: Proceedings of the Fourteenth International Conference on Computational Linguistics, Nantes (1992)
8. Lin, D.: Automatic retrieval and clustering of similar words. In: Proceedings of COLING-ACL'98, Montreal (1998)
9. Ruge, G.: Automatic detection of thesaurus relations for information retrieval. In: Foundations of Computer Science. Springer, Berlin (1997) 499–506
10. Faure, D., Nédellec, C.: Asium: Learning subcategorization frames and restrictions of selection. In: 10th European Conference on Machine Learning (1998)
11. Resnik, P.: Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research 11 (1999) 95–130
12. Yarowsky, D.: Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In: Proceedings of the 14th International Conference on Computational Linguistics, Nantes (1992)
13. Crouch, C.J., Yang, B.: Experiments in automatic statistical thesaurus construction. In: Proceedings of the Fifteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Copenhagen (1992) 77–88
14. Thanopoulos, A., Fakotakis, N., Kokkinakis, G.: Automatic extraction of semantic relations from specialized corpora. In: Proceedings of COLING 2000, Saarbrücken (2000)
15. Pizzato, L.A.: Estrutura multitesauro para recuperação de informações. Master's thesis, PPGCC – PUCRS, Porto Alegre (2002)
16. Bick, E.: The Parsing System Palavras: Automatic Grammatical Analysis of Portuguese in a Constraint Grammar Framework. PhD thesis, Århus University, Århus (2000)

Evaluation of a Thesaurus-Based Query Expansion Technique

Luiz Augusto Sangoi Pizzato and Vera Lúcia Strube de Lima

Pontifícia Universidade Católica do Rio Grande do Sul – PUCRS
FACIN – PPGCC
Av. Ipiranga, 6681, Prédio 16, sala 106
90619-900 Porto Alegre – RS / Brasil
{pizzato, vera}@inf.pucrs.br
http://www.pucrs.br/inf/pos/

Abstract. This paper concerns the use and evaluation of a thesaurus-based query expansion method in information retrieval. The query expansion process assigns weights to the different types of relations obtained from vocabulary structures, providing an efficient way to measure the semantic distance between different terms. This method, tested for Portuguese, improved overall information retrieval performance on a small corpus and over the Internet.

1 Introduction

Websites offering comprehensive Internet search are perhaps the best known Information Retrieval (IR) interfaces. An IR interface receives a user's information need by way of a query, expressed as a natural language sentence. The expected answer from an IR system is the set of documents representing all the information relevant to that need. Although today's systems seem to achieve good precision¹, using them requires some searching practice to obtain wider coverage (better recall²). An IR system cannot guarantee full precision and recall because of the nature of its queries and documents. Both the documents expected to be returned and the queries posed to the system are written in natural language, so both are subject to ambiguity. According to Voorhees [7], the decrease in precision due to homographic words is not severe, since the other words of a query can establish the right context. On the other hand, a query will not return documents whose words do not match it, even when their subjects are alike. As natural language can express the same concept using different words, there will always be some loss of recall. In this paper we present an automatic method that modifies a query made by a user, in order to have more relevant documents included in the result set.

¹ Percentage of the documents retrieved by the IR system that are relevant.
² Percentage of relevant documents retrieved over all relevant documents in the document base.
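In the usual set notation, these two measures correspond to:

\[ \mathrm{precision} = \frac{|\,\mathrm{relevant} \cap \mathrm{retrieved}\,|}{|\,\mathrm{retrieved}\,|}, \qquad \mathrm{recall} = \frac{|\,\mathrm{relevant} \cap \mathrm{retrieved}\,|}{|\,\mathrm{relevant}\,|} \]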



The method is known as query expansion, or query enrichment, and it is normally performed by the user himself, based on his previous knowledge of the subject: when the user does not get the expected documents, he can submit alternative queries with different words that express the same meaning or idea. To perform such enrichment automatically, the process must rely on some knowledge base, such as dictionaries or thesauri. A thesaurus is a lexical database that presents the meaning of its terms by semantically linking them; it is thus a way to obtain different terms representing similar or related concepts. Several studies consider the use of a thesaurus for modifying a user query; Robin & Ramalho [5], for example, used synonym and hypernym relations from WordNet to generate an enriched query. In this paper we propose a unified technique that considers the different types of semantic relations of a thesaurus.

This paper is organized as follows: Sect. 2 presents the developed query expansion method; Sect. 3 presents an evaluation of this expansion method over a 1,323,700-word corpus and over the Internet; Sect. 4 presents the concluding remarks.

2 Thesaurus-Based Query Expansion

Let a thesaurus be a tuple (T, R1, R2, ..., Rn) where:

– T is a set of terms: T = {t1, t2, ..., tn}
– Rk is a set of pairs (u, v) representing relations of type k between terms: Rk = {(u, v) | u, v ∈ T, u is related to v by a semantic relation of type k}

In our query expansion method, we look for terms of the user's query that exist in the thesaurus. We consider a query to be a set of terms Q = {q1, q2, ..., qn | qi ∈ T}. We consider any type of relation between terms present in the thesaurus; our method does not take into account the meaning of the relations, but lets the user decide the importance of every relation type to the IR process. So, every relation type in the thesaurus is assigned a weight W in the interval [0, 1):

∀Rk ∃Wk ∈ [0, 1)

The weight of a relation type is also associated with each of its relations:

∀(u, v) ∈ Rk: W(u, v) = Wk

Our method can also be used with automatically built thesauri. Normally these corpus-based thesauri have a confidence measure associated with their relations. We take this measure as a value V(u, v) between 0 and 1 for automatically extracted relations, and as 1 for the relations of manually built thesauri:

∀(u, v) ∈ Ri: V(u, v) ∈ [0, 1], if Ri is a relation extracted from corpora;


∀(u, v) ∈ Rj: V(u, v) = 1, if Rj is a manually defined relation.

Using weights lower than 1 makes it possible to compute an importance value for a path of relations between two terms. A non-cyclic path between two terms is defined as follows (with R = R1 ∪ R2 ∪ ... ∪ Rk):

P(a1, ak) = {(a1, a2), (a2, a3), ..., (ak−1, ak) | (ai, ai+1) ∈ R, al = am ↔ l = m}

The importance value between two terms is denominated β, and is computed as the product of the weighted relation values along the path:

\[ \beta[P(a_1, a_k)] = \prod_{i=1}^{k-1} W(a_i, a_{i+1}) \times V(a_i, a_{i+1}) \]

The β value can also be understood as a semantic distance measure. As in Tudhope et al. [6], the use of different weights allows two terms to be semantically close even when there is a long path of relations between them; likewise, two terms can be few steps apart in path length yet be semantically very distant. Semantic proximity is measured by how strong the relations between the two terms are.

Normally a thesaurus is a very large lexical database, with many terms and many relations between them. If we let the computation of β values reach every term it can find, the calculation becomes extremely time-consuming. To shorten these paths, we define a σ value that represents a lower bound of importance in the β calculation: a path of relations is not extended further once its β falls below σ.

Our goal in this process is to find terms in the thesaurus that may bring better results to a query. For this, we compute a value δ[Q, v] that represents the importance of a term v given all terms of the query Q, calculated as the sum of the β values for that term with respect to each query term:

\[ \delta[Q, v] = \sum_{i=1}^{n} \beta[P(q_i, v)], \qquad q_i \in Q, \;\; \beta[P(q_i, v)] \geq \sigma \]

At this point, we have an importance value attached to terms, which is used to decide which terms will be considered for enriching the query. This decision is supported by a threshold, denominated the λ value: if a term has δ of at least λ, it will be used. The enriched query EQ is represented by:

EQ = Q ∪ {t | t ∈ T, δ[Q, t] ≥ λ}
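To make the traversal concrete, the following sketch shows one possible implementation of this expansion. It is our own illustrative reading, not the authors' actual tool, and it assumes the merged thesaurus is supplied as a list of weighted, directed edges; all function and variable names are hypothetical.

```python
from collections import defaultdict

def build_graph(edges, weights):
    """edges: iterable of (u, v, rel_type, V) tuples, where V is the relation's
    confidence (1.0 for manually built thesauri); weights maps each relation
    type to its W in [0, 1), cf. Table 1."""
    graph = defaultdict(list)
    for u, v, rel, conf in edges:
        graph[u].append((v, weights[rel] * conf))  # step value = W(u, v) x V(u, v)
    return graph

def best_betas(start, graph, sigma):
    """Best beta from `start` to every reachable term over non-cyclic paths;
    a path stops being extended once its beta falls below sigma."""
    best = {}
    stack = [(start, 1.0, frozenset([start]))]
    while stack:
        term, beta, visited = stack.pop()
        for nxt, step in graph[term]:
            b = beta * step                  # beta: product of W x V along the path
            if nxt in visited or b < sigma:  # keep paths non-cyclic; prune weak paths
                continue
            best[nxt] = max(b, best.get(nxt, 0.0))
            stack.append((nxt, b, visited | {nxt}))
    return best

def expand_query(query, graph, sigma=0.01, lam=0.60):
    """Return EQ: the query plus every term t with delta[Q, t] >= lambda."""
    delta = defaultdict(float)
    for q in query:
        for term, beta in best_betas(q, graph, sigma).items():
            delta[term] += beta              # delta sums beta over all query terms
    return set(query) | {t for t, d in delta.items() if d >= lam}
```

With the Table 1 settings, for instance, a direct ET synonym of one query term reaches δ = 0.80 ≥ λ = 0.60 and is added, while a term reachable only through two RT links has β = 0.10 × 0.10 = 0.01, which just survives the σ cut but stays well below λ.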

Query expansion is used along with an IR system in order to measure the effectiveness of the task; the next section shows the evaluation process.

Table 1. Values and weights used

Parameter   Value     Parameter   Value
ET          0.80      SY          0.20
NT          0.60      λ           0.60
BT          0.30      σ           0.01
RT          0.10

3 Evaluation

The evaluation of a query expansion procedure must regard the effect of enriching the query on the results of an IR system. In our evaluation process we used four different thesauri containing Brazilian Portuguese terms: (1) the VCBS, a thesaurus on CD-ROM organized by the technical department of the library of USP (São Paulo University); (2) the VCBS, a thesaurus built by the sub-department of the Brazilian Federal Senate Library; (3) the LDPUCRS, a vocabulary list used by the librarians of PUCRS University; and (4) the LTOCSS, a thesaurus built automatically over a newspaper corpus using Grefenstette's [2] similarity measures, adapted to Portuguese by Gasperin [1]. We used these four structures in a merged approach: we organized them in a standard XML structure and unified their terms and relations on the fly in a query expansion tool. This approach was shown to improve overall system performance compared to single or separate use of the thesauri (more details can be found in [4]).

The semantic relations present in these thesauri are: ET, equivalent terms (synonyms, quasi-synonyms or lexical equivalents); BT, hypernym (broader) terms; NT, hyponym (narrower) terms; RT, a non-specific relation, originally from the manually constructed thesauri; and SY, a non-specific relation, originally from the corpus-based thesauri. The weights associated with the semantic relations and the values of λ and σ used in the evaluation process are shown in Table 1.

The evaluation was performed in two different phases: an evaluation on a small, controlled corpus, and an evaluation over a highly dynamic corpus, the Internet.

3.1 Evaluation over a Small Corpus

The corpus used in our study is the NILC corpus: the Brazilian newspaper "Folha de São Paulo", year 1994, with 1,323,700 words in 5,093 different articles. The articles were separated into different files, so that they could be indexed and searched by an IR tool³. A relevance markup of search topics was made over the articles using exhaustive manual search, i.e., a human specialist investigated the totality of the corpus looking for the documents relevant to a specific topic. This process is very expensive: it takes an average of 8 hours of work per topic to assure nearly 100% coverage of the relevant documents.

³ We used ASPSeek (http://www.aspseek.org).


Fig. 1. Recall in a small corpus

Fig. 2. Precision and F-measure in a small corpus

We made the markup for 13 different topics, and formulated an original query for every topic. Each original query was submitted to a query expansion tool implemented according to the method described in Sect. 2. The result of this tool is an enriched query, containing the terms of the original query plus the terms offered by the query expansion method. The expansion terms are inserted into the query with the connector "or", since those terms are computed according to all terms of the original query.

The IR results show that in all cases the recall measure increased substantially. This situation is easily understood: with more terms in the query (joined by an "or" connector), more documents will be retrieved, which means that the recall measure can never decrease. Figure 1 shows the recall results for the different queries. On the other hand, the more new terms are added, the more subject to ambiguity the query becomes. Although precision usually decreases when new terms are inserted (see precision in Fig. 2), our tests show that it is not always lower for the expanded query: there are queries that retrieve no relevant documents, and when they are expanded, both recall and precision improve.

Table 2. Average results in the small corpus

Query               Precision   Recall     F-Measure
Original            0.4499      0.2389     0.3121
Expanded            0.3778      0.5010     0.4307
Increase/Decrease   -16.02%     +109.71%   +38%

Since we study both the recall and the precision behavior, we need a measure that combines them: the harmonic mean of the two (the F-measure) will be used to decide whether a query was improved by its expansion. Our results show that the query expansion method sometimes improved the IR process and sometimes made it worse (see the F-measure in Fig. 2). Averaging the results of the 13 queries performed on the corpus (see Table 2), precision decreased by 16.02%, recall increased by 109.71%, and the F-measure increased by 38%. The improvement in F-measure indicates that our query expansion method benefited overall IR effectiveness.
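For reference, the F-measure used here is the balanced harmonic mean of precision and recall; applied to the averages of Table 2 it gives, for the expanded queries:

\[ F = \frac{2PR}{P + R}, \qquad F_{\mathrm{expanded}} = \frac{2 \times 0.3778 \times 0.5010}{0.3778 + 0.5010} \approx 0.4307 \]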

3.2 Evaluation over the Internet

The Internet is today's biggest repository of documents, and web search is the best known IR interface in use. Since our subject of study applies directly to the IR area, we also evaluated our query expansion method over the Internet. We decided to use AltaVista Brazil⁴, because it is one of the largest current web search engines and it does not suffer from some issues that appear in other search engines, such as:

– A large number of "clone" documents, i.e., a document repeated at different URL locations, appearing more than once in the search results. This occurs, for instance, in the Radix⁵ search engine.
– No Boolean search implemented. This occurs in the TodoBR⁶ search engine, for example.
– Inability to handle more than 10 words per search. This occurs in the Google⁷ search engine.

A total of 14 searches were performed, in their original and expanded formats, and the top-50 documents of every search were analyzed. Precision could easily be obtained, but recall could only be estimated. At first, we took recall to be the number of relevant documents retrieved by a query over the number of relevant documents in the union of the result sets of the original and expanded queries.

⁴ http://www.altavista.com.br
⁵ http://www.radix.com.br
⁶ http://www.todobr.com.br
⁷ http://www.google.com.br


Table 3. Average results in the Internet corpus

Query               Precision   Recall     F-Measure
Original            0.8677      0.2577     0.3190
Expanded            0.7387      0.5384     0.6114
Increase/Decrease   -14.87%     +108.90%   +91.69%

This gives a recall value under the assumption that those unified sets contain all the documents that could possibly be retrieved. However, it showed that this recall only increased when few documents were retrieved by the original query. As stated before, in our method of query expansion the recall measure can only increase. As expected, the total number of documents retrieved by the system was larger for the expanded queries (+73.85%). So, we estimated the total recall by assuming stable precision and then estimating the number of relevant documents retrieved by each query: assuming the precision measured over the top-50 documents holds for the full result set, the number of relevant documents retrieved by a query is its precision times the number of documents it retrieves, and the total number of relevant documents is estimated by dividing the expanded query's estimated number of relevant retrieved documents by its top-50 recall. With these numbers it is possible to compute an estimated recall for the whole search. The results, presented in Table 3, show a decrease in precision and an improvement in recall; together, the two values result in a better F-measure, representing 91.69% better IR results for the expanded queries.

The results obtained in the Internet evaluation are very similar to those for the small corpus: precision decreased by about 15% and recall improved by about 109%. The gain was thus very similar, even though the original Internet searches had high precision with low recall, while the small corpus searches had medium precision with low recall. Due to this initial state, the overall gain (F-measure) was larger in the Internet IR.

Mandala et al. [3] used different thesauri for query expansion in a way similar to ours, and their results were also similar, since both studies increased overall retrieval performance. The main differences are that they used a different method of computing similarity measures for each type of thesaurus, while we used a single method for all types, and that Mandala et al. developed their study for the English language. Several other studies on query expansion can be found, but it is difficult to establish a true comparison with ours, since the resources used are in different languages.

4 Concluding Remarks

In this paper we proposed and evaluated a thesaurus-based query expansion method. The method uses different weights to compute the importance of thesaurus terms with respect to the terms of an IR query. It also walks through paths of relations inside the thesaurus, making it possible to use terms not directly related to the query.


We presented an evaluation of the method using a small corpus and a very large one. The evaluation shows that our query expansion degrades the precision of the results but, at the same time, considerably improves recall. In terms of F-measure, query expansion improved IR search both on the small corpus and on the Internet.

References

1. Caroline Varaschin Gasperin. Extração automática de relações semânticas a partir de relações sintáticas. Master's thesis, PPGCC, FACIN, Pontifícia Universidade Católica do Rio Grande do Sul, November 2001.
2. Gregory Grefenstette. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, USA, 1994.
3. Rila Mandala, Takenobu Tokunaga, and Hozumi Tanaka. Complementing WordNet with Roget's and corpus-based thesauri for information retrieval. In Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics (EACL'99), pages 94–101, 1999.
4. Luiz Augusto Sangoi Pizzato. Estrutura multitesauro para recuperação de informações. Master's thesis, PPGCC, FACIN, Pontifícia Universidade Católica do Rio Grande do Sul, January 2003. Available online at: http://www.inf.pucrs.br/~pizzato/dissertacao/dissertacao.pdf.
5. Jacques Robin and Franklin Ramalho. Empirically evaluating WordNet-based query expansion in a web search engine setting. In Proceedings..., Oulu, Finland, September 2001.
6. Douglas Tudhope, Harith Alani, and Christopher Jones. Augmenting thesaurus relationships: Possibilities for retrieval. Journal of Digital Information, 1(8):1–20, February 2001.
7. Ellen M. Voorhees. Using WordNet for text retrieval. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, Massachusetts, 1998.

Cooperatively Evaluating Portuguese Morphology

Diana Santos¹, Luís Costa¹, and Paulo Rocha²

¹ Linguateca, SINTEF Telecom & Informatics, Pb 1124 Blindern, 0314 Oslo, Norway
{Diana.Santos,Luis.Costa}@sintef.no
http://www.linguateca.pt
² Linguateca, Departamento de Informática, Universidade do Minho, Campus de Gualtar, 4710-057 Braga, Portugal
[email protected]

Abstract. This paper describes the first attempt to evaluate morphological analysers for Portuguese through an evaluation contest. It emphasizes the options that had to be taken, which brought to light considerable disagreement among the participating groups, and it describes the trial intended to prepare the real contest in June 2003, its goals, and its preliminary results.

1 Introduction

Morphological analysers for Portuguese constitute an almost unavoidable first step in natural language processing. No matter whether one is mainly concerned with parsing, information retrieval or corpus exploration, most groups have to deal with some form of morphology. It is therefore no surprise that, in the spirit of our project, the first evaluation contest we organized was in the realm of morphology.¹ Nor is it surprising that we were able to run a trial with six different systems in September/October 2002, and are expecting more participants in the forthcoming Portuguese Morpholympics (the Morfolimpíadas), scheduled for June 2003.

The evaluation contest model is well known in the NLP community, and we refer elsewhere to its description [1,2]. As far as we know, it has only once been applied to morphology, for German: the 1994 Morpholympics [3]. Inspired by the message understanding conferences (MUC) model, we performed an initial trial to evaluate several possible courses of action and to help organize the larger event later, taking into account Hausser's experience and recommendations.

The evaluation should be based both on a set of forms with the "right" analyses (the golden list) and on scores computed over larger amounts of text, for which there was no previous solution. The trial should be as similar as possible to the final event, but its scores would not be made public. Trial participants should benefit not only from the experience, but also from constituting the organising committee of the major event and helping to shape its final form.

¹ There was a preliminary hearing of the Portuguese (language) NLP community on which areas there was more interest in evaluating; morphology, corpora and lexica came up first. See http://www.linguateca.pt/AvalConjunta/.



In addition to running a rehearsal of the competition, we wanted to: a) try out which input and output forms were better; b) test whether it was possible to create a golden list representative of the morphological knowledge required and of the morphological problems to be solved; c) investigate whether there were significant performance differences per text kind, variant, genre, etc.; and d) find measures that could adequately represent performance and highlight meaningful differences between systems.

2 Test Materials Creation

First, we asked the participants to cooperatively build a golden standard, by sending us a set of 20–30 judiciously chosen forms with the right analyses to be used in the contest, preferably in the format of their own system. It was stressed that the analysis of those forms did not need to reflect real performance; rather, it should represent ideal performance. The first task was then to merge the different items sent by 8 different sources for inclusion in the golden list, while at the same time compiling a set of test texts that embedded all the forms, preventing the golden list items from being identified.

2.1 Test Texts

The test texts were amassed by randomly extracting chunks containing the forms of the golden list from a wide variety of distinct sources, maximizing the set of different available categories (such as subject area, kind of newspaper, translated text or not, etc.) over all the corpora available (see Table 1).

Table 1. Distribution of the test texts (according to the organization's tokenization, reflected in the uul format)

Variant      Words   Texts        Genre                Words   Texts
Total        39,850  199          Total                39,850  199
Brazilian    16,132  82           Newspapers           23,823  118
Portuguese   21,206  113          Original fiction     836     3
African      1,390   4            Translated fiction   3,117   18
Unknown      512     1            Web / email          3,333   19

The texts were provided to the participants in three formats: uts, an alphabetical list of the (single-word) types in all texts, accompanied by a small mweuts, an alphabetical list of a few possible² multiword types in the texts; uul, a pre-tokenized text, one word per line; and ts, running text.

² Containing both some commonly considered idioms or locutions, and sequences of two or three words that simply occurred in sequence.


The participants had one week to provide the output of their morphological analysers for all forms in sequence, for each of the four files. (A morphological disambiguation task was also included for those groups whose systems could perform it; in that case, the goal was to find the right analysis of each form in context.) The output of the systems would then be compared with the right answers of the golden list, and compared among the systems in general. This presupposed the translation of each output into a common format, and the creation of programs for comparing and evaluating the results. The whole organization philosophy gives the least work to the participants, putting the burden on the organization.

2.2 Golden List Compilation

Before any of these tasks could be undertaken, however, it was necessary to produce a golden list on which all participants agreed. Perhaps not surprisingly, the compilation of the golden list turned out to be an extremely complex task, and involved an endless number of versions until its final form. Let us give some simple statistics: the final golden list contained 200 forms having 345 analyses (on average, 1.73 analyses per form). 113 forms had one analysis, 52 two, 20 three and 15 four or more. 114 analyses pertained to verbs, 95 to nouns, and 14 were labelled proper nouns. Defining weight as the ratio of (e.g., verb) analyses over all analyses of the forms which have verb readings, we find the verb weight, noun weight and proper noun weight to be .61, .540 and .583 respectively. Of the entries, 9 were multiword expressions, 5 were cliticized verbs and 7 contractions, 14 were hyphenated (of different kinds) and 3 forms were deviant (with one analysis only), including foreign words, common spelling mistakes, and neologisms.

Even though the trial organizers had restricted the competition to quite consensual categories (we thought), such as gender, number, lemma or base form, and tense, as well as the occurrence of superlative or diminutive, we found a surprisingly larger span of fundamental differences between systems than expected.³

By preliminary inspection of the participants' output formats for a set of forms, and using common knowledge about Portuguese, we had already come up with a set of encoding principles in order to minimize encoding differences, e.g.:

- whenever there is total overlap between verbal forms, we encode only one (as is the case of the 3rd person singular personal infinitive and the impersonal infinitive; the 3rd person plural Perfeito and Mais-que-perfeito; etc.)
- we joined verbs with clitics as one form, and erased as irrelevant (for purposes of common evaluation) all sublexical classifications: e.g. for fá-lo-ia ('would do it'), some systems would return separately all possibilities for fá, for lo and for ia as independent words.

Still, while looking at the set of golden candidates chosen by each participant, we had to confront much deeper disagreement, as listed in the next section.

³ The categories were extensionally defined as the pairwise intersection of what was provided by the actual systems, a sample of whose output had been requested at an early stage.


2.2.1 Different Linguistic Points of View

During the process of harmonizing the golden list, we found the following "theoretical disagreements" (as opposed to actual occurrences of different analyses of specific items): differences about PoS categorization (cases 1–4); differences about which information should be associated with a given PoS (5–7); differences about base form or lemma, perhaps the most drastic and the ones affecting the largest number of golden list items (8–10); differences about the values of a given category (11–12); and differences on what should be done by a morphological analyser at all (13–14).

1. some researchers would have a given word ambiguous between noun and adjective; others considered that it belonged to a vague noun-adjective category
2. some words were considered either proper names or nouns
3. quite a lot of words were considered past participle, adjective, or both
4. there was no agreement on whether a morphological analyser should return the PoS "adverb" for clara, given that in some contexts adverbs in -mente drop the -mente suffix, as in clara e sucintamente
5. some systems do not consider gender a feature of past participles
6. gender/number for proper names: there is internal gender, but a proper name can also often be used to identify all kinds of entities
7. should gender be assigned to pronouns when they are invariable?
8. when adverbs in -mente are related to adjectives, some systems return the adjective as lemma, others the adverb form
9. derived words: the systems that analyse derivation are obviously different, in kind and in the information returned, from those that only handle inflection, but it seems that there is no standard encoding of derivation information either
10. for some hyphenated words that have more than one "head", there seems to be no consensus about how to represent their lemma(s)
11. is "indeterminate" a third value for gender, or does it mean both M and F?
12. how many tenses are there? are tense and mood different categories?
13. are abbreviations and acronyms in the realm of morphology?
14. should a morphological analyser return "capitalized", "upper case", "mixed case" and so on as part of its output? This question arose because not all morphological analysers actually return the input form, so this information may otherwise be lost.

Then, we have to mention the well-known hard problems in morphological processing: differences in MWE handling; differences in clitic and contraction handling; and differences in the classification of closed-class words. (In fact, the organization had initially stated that the classification of purely grammatical items was not interesting for an evaluation contest, since they could be listed once and for all as they concern a small number of words, but we still had to deal with them due to forms which had both grammatical and lexical interpretations.)

2.2.2 Absence of Standard

The cooperative compilation also led to the general acknowledgement that no standard was available for specially formatted areas (such as bibliographic citations, results of football matches, or references to laws); traditional spelling errors; foreign words; oral transcription; and random spelling errors.


It was agreed that for these cases one should simply count their number in the texts, and not use them for evaluation. However, this is obviously easier said than done, since they cannot be automatically identified.

2.2.3 Different Testing Points of View

The compilation process also showed that there were different views of what a golden standard should include, ranging from really "controversial" items to multiword expressions and punctuation (none of these had been foreseen by the organisers, although they were later accommodated). Some participants thought it would be enough to give one analysis (the one they wanted to test) and not all analyses of a form; some were intent on checking the coverage of particularly ambiguous forms; others wanted to see whether a rule-generated system would block a seemingly regular rule from applying. Some of these differences could and should be solved by clear compilation guidelines and neat examples – like "give all analyses of the form you selected"; "do not include purely grammatical items in the golden list candidates"; "comment or mark deviant items", etc. – but others are really interesting, since they reflect genuinely different system conceptions with correspondingly different ways of testing them.

Finally, it should be mentioned that, although the problem of rare forms and their possible relevance in the context of an evaluation of morphological analysers was debated, no solution has yet emerged.

3 Measuring

The next step of the trial was to process each system's three outputs and translate them into an internal evaluation format. Then, another set of data was gathered by extracting the data relative to the elements present in the golden list, for subsequent comparison of their classifications.

3.1 Tokenization Data

The rationale behind the three formats was our fear that too many tokenization differences would hinder easy comparison of running-text results ([4] reports 12–14% tokenization differences between two different systems for Portuguese). We provided the uul format in order to supply a common tokenization, but this did only half the job anyway, since some systems still separated our "units" into smaller parts, while others joined several of them. On the other hand, while the uts format prevented joining into longer units (since it is an alphabetically ordered list of all types in uul), it was unrealistic or even unfair in that it might provide the systems with tokens they wouldn't find if they were left in charge of tokenization. By providing the same texts in three forms, we wanted to check how significant the differences would be, and eventually choose which was best, or whether a combination (probably measuring different things in different formats) should be used.


Quantitative data, in terms of coverage and tokenization agreement, are displayed in Tables 2 and 3. Tokenization differences imply that, even if all systems returned exactly the same analyses for the forms they agreed upon, there would still be disagreement for 15.9% of the tokens, or 9.5% of the types.

Table 2. Tokenization overview, for the ts format. A token is considered common if it was found by the four systems; 8,480 tokens were common. One token can have several analyses

System   No. of tokens   No. of analyses   Common tokens
B        41,636          73,252            84.1%
C        41,433          76,455            91.6%
D        39,503          57,650            86.5%
E        41,197          69,619            86.2%

Table 3. Tokenization overview, for the uts format (one type only). A type is considered common if it was found by the four systems; 9,580 types were common

System   No. of types   No. of analyses   Common types
B        11,593         18,483            90.7%
C        10,896         18,742            92.0%
D        10,613         15,005            91.3%
E        10,745         13,487            90.5%

As to whether the performance of the morphological analysers varied significantly with text kind, Table 4 shows some first results concerning the analysers' performance per language variant (BP: Brazilian; PP: from Portugal).⁴ (Internal) coverage is defined as the percentage of the tokens identified by the system for which it could produce some analysis (as opposed to "unknown").

Table 4. Impact of variant on internal coverage and number of analyses, for the ts format, after handling clitics and contractions, and counting every (set of) grammatical analyses as one

System               B        C        D        E
Coverage PP          98.20%   99.95%   99.19%   94.94%
Analyses/form PP     1.38     1.67     1.26     1.44
Coverage BP          97.20%   99.85%   98.46%   96.40%
Analyses/form BP     1.38     1.65     1.26     1.42
Total coverage       97.58%   99.87%   98.82%   95.65%
Analyses/form gen.   1.39     1.62     1.26     1.43
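The internal coverage reported in Table 4, restated in symbols (our notation):

\[ \mathrm{coverage} = \frac{\#\{\text{tokens receiving at least one analysis}\}}{\#\{\text{tokens identified by the system}\}} \]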

⁴ There are well-known morphological and (regular) orthographical differences between the two variants. However, it is possible that most systems can cope with those. On the other hand, results can always be parametrized for variant-specific systems.

3.2 Comparison with the Golden List

For comparison with the golden list, one can use two different units: the analysis, and the form, which may in turn have several analyses. In Table 5, not including the two punctuation items, we first give the number of forms and analyses provided by each system, and then count the forms whose set of analyses is exactly like the golden list's (column 3), the individual analyses exactly like the golden list's (column 4), and the forms with an altogether different number of analyses (column 5), subdivided into more analyses and fewer analyses. Finally, we look at the set of PoS assigned to a given form (column 8). Later, we intend to use "meaningful combinations of features" [5]. It can in any case be reported that the highest number of differences concerned the lemma.

Table 5. System comparison with the golden list, using uts. Each form (ambiguation class) was compared as regards number of analyses and set of PoS classifications assigned

System        no. forms   no. analyses   equal forms   equal anal.   diff. no.   more   less   diff. in PoS set
golden list   198         343            198           343           0           0      0      0
system B      168         297            26            101           69          32     37     70
system C      178         315            93            192           50          24     26     45
system D      186         299            64            160           59          23     36     72
system E      182         274            83            145           67          14     53     83

Note that the "no. forms" column is necessary since not all forms were recognized as such by the competing systems (in addition, the mweuts data are not included, for lack of an easily comparable unit). Several scoring functions can then be applied.
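As a minimal sketch of the form-level bookkeeping behind Table 5, assuming each output is represented as a mapping from form to its set of analysis strings (the names are ours, not the trial's actual tooling):

```python
def compare_to_golden(golden, system):
    """golden, system: dict mapping each form to its set of analyses."""
    stats = {"forms": 0, "analyses": 0, "equal_forms": 0,
             "equal_analyses": 0, "diff_no": 0, "more": 0, "less": 0}
    for form, gold in golden.items():
        sys_analyses = system.get(form)
        if sys_analyses is None:
            continue                 # form not recognized as a unit by this system
        stats["forms"] += 1
        stats["analyses"] += len(sys_analyses)
        stats["equal_analyses"] += len(gold & sys_analyses)
        if sys_analyses == gold:
            stats["equal_forms"] += 1
        elif len(sys_analyses) != len(gold):
            stats["diff_no"] += 1    # altogether different number of analyses,
            stats["more" if len(sys_analyses) > len(gold) else "less"] += 1  # split into more/less
    return stats
```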

3.3 Coarse-Grained Comparison of the Output for All Tokens

We wanted to see whether it was possible to measure (blind) agreement among the systems based on all commonly recognized tokens. Table 6 presents an initial "agreement table", where the system on the left is considered correct and the system on top is measured against it, after a first homogenization procedure concerning contractions and clitics was applied and all grammatical analyses were reduced to one.

Table 6. System cross-comparison, based on uts. Each system in turn is used as golden standard. The cells contain the percentage of analyses of the top system which agree, out of all analyses of the system on the left. The field "other information" was not taken into account

System     B      C      D      E
system B   100%   68%    51%    47%
system C   69%    100%   57%    53%
system D   66%    68%    100%   53%
system E   64%    71%    59%    100%
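Under our reading of the caption (an analysis agrees when the other system produced the same analysis for the same token), each cell of Table 6 can be computed as sketched below; the code is illustrative, not the trial's implementation:

```python
def agreement(left, top):
    """Percentage of analyses on which `top` agrees with `left`, out of all
    analyses of `left` (the system provisionally taken as golden standard).
    left, top: dict mapping each common token to its set of analyses."""
    total = agree = 0
    for token, left_analyses in left.items():
        top_analyses = top.get(token, set())
        total += len(left_analyses)
        agree += len(left_analyses & top_analyses)
    return 100.0 * agree / total if total else 0.0
```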


These are the raw numbers. Although they may seem overwhelming, several steps can be taken to systematically reduce some of the differences, not only by further harmonizing the tokenization (of numbers, proper names and abbreviations, for instance), but especially by taking systematic conflicting points of view into consideration. In fact, thanks to the discussions among the participants on what should or should not be considered right, it was possible to trace several cases of theoretical disagreement, where one might define not one but several evaluation (and thus comparison) functions, depending on the theoretical standpoint. After giving the question thorough consideration, we decided to compute a minimum information function (minif) and a maximum information function (maxif), and to base our comparisons and evaluations on these. In practice, this means defining two internal evaluation formats and using more complex translation procedures.

Finally, several other evaluation methodologies are being investigated which, for lack of space, cannot be presented here and will hopefully be published elsewhere: comparison with already annotated corpora; comparison with the (automatic) output of the disambiguation track; finer measures of which lexicon items are more prone to disagreement; and so on. We are also investigating the semi-automatic compilation of a new golden list, dubbed the silver list, following a proposal at the trial meeting in Porto.

This paper should, in any case, contain enough material to give a glimpse both of the complexity of organizing the forthcoming Morfolimpíadas (see http://www.linguateca.pt/Morfolimpiadas/ for updated information) and of the many issues in the computational morphology of Portuguese when applied to real text. It should be emphasized that this paper (and the contest whose first results are here presented) was only possible through the cooperation (and effort) of many people. We are greatly indebted to all participants in the trial, as well as to the researchers and developers who contributed suggestions, discussion and criticism.

References

1. Hirschman, Lynette: The evolution of Evaluation: Lessons from the Message Understanding Conferences. Computer Speech and Language 12 (1998) 281–305
2. Santos, Diana, Rocha, Paulo: AvalON: uma iniciativa de avaliação conjunta para o português. In: Actas do XVIII Encontro da Associação Portuguesa de Linguística (Porto, 2–4 de Outubro de 2002) (2003)
3. Hausser, Roland (ed.): Linguistische Verifikation: Dokumentation zur Ersten Morpholympics 1994. Max Niemeyer Verlag, Tübingen (1996)
4. Santos, Diana, Bick, Eckhard: Providing Internet access to Portuguese corpora: the AC/DC project. In: Gavriladou et al. (eds.): Proceedings of LREC 2000 (2000) 205–210
5. Santos, Diana, Gasperin, Caroline: Evaluation of parsed corpora: experiments in user-transparent and user-visible evaluation. In: Rodríguez, M.G., Araujo, C.P.S. (eds.): Proceedings of LREC 2002 (2002) 597–604

Author Index

Aires, Rachel 227
Almeida, Sheila de 102
Aluísio, Sandra 110, 227
Amaral, Rui 219
Arim, Eva 70
Baptista, Jorge 235
Barbosa, Filipe 23, 57
Barbosa, Plínio Almeida 193
Barone, Dante Augusto Couto 62
Barros, Manuela 49
Bick, Eckhard 118
Branco, António Horta 167
Carvalho, Ariadne 102
Caseiro, Diamantino 9, 49
Coimbra, Rosa Lídia 66
Costa, Francisco 70
Costa, Luís 259
Dias, Altamir 171
Dias-da-Silva, Bento C. 78
Fantin, Lucien 102
Ferrari, Lilian 57
Franzen, Evandro 62
Freitas, Diamantino 40
Freitas, Tiago 70
Gasperin, Caroline Varaschin 243
Gonçalves, Carlos Alexandre 23
Greghi, Juliana Galvani 86
Hagen, Astrid 126
Hasegawa, Ricardo 179
Jesus, Luis M.T. 1
Lima, Carlos 18
Lima, Vera Lúcia Strube de 243, 251
Linhares, João Carlos 171
Madeira, Pedro 197
Mamede, Nuno J. 135, 175, 197
Manenti, Regiana 110
Marchi, Ana Raquel 110
Marquiafável, Vanessa 110
Martins, Ronaldo 179
Matos, David M. de 135
Meinedo, Hugo 9
Monserrat, Ruth 23
Moraes, Helio R. de 78
Moraes, Miryam de 94
Mota, Cristina 184
Moura, Pedro 184
Mourão, Márcio 197
Moutinho, Lurdes Castro 66
Neto, João 9
Neto, João José 94
Neto, João P. 126
Nunes, Graça 179
Nunes, Maria das Graças Volpe 210
Oliveira, Claudia 159
Oliveira, Lucélia de 110
Oliveira, Luís C. 31, 143, 189
Oliveira, Mirna F. de 78
Pardo, Thiago Alexandre Salgueiro 210
Paulo, Joana L. 135
Paulo, Sérgio 31, 49
Pelizzoni, Jorge 110
Pereira, Hugo 189
Pinto, Guilherme 23
Pizzato, Luiz Augusto Sangoi 251
Quaresma, Paulo 201, 227
Quintano, Luis 206
Resende, Fernando Gil 23, 57
Ribeiro, Ricardo 143
Rino, Lucia Helena Machado 210
Rocha, Paulo 259
Rodrigues, Irene 201, 206
Rosa, Maria Carlota 23
Santos, Diana 151, 227, 259
Shadle, Christine H. 1
Silva, Carlos 18
Silva, Gilberto 159
Silva, João Ricardo 167
Silva, Mário J. 227
Stolfi, Jorge 102
Tavares, Adriano 18
Teixeira, António 66
Teixeira, João Paulo 40
Teixeira, Pedro 189
Trancoso, Isabel 9, 49, 143, 219
Vale, Oto Araújo 98
Viana, Céu 49
Violaro, Fábio 193
Ynoguti, Carlos Alberto 193
Zavaglia, Claudia 86