Radio Engineering: From Software to Cognitive Radio (ISBN 9781848212961, 9781118602218)

Software radio ideally provides the opportunity to communicate with any radio communication standard by modifying only the software.


English. 384 pages. 2011.


Radio Engineering: From Software to Cognitive Radio

Edited by Jacques Palicot
Series Editor: Pierre-Noël Favennec

First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George's Road
London SW19 4EU
UK

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.iste.co.uk
www.wiley.com

© ISTE Ltd 2011. The rights of Jacques Palicot to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data
Software and cognitive radio engineering / edited by Jacques Palicot.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-84821-296-1
1. Cognitive radio networks. 2. Software radio. I. Palicot, Jacques.
TK5103.4815.S64 2011
621.384--dc23
2011024239

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library.
ISBN 978-1-84821-296-1

Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne.

Table of Contents

Foreword (Alain BRAVO) . . . xvii

Acknowledgments . . . xix

Introduction . . . xxi

PART 1. COGNITIVE RADIO . . . 1

Chapter 1. Introduction to Cognitive Radio . . . 3
Jacques PALICOT, Christophe MOY and Mérouane DEBBAH

1.1. Joseph Mitola's cognitive radio . . . 3
1.1.1. Definitions . . . 4
1.1.2. Joseph Mitola's vision of cognitive cycle . . . 5
1.2. Positioning . . . 7
1.2.1. Convergence between networks . . . 7
1.2.2. Generalized mobility without service interruption . . . 8
1.2.3. Distribution of intelligence . . . 9
1.3. Spectrum management . . . 9
1.3.1. Current situation . . . 9
1.3.2. Spectrum sharing . . . 10
1.3.2.1. Horizontal and vertical sharing . . . 11
1.3.2.2. Spectrum pooling . . . 11
1.3.2.3. Spectrum underlay technique . . . 13
1.3.2.4. Spectrum overlay technique . . . 16
1.4. A broader vision of CR . . . 17
1.4.1. Taking into account the global environment . . . 18
1.4.2. The sensorial radio bubble for CR . . . 19
1.5. Difficulties of the cognitive cycle . . . 21

Chapter 2. Cognitive Terminals Toward Cognitive Networks . . . 23
Romain COUILLET and Mérouane DEBBAH

2.1. Introduction . . . 23
2.2. Intelligent terminal . . . 25
2.2.1. Description . . . 26
2.2.2. Advantages . . . 30
2.2.3. Limitations . . . 31
2.3. Intelligent networks . . . 32
2.3.1. Description . . . 32
2.3.2. Advantages . . . 34
2.3.3. Limitations . . . 35
2.4. Toward a compromise . . . 35
2.4.1. Impact of the number of users . . . 38
2.4.2. Impact of spectral dimension . . . 39
2.5. Conclusion . . . 40

Chapter 3. Cognitive Radio Sensors . . . 43
Renaud SÉGUIER, Jacques PALICOT, Christophe MOY, Romain COUILLET and Mérouane DEBBAH

3.1. Lower layer sensors . . . 43
3.1.1. Hole detection sensor . . . 43
3.1.1.1. Matched filtering . . . 46
3.1.1.2. Detection . . . 47
3.1.1.3. Energy detection . . . 47
3.1.1.4. Collaborative detection . . . 48
3.1.2. Other sensors . . . 53
3.1.2.1. Recognition of channel bandwidth . . . 53
3.1.2.2. Single- and multicarrier detection . . . 55
3.1.2.3. Detection of spread spectrum type . . . 56
3.1.2.4. Other sensors of the lower layer . . . 56
3.2. Intermediate layer sensors . . . 57
3.2.1. Introduction . . . 57
3.2.2. Cognitive pilot channel . . . 58
3.2.3. Localization-based identification . . . 59
3.2.3.1. Geographical location-based systems synthesis . . . 59
3.2.3.2. Rights of database use and update . . . 61
3.2.4. Blind standard recognition sensor . . . 62
3.2.4.1. General description . . . 62
3.2.4.2. Stage 1: band adaptation . . . 63
3.2.4.3. Stage 2: analysis with lower layer sensors . . . 63
3.2.4.4. Stage 3: fusion . . . 64
3.2.5. Comparison of abovementioned three sensors for standard recognition . . . 64
3.3. Higher layer sensors . . . 64
3.3.1. Introduction . . . 64
3.3.2. Potential sensors . . . 65
3.3.3. Video sensor and compression . . . 69
3.3.3.1. Active appearance models . . . 71
3.3.3.2. A real scenario . . . 72
3.3.3.3. Different stages . . . 74
3.4. Conclusion . . . 75

Chapter 4. Decision Making and Learning . . . 77
Romain COUILLET, Mérouane DEBBAH, Hamidou TEMBINE, Wassim JOUINI and Christophe MOY

4.1. Introduction . . . 77
4.2. CR equipment: decision and/or learning . . . 78
4.2.1. Cognitive agent . . . 78
4.2.2. Conflicting objectives . . . 79
4.2.3. A modeling part in all approaches . . . 79
4.2.4. Decision making and learning: network equipment . . . 80
4.3. Decision design space . . . 81
4.3.1. Decision constraints . . . 81
4.3.1.1. Environmental constraints . . . 81
4.3.1.2. User constraints . . . 81
4.3.1.3. Equipment capacity constraints . . . 82
4.3.2. Cognitive radio design space . . . 82
4.4. Decision making and learning from the equipment's perspective . . . 82
4.4.1. A priori uncertainty measurements . . . 82
4.4.2. Bayesian techniques . . . 84
4.4.3. Reinforcement techniques: general case . . . 86
4.4.3.1. Bellman's equation . . . 87
4.4.3.2. Bellman's equation to reinforcement techniques . . . 88
4.4.3.3. Value update . . . 89
4.4.3.4. Iteration algorithm for policies . . . 90
4.4.3.5. Q-learning . . . 90
4.4.4. Reinforcement techniques: slot machine problem . . . 91
4.4.4.1. An introductory example: analogy with a slot machine . . . 91
4.4.4.2. Mathematical formalism and fundamental results . . . 92
4.4.4.3. Upper confidence bound (UCB) algorithms . . . 92
4.4.4.4. UCB1 algorithm . . . 93
4.4.4.5. UCBV algorithm . . . 94
4.4.4.6. Application example: opportunistic spectrum access . . . 94

4.4.5. Artificial intelligence . . . 95
4.5. Decision making and learning from network perspective: game theory . . . 96
4.5.1. Active or passive decision . . . 96
4.5.2. Techniques based on game theory . . . 97
4.5.2.1. Cournot's competition and best response . . . 98
4.5.2.2. Fictitious play . . . 98
4.5.2.3. Reinforcement strategy . . . 99
4.5.2.4. Boltzmann–Gibbs and coupled learning . . . 100
4.5.2.5. Imitation . . . 100
4.5.2.6. Learning in stochastic games . . . 101
4.6. Brief state of the art: classification of methods for dynamic configuration adaptation . . . 101
4.6.1. The expert approach . . . 101
4.6.2. Exploration-based decision making: genetic algorithms . . . 102
4.6.3. Learning approaches: joint exploration and exploitation . . . 103
4.7. Conclusion . . . 104

Chapter 5. Cognitive Cycle Management . . . 107
Christophe MOY and Jacques PALICOT

5.1. Introduction . . . 107
5.2. Cognitive radio equipment . . . 109
5.2.1. Composition of cognitive radio equipment . . . 109
5.2.2. A design proposal for CR equipment: HDCRAM . . . 110
5.2.3. HDCRAM and cognitive cycle . . . 114
5.2.4. HDCRAM levels . . . 116
5.2.4.1. Level L3 . . . 116
5.2.4.2. Level L2 . . . 117
5.2.4.3. Level L1 . . . 118
5.2.5. Deployment on a hardware platform . . . 118
5.2.6. Examples of intelligent decisions . . . 119
5.3. High-level design approach . . . 122
5.3.1. Unified modeling language (UML) design approach . . . 122
5.3.2. Metamodeling . . . 123
5.3.3. An executable metamodel . . . 124
5.3.4. Simulator of cognitive radio architecture . . . 124
5.4. HDCRAM's interfaces (APIs) . . . 127
5.4.1. Organization of classes of HDCRAM's metamodel . . . 127
5.4.1.1. Parent classes . . . 128
5.4.1.2. Child classes . . . 128
5.4.2. ReM APIs . . . 129
5.4.3. CRM's APIs . . . 131
5.4.4. Operators' APIs . . . 134
5.4.5. Example of deployment scenario in CR equipment . . . 135
5.5. Conclusion . . . 139

PART 2. SOFTWARE RADIO AS SUPPORT TECHNOLOGY . . . 141

Chapter 6. Introduction to Software Radio . . . 143
Jacques PALICOT and Christophe MOY

6.1. Introduction . . . 143
6.2. Generalities . . . 145
6.2.1. Definitions . . . 147
6.2.1.1. Ideal software radio . . . 147
6.2.1.2. Software-defined radio . . . 147
6.2.1.3. Other interesting classifications . . . 148
6.2.2. Interests and aftermath for telecom players . . . 148
6.2.2.1. Designer of terminals and access points . . . 148
6.2.2.2. Operator and service provider . . . 149
6.2.2.3. End user . . . 150
6.3. Major organizations of software radio . . . 150
6.3.1. Forums . . . 150
6.3.1.1. SDR Forum/Wireless Innovation Forum . . . 151
6.3.1.2. OMG . . . 151
6.3.2. Standardization organizations . . . 152
6.3.3. Regulators . . . 152
6.3.4. Some commercial and academic projects . . . 152
6.3.5. Military projects . . . 153
6.4. Hardware architectures . . . 153
6.4.1. Software-defined radio (ideal) . . . 153
6.4.2. Software-defined radio . . . 154
6.4.2.1. Direct conversion . . . 155
6.4.2.2. SR with low IF . . . 155
6.4.2.3. Undersampling . . . 157
6.4.2.4. Other architectures . . . 157
6.5. Conclusion . . . 159

Chapter 7. Transmitter/Receiver Analog Front End . . . 161
Renaud LOISON, Raphaël GILLARD, Yves LOUËT and Gilles TOURNEUR

7.1. Introduction . . . 161
7.2. Antennas . . . 161
7.2.1. Introduction . . . 161
7.2.2. For base stations . . . 162
7.2.2.1. Constraints on spatial discrimination . . . 162
7.2.2.2. Constraints on the spectral discrimination . . . 164
7.2.2.3. Sample topologies and concepts . . . 165
7.2.3. For mobile terminals . . . 167
7.2.3.1. Constraints . . . 167
7.2.3.2. Sample topologies and concepts . . . 170
7.3. Nonlinear amplification . . . 172
7.3.1. Introduction . . . 172
7.3.2. Characteristics of a power amplifier . . . 172
7.3.2.1. AM/AM and AM/PM characteristics . . . 173
7.3.2.2. The efficiency . . . 174
7.3.2.3. Input and output back-offs . . . 174
7.3.2.4. Memory effect . . . 175
7.3.3. Merit criteria of a power amplifier . . . 175
7.3.3.1. Intermodulation . . . 176
7.3.3.2. The C/I ratio . . . 178
7.3.3.3. Interception point . . . 178
7.3.3.4. Noise power ratio (NPR) . . . 178
7.3.3.5. Adjacent channel power ratio (ACPR) . . . 180
7.3.3.6. Error vector magnitude (EVM) . . . 180
7.3.4. Modeling of a memoryless power amplifier . . . 181
7.3.4.1. Input–output relationship of an amplifier . . . 181
7.3.4.2. The polynomial model . . . 182
7.3.4.3. The Saleh model . . . 182
7.3.4.4. The Rapp model . . . 182
7.3.5. Modeling of a power amplifier with memory . . . 183
7.3.5.1. The Saleh model . . . 183
7.3.5.2. The Volterra model . . . 184
7.3.5.3. The Wiener–Hammerstein model . . . 184
7.3.5.4. The polynomial model with memory . . . 185
7.4. Converters . . . 185
7.4.1. Introduction . . . 185
7.4.1.1. Requirements for the software radio . . . 186
7.4.2. Characteristics of the converters . . . 187
7.4.2.1. Quantization noise . . . 187
7.4.2.2. Thermal noise . . . 189
7.4.2.3. Sampling phase noise . . . 190
7.4.2.4. Measuring spectral purity: the spurious free dynamic range (SFDR) . . . 192
7.4.2.5. SFDR improvement by adding noise: the dither . . . 193
7.4.2.6. Switched capacitor converters: the kT/C noise . . . 194
7.4.2.7. Signal dynamics . . . 194
7.4.2.8. Blockers . . . 194
7.4.2.9. Linearity constraints . . . 195
7.4.2.10. Jammers . . . 195
7.4.2.11. Bandwidth and slew rate . . . 195
7.4.2.12. Consumption constraints: the figure of merit (FOM) . . . 195
7.4.2.13. Constraints on digital ports . . . 196
7.4.3. Digital to analog conversion architectures . . . 196
7.4.3.1. Current-source DAC architectures . . . 196
7.4.3.2. Switched capacitor DAC architecture . . . 197
7.4.3.3. Evolution of the DAC . . . 198
7.4.4. Analog to digital conversion architecture . . . 198
7.4.4.1. Flash structure . . . 199
7.4.4.2. Folding ADC . . . 199
7.4.4.3. Pipeline structure . . . 200
7.4.4.4. Successive approximation architecture . . . 201
7.4.4.5. Sigma-delta architecture . . . 201
7.4.4.6. Evolution of the ADC . . . 204
7.4.5. Summarizing the converters . . . 204
7.5. Conclusion . . . 205

Chapter 8. Transmitter/Receiver Digital Front End . . . 207
Jacques PALICOT, Daniel LE GUENNEC and Christophe MOY

8.1. Theoretical principles . . . 208
8.1.1. The universal transmitter/receiver . . . 208
8.2. DFE functions . . . 210
8.2.1. I/Q transposition in digital domain . . . 210
8.2.2. Sample rate conversion . . . 212
8.2.2.1. Frequency conversion by decimation filter . . . 213
8.2.3. Channelization . . . 223
8.2.4. DFE from a practical point of view . . . 224
8.2.4.1. Low-pass filtering . . . 225
8.2.4.2. Cheapest solution in terms of computational cost . . . 226
8.2.5. Multichannel DFE . . . 227
8.3. Synchronization . . . 229
8.3.1. Introduction . . . 229
8.3.2. Symbol timing recovery . . . 229
8.3.2.1. Timing phase recovery . . . 230
8.3.2.2. Phase error detector . . . 230
8.3.2.3. Phase-locked loop (PLL) . . . 230
8.3.3. Carrier phase recovery . . . 230
8.3.3.1. DA estimation . . . 232
8.3.3.2. NDA estimation . . . 233
8.3.3.3. Phase recovery . . . 233
8.3.3.4. Direct structures: DA and NDA estimator . . . 233
8.3.3.5. Loop structures . . . 234
8.3.3.6. Phase error detector . . . 234
8.3.3.7. The loop filter . . . 236
8.3.4. Synchronization in the software radio context . . . 237
8.3.4.1. Analysis of the dynamic behavior of the loop with constellation change . . . 239
8.3.4.2. Reliability and detection of constellation . . . 241
8.4. The CORDIC algorithm . . . 243
8.4.1. Principle of the CORDIC algorithm . . . 243
8.4.2. Operation of the CORDIC operator . . . 245
8.4.2.1. Vector mode . . . 245
8.4.2.2. Rotation mode . . . 245
8.5. Conclusion . . . 246

Chapter 9. Processing of Nonlinearities . . . 249
Yves LOUËT and Jacques PALICOT

9.1. Introduction . . . 249
9.2. Crest factor of the signals to be amplified . . . 250
9.2.1. Characteristic parameters of the crest factor . . . 250
9.2.1.1. Crest factor definition . . . 250
9.2.1.2. Definition of continuous and finite PR: PRc,f . . . 250
9.2.1.3. Definition of discrete and finite PR: PRd,f . . . 251
9.2.2. Relationship with the literature notations . . . 251
9.2.2.1. PAPR case . . . 251
9.2.2.2. Relationship between PAPR and PMEPR . . . 251
9.2.3. Distribution function of PR . . . 252
9.3. Variation of crest factor in different contexts . . . 252
9.3.1. Single-carrier signals' context . . . 252
9.3.1.1. Influence of Nyquist filter . . . 252
9.3.1.2. Influence of square-root Nyquist filter . . . 253
9.3.2. Multicarrier signals' context . . . 255
9.3.3. Software radio signals' context . . . 256
9.3.4. Context of cognitive radio . . . 260
9.3.4.1. Introduction . . . 260
9.3.4.2. Variations in crest factor on spectrum access by using carrier-by-carrier vision . . . 262
9.3.4.3. Influence of spectrum access on PAPR in the CR context . . . 263
9.4. Methods for reducing nonlinearities . . . 264
9.4.1. Introduction . . . 264
9.4.2. PAPR reduction methods . . . 266
9.4.2.1. Methods with modifications of the receiver . . . 266
9.4.2.2. Methods without modifying the receiver . . . 267
9.4.2.3. Synthesis of the PAPR reduction methods . . . 267
9.4.3. Methods working on the linearity of the amplifier . . . 268
9.4.3.1. Methods to change the function of amplification . . . 268
9.4.3.2. Methods without changing the amplification function . . . 268
9.5. Conclusion . . . 269

Chapter 10. Methodology and Tools . . . 271
Pierre LERAY, Christophe MOY and Sufi Tabassum GUL

10.1. Introduction . . . 271
10.2. Methods to identify common operations . . . 273
10.2.1. Parametrization approaches . . . 273
10.2.2. Pragmatic approach of parametrization to design multistandard SR . . . 274
10.2.2.1. Common functions . . . 274
10.2.2.2. Common operators . . . 274
10.2.3. Theoretical approach to design multistandard SR by common operators . . . 276
10.3. Methods and design tools . . . 280
10.3.1. Co-design methods . . . 280
10.3.1.1. Simulink® based approach . . . 280
10.3.1.2. An example: SynDEx . . . 281
10.3.1.3. GNU Radio . . . 287
10.3.2. MDA approaches . . . 288
10.3.2.1. Introduction . . . 288
10.3.2.2. MOPCOM methodology . . . 289
10.3.2.3. GASPARD model . . . 295
10.4. Conclusion . . . 297

Chapter 11. Implementation Platforms . . . 299
Amor NAFKHA, Pierre LERAY and Christophe MOY

11.1. Introduction . . . 299
11.2. Software radio platform . . . 299
11.3. Hardware architectures . . . 300
11.3.1. Dedicated circuits . . . 300
11.3.2. Processors . . . 301
11.3.2.1. CISC architecture . . . 302
11.3.2.2. RISC architecture . . . 302
11.3.2.3. Superscalar architecture . . . 303
11.3.2.4. VLIW architecture . . . 303
11.3.2.5. Vector architecture . . . 304
11.3.3. Reconfigurable architecture . . . 304
11.3.3.1. Fine-grain architecture . . . 305
11.3.3.2. Coarse-grain architecture . . . 307
11.4. Characterization of the implementation platform . . . 309
11.4.1. Flexibility/reconfiguration . . . 309
11.4.2. Performances . . . 311
11.4.3. Power consumption . . . 312
11.5. Qualitative assessment . . . 312
11.6. Architectures of software layers . . . 313
11.6.1. The SCA software architecture . . . 314
11.6.2. Intermediate software layer: ALOE . . . 314
11.6.3. Software architecture for reconfiguration management: HDReM . . . 315
11.7. Some platform examples . . . 317
11.7.1. The USRP platform . . . 317
11.7.2. OpenAirInterface . . . 318
11.7.3. Kansas University Agile Radio . . . 319
11.7.4. Berkeley Cognitive Radio Platform . . . 320
11.8. Conclusion . . . 320

Chapter 12. General Conclusion and Perspectives . . . 323

12.1. General conclusion . . . 323
12.2. Perspectives . . . 323
12.2.1. CR and sustainable development . . . 323
12.2.1.1. A spontaneous communication protocol . . . 324
12.2.2. Toward a collective intelligence . . . 324
12.2.3. Toward a fully cognitive radio . . . 324

Appendix A. To Learn More . . . 327

A.1. The special issues of journals . . . 327
A.1.1. SR domain in general . . . 327
A.1.2. CR domain . . . 328
A.2. Specialized conferences . . . 330
A.3. Some reference books . . . 330
A.3.1. SR domain in general . . . 330
A.3.2. CR domain in general . . . 331

Appendix B. SR and CR Projects . . . 333

B.1. European projects . . . 333
B.2. French projects . . . 336

Appendix C. International Activity in Standardization and Forums . . . . . . 339

C.1. The IEEE 802.22 Standard . . . . . . . . . . . . . . . . . . . . . . . 339
C.2. SCC41 standardization (Standards Coordinating Committee 41, dynamic spectrum access networks) . . . 339
C.3. P1900.1 standardization (Working group on terminology and concepts for next-generation radio systems and spectrum management) . . . 340
C.4. P1900.2 standardization (Working group on recommended practice for interference coexistence and analysis) . . . 340
C.5. P1900.3 standardization (Working group on recommended practice for conformance evaluation of Software Defined Radio software modules) . . . 340
C.6. P1900.4 standardization (Working group on architectural building blocks enabling network–device distributed decision making for optimized radio resource usage in heterogeneous wireless access networks) . . . 340
C.7. P1900.5 standardization (Working group on policy language and policy architectures for managing cognitive radio for dynamic spectrum access applications) . . . 341
C.8. P1900.6 standardization (Working group on spectrum sensing interfaces and data structures for dynamic spectrum access and other advanced radio communications systems) . . . 341
C.9. ITU-R standards (Question ITU-R 241-1/5: cognitive radio systems in mobile service) . . . 341
C.10. ETSI technical committee on reconfigurable radio systems (TC-RRS) standardization . . . 342
C.11. Forum: Wireless Innovation Forum (formerly the SDR Forum) . . . 342
C.12. Forum: Wireless World Research Forum . . . 343

Appendix D. Research at European and International Levels . . . . . . . . . 345

D.1. Research centers . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

Acronyms and Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . 347

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355

List of Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

Foreword

It is not surprising for an engineering school that started its TSF (wireless telegraphy) courses in 1903 with Commander Ferrié to be interested in radio communications in their most advanced scientific forms. Since then, the uses and services, now collectively linked to "mobility", have exploded. Today there is little prospect of a society without anytime, anywhere availability of simple, affordable access to a reliable, seamless, wireless infrastructure. This already impressive list of functionality can easily be modified and extended to meet every reader's individual expectations.

However, as noted in this book, radio as a resource is limited, even if it is infinitely renewable, because of the energy needed for wave generation. In fact, wireless communication systems themselves (terminals, equipment, and networks) and their connections with wired infrastructure are highly complex systems. Their design must draw on all the engineering sciences: modeling and simulation, content and computer systems, interactions and protocols, hardware–software integration, networking and communications, etc.

This book is titled Radio Engineering: From Software Radio to Cognitive Radio. It looks at the astounding move from mathematical formalism to hardware/software architectures, and tackles issues of methodology, tools, and implementation platforms, as well as standardization.


This broad exploration of radio communications, extending to their projection onto sustainable development problems, illustrates the teaching and research ambitions of Supélec in information sciences, energy, and systems. I hope that the reader will find here a wealth of answers and ideas.

Alain Bravo
Director General, Supélec
Gif-sur-Yvette, France
July 2011

Acknowledgments

Writing a collective book with more than 10 authors was a very rewarding experience. This book is the result, and we hope that it will give readers all the keys to enter the new areas of software radio and cognitive radio.

The authors wish to thank Pierre-Noël Favennec, series editor of Telecoms and Optics at ISTE, for offering us the opportunity to write this book. We are also grateful to Alain Bravo, director general of Supélec, who honored us by writing the foreword of this book.

Our thanks go first to all our colleagues who participated in this scientific endeavor and whose works have largely served as base material for this book. Alphabetically, we thank:
– PhD colleagues: Laurent Alaus, Ali Al Gouwayel, Jean-Philippe Delahaye, Mohamed Ghozzi, Loïg Godard, Sufi Tabassum Gul, Rachid Hachemani, Sajjad Hussain, Stéphane Lecomte, Sylvain Le Gallou, Adel Metref, Hongzhi Wang, and Sidkiéta Zabré;
– Post-doctoral colleagues: Julien Delorme, Thi Minh Hien Ngo, and Virgilio Rodriguez.

We are indebted to all the reviewers who kindly agreed to spare their precious time to review this book and whose valuable comments significantly improved its quality.


Alphabetically, we thank: Eric Bayet, Moïse Djoko Kouam, Damien Ernst, Guy Gognat, Yann Le Guillou, Jérome Martin, Markus Muck, Dominique Noguet, Dominique Nussbaum, Christian Roland, Ronan Sauleau, and Martine Villegas.

The authors would like to express their special thanks to Sufi Tabassum Gul for his strong support with the English.

The Authors
Rennes, France
July 2011

Introduction

This book presents an inevitable evolution in today's wireless communication world: "cognitive radio" (CR). This new concept is based on the work of Joseph Mitola published in 1999 and 2000 [MIT 99a, MIT 00a].

From information theory to cognitive radio

In 1948, through a single 55-page publication, Claude Elwood Shannon [SHA 48] radically changed the vision of modern telecommunications by inventing a new theory known as information theory. In this publication, Shannon provided, in particular, the answer to a fundamental telecommunications problem: how much information can be transferred between two parties communicating in a given environment at a particular time, such that each receives the other's information without errors?

Ever since Shannon formulated this answer, we have seen around us the development of phones, WiFi cards, etc., capable of transmitting more and more data per second. However, we must not delude ourselves, and it is in this that Shannon's work is so fundamental: the rate of information transmitted without error is inherently limited by the communication environment, the frequency band (commonly called bandwidth) used, and the power of the transmitted signals. Hence, if any of these three fundamental resources reaches its limit, we cannot transmit information at a higher rate.

Ten years ago, Joseph Mitola saw that a revolution in telecommunications was bound to take place. Mitola's principal insight was that the rapid and poorly controlled use of telecommunication resources (especially bandwidth) had led to an enormous waste of these resources. The simplest example of this wastage is the functioning mode of the global system for mobile communication (GSM) standard (also known as 2G): it permits eight users of the mobile telephone network to connect simultaneously to a base station of their service provider.
When a single user is connected to the base station, however, only one-eighth of the station's total resources is used. The static nature of the current communication protocols raises the question of how to make the wireless


domain more "flexible". From this key insight, directly affecting the sustainability of modern telecommunications, the field of CR was born; it tends to make communication devices more autonomous, capable of deciding which resources to use and how to use them effectively. For example, if no GSM network coverage is present in a room of a house, then why not take advantage of a WiFi access point? The goal is that a communication system can make such decisions autonomously. It is even highly desirable that our telecommunication devices can perform this reasoning intelligently, just as a human being would when faced with the same situation.

From software radio to cognitive radio

In 1995, Joseph Mitola proposed a new concept entitled "software radio" (SR) [MIT 95]. Ideally, this concept permits equipment to communicate with any radio communication standard without changing any hardware component, only by modifying the embedded software. This technology, which may appear very simple at first glance, not only introduces many new advantages but also raises numerous technological challenges. This technology and its related problems are addressed in Part 2 of this book.

Mitola realized the need to put intelligence simultaneously into both the network and the equipment to satisfy user needs and resource constraints, ultimately resulting in an increase in spectral efficiency. This is why he proposed the idea of CR [MIT 99a]. This intelligence enables the equipment to choose the best conditions to meet its communication needs. The choice ideally implies a real-time change of transmission parameters, even a change of standard. To do this optimally, Mitola showed that the change in real time should be realized by an SR. He concluded that CR would be more effective if supported by SR technology. Our orientation in this book follows exactly this logic.
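As a brief aside, the Shannon limit recalled earlier, which bounds the error-free rate by bandwidth and signal-to-noise ratio, can be illustrated numerically. The channel width and SNR below are arbitrary illustrative values, not figures from the book:

```python
import math

def shannon_capacity(bandwidth_hz: float, signal_w: float, noise_w: float) -> float:
    """Maximum error-free rate C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + signal_w / noise_w)

# A 200 kHz GSM-like channel at 20 dB SNR (illustrative numbers only):
snr_db = 20.0
snr_linear = 10 ** (snr_db / 10)          # 20 dB -> 100 in linear scale
c = shannon_capacity(200e3, snr_linear, 1.0)
print(f"capacity = {c / 1e6:.3f} Mbit/s")  # ~1.33 Mbit/s
```

Doubling the bandwidth doubles the capacity, while doubling the power only adds one bit per symbol at high SNR, which is why underused bandwidth is the resource CR targets first.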
Part 1 will focus on CR, while Part 2 will discuss SR as a support technology for the concept of CR.

Book structure

The subject of this book is very broad. We do not claim to examine exhaustively all the aspects associated with it. However, a radio designer or a researcher in the domain will find here the information necessary to understand the fundamental concepts and to identify other literature sources that can complete the explanations contained in this book.

This book contains two main parts. Part 1, entitled "Cognitive Radio", includes five chapters. It illustrates the expectations and the challenges related to the new concept of CR.

Chapter 1 provides an introduction to CR, starting with the need for optimal spectrum management. Then, a set of definitions is provided. The cognitive cycle


based on the work of Mitola is described. The concept of opportunistic radio is presented, and various ways of achieving such an opportunistic radio are discussed. A vision more general than the classical spectral vision is also proposed, notably through the notion of a "sensorial radio bubble" and a reduced "three-layer model" obtained by regrouping the layers of the open system interconnection (OSI) model. Finally, the chapter ends with a (non-exhaustive) survey of current national and international collaborative projects and standardization activities.

Chapter 2 addresses the question of the distribution of intelligence between the network and the terminal equipment. This discussion is far from closed. Clearly, behind this distribution lie the industrial and economic interests of the different players. Hence, our conclusion is that the intelligence must reside in both networks and terminals at the same time.

Chapter 3 focuses on the "sensing" function of the cognitive cycle. The sensors of the three layers of the model are presented and some are described in detail. In particular, the role of the "hole detection sensor" in the spectrum, considered in the literature as the main CR sensor, is explained. This chapter thus broadens the classical understanding of sensors in CR, which typically covers only the physical layer sensors.

Chapter 4 points out that, with all the information provided by sensors of any kind, as explained in Chapter 3, and thanks to known behavior rules or rules learned from past experience, the equipment, in accordance with the cognitive cycle, must make decisions. This aspect of CR is discussed extensively in the literature, resulting in a large number of potential solutions, each with its advantages and disadvantages. These are described in this chapter. The necessity of learning is emphasized and methods that take learning into account are presented.

Chapter 5 presents the need to manage the intelligent cycle.
This management is crucial for the proper functioning of the equipment, both at the design stage and during real-time operation. After explaining the need to manage the cycle, we present in detail a solution known as hierarchical and distributed cognitive radio architecture management (HDCRAM).

Part 2, entitled "Software Radio as Support Technology", includes six chapters. Although the concept of SR dates back to 1995, and despite the rapid technological advances of the past 20 years, a large number of difficulties remain to be overcome before realizing an SR that conforms to the original concept. This part illustrates all these difficulties.

Chapter 6 describes the SR in its historical and economic contexts. It presents the ideal concept and the resulting architecture. After emphasizing the difficulties of implementing such an architecture, the concept of "software-defined radio"


(SDR) is proposed, with different possible architectures. Despite the reduced difficulties encountered in the SDR architecture, numerous problems remain to be solved. These are the subject of the subsequent chapters.

Chapter 7, starting from the SDR architecture, in which an analog part is retained, deals with the "transmitter/receiver analog front end" (AFE). In this chapter, three problems, which follow the sequential order of signal processing, are reviewed:
– the first concerns the antennas, which must be wideband or multiband and/or highly directive depending on the application context;
– the second deals with the amplification stages. This difficulty is often neglected in the literature, but from our point of view it is fundamental. Indeed, the signals processed by the amplifiers are the sum of a large number of modulated carriers and therefore present a large variation in power;
– the third concerns the problems of analog-to-digital and digital-to-analog conversion. This was the first problem identified at the beginning of the work on SR: the levels to be sampled are such that no circuit can perform this conversion over a very wide band of several GHz, for example. This problem has generated much research activity, both in signal processing and in electronics. An update on this activity and the most promising solutions is presented in this chapter.

Chapter 8 deals with the second part of the SDR architecture, which follows the analog-to-digital conversion stage and precedes the digital-to-analog conversion stage, and in which a number of functions previously processed in analog are processed digitally in the SR context. This part is called the "transmitter/receiver digital front end" (DFE). In this chapter, functions such as (de)modulation, filtering of the desired channels, and clock synchronization with the sampling clock are described.
A particular point is made about the synchronization function, which in a CR/SR context faces new constraints, more difficult than in conventional digital reception.

Chapter 9, based on the nonlinear amplification problem described in Chapter 7, proposes a theoretical analysis of the signals involved and identifies a set of methods to reduce the significant power variation of the signals to be amplified. In the context of dynamic spectrum access, the methods of adding a signal over reserved carriers are preferred and detailed.

Chapter 10 explains the need to take into account the real-time (re)configuration constraints imposed by the SR, starting from the design phase of the equipment. The advantage of reducing the computational complexity of processing by factorization techniques is emphasized. The technique of parameterization using common operators is also presented in this chapter. To provide a high-level design environment that is as general as possible, the model-driven architecture (MDA) approach is presented as a pertinent solution. In this framework, several design flow solutions are described.


Chapter 11, the last chapter of this book, describes the practical reality of SR implementation. Data must be processed at different frequencies, and the nature of this processing requires the use of heterogeneous platforms. As with the management of the intelligent cycle described in Chapter 5, the need for (re)configuration management is highlighted and solutions for configuration management are presented. The real-time (re)configuration requirement and the constraints it generates affect the solutions on the different platforms. The partial (re)configuration of field-programmable gate arrays (FPGAs) is described as an answer to this constraint. Finally, some existing platforms with SR capabilities are presented.

We conclude this book by presenting an application, the green cognitive radio, which aims at reducing energy consumption and electromagnetic pollution, and possible evolutions such as collective intelligence, spontaneous communication, and a radio with aptitudes approaching those of a fully cognitive radio.

PART 1

Cognitive Radio

Chapter 1

Introduction to Cognitive Radio

1.1. Joseph Mitola's cognitive radio

In 1995, Joseph Mitola introduced the new concept of software radio (SR) [MIT 95]. Soon after, during his thesis [MIT 00a], he became interested in the efficient use of the spectrum. He noticed that the spectrum was very inefficiently used and that a large part of it was underutilized. He concluded that by locally managing the spectrum intelligently, its use could be significantly increased. Mitola realized the need to put intelligence simultaneously into both the network and the equipment, to satisfy both the user needs and resource constraints, ultimately resulting in an increase in spectral efficiency. This is why he proposed the idea of cognitive radio (CR) [MIT 99a]. He demonstrated that CR would be more efficient if combined with SR technology.

Let us illustrate the CR approach by drawing a simple analogy between radio systems carrying information and land transportation of goods. The railroad is a physical link between two stations. The train on which the goods are to be carried must follow the track and cannot choose other "routes and/or timings". The analogy then is that a conventional radio communication link on the global system for mobile communication (GSM) standard has no other option but to follow the track (given frequency and modulation) to ensure that the information reaches the receiver. When the same goods are transported by the road network, a whole infrastructure joining the same two points with no time constraints exists. If we follow this analogy, the SR acts as the infrastructure equivalent to the road network, offering possible choices (frequency, modulation, etc.) for transmitting information from the sender

Chapter written by Jacques PALICOT, Christophe MOY and Mérouane DEBBAH.


to the receiver. On this network, the driver will be free to choose the route on the basis of different criteria, e.g. travel time, distance, toll costs, expected traffic, departure time, etc. Similarly, the CR will allow the terminal, i.e. the driver, to move in the radio infrastructure (thanks to SR technology), having many more choices at its disposal, mainly because of its environmental awareness.

1.1.1. Definitions

In his thesis, Mitola wrote: "The term cognitive radio identifies the point at which wireless personal digital assistants (PDAs) and the related networks are sufficiently computationally intelligent about radio resources and related computer-to-computer communications so that they can: 1) detect user communications needs as a function of usage context; and 2) provide radio resources and wireless services most appropriate to these needs".

Another definition has more recently been given by the Federal Communications Commission (FCC) of the United States [FCC 05]. We think that this definition is very limited in scope, but it is largely adopted by the radio domain community: "A cognitive radio is a radio that can change its transmitter parameters based on interaction with the environment in which it operates". We interpret this definition by the FCC as: "A CR is a radio that can change its transmission parameters while functioning, through interactions with the environment in which it operates". New standards, such as WiMAX, offer a self-adaptation of physical layer parameters based on the radio frequency environment. They are the first examples of CR complying with the definition of the FCC. But as we will see in this chapter, and more specifically in section 1.4, the broader definition originally given by Mitola is more suitable to describe CR in our context.
Following the important work in standardization (see Appendix C), various organizations have proposed their own definitions, which are listed below:
– During the IEEE standardization process, the group P1900.1 proposed the following definition: "Cognitive radio is a type of radio in which communication systems are aware of both their environment and their internal state. They can make decisions about their radio operating behavior based on that information and predefined objectives".
– ETSI, the European standardization organization, meanwhile proposed: "Cognitive Radio System is a radio system, which has the following capabilities:
- to obtain knowledge of the radio operational environment and established policies and to monitor usage patterns and users' needs;


- to dynamically and autonomously adjust its operational parameters and protocols according to this knowledge in order to achieve predefined objectives, e.g. more efficient utilization of the spectrum;
- to learn from the results of its actions to further improve its performance".
– The International Telecommunication Union (ITU) has approved the following definition: "Cognitive Radio System (CRS) is a radio system employing technology that allows the system to obtain knowledge of its operational and geographical environment, established policies and its internal state; to dynamically and autonomously adjust its operational parameters and protocols according to its obtained knowledge in order to achieve predefined objectives; and to learn from the results obtained".

We observe that the latter definitions are consistent with the broad definition initially given by Mitola. We will explain his vision in section 1.4.

1.1.2. Joseph Mitola's vision of the cognitive cycle

In his thesis, Mitola conceptualized and theorized the concepts then in vogue in the world of radio communications. We can summarize these concepts with the following points:
– broad-sense adaptation to the environment;
– intelligence in the network and the terminal;
– independence of the terminal with respect to the network and the operator;
– independence of the user with respect to the technique.

He proposed this concept to meet the need for spectrum management and the optimization of its use (see the analogy with chess, section 1.3.2). His concept is based on the fact that a CR system must follow the cognitive cycle shown in Figure 1.1. As a result, this concept uses the data provided by sensors at all levels (spectral occupation, position, velocity, luminosity, temperature, bio-sensors (eye, ear, fingers, etc.)) to obtain adequate information about the following elements:
– radio interface;
– propagation;
– network;
– protocols;
– security (at all levels);
– user requirements.
It is, therefore, a question of transforming the physical world into an intelligent computerized platform with object-based distributed intelligence.


Figure 1.1. Cognitive cycle proposed by Mitola in [MIT 00a]

Following the description given in Figure 1.1, a CR system can adapt its behavior (functionality) according to its environment primarily thanks to:
– the analytical capabilities of its sensors. In our vision, the notion of a sensor is very broad, as we will see later. It corresponds to providing information, via all possible means, to the cognitive engine that then makes the decisions. This information comes from actual physical sensors, from signal processing algorithms, from information exchanges with different nodes of a network, etc. This notion of sensors, as well as the precise description of some of them, is the subject of Chapter 3;
– intelligence making it possible to make appropriate decisions based on training and/or knowledge bases. The knowledge used in decision making, like the information provided by sensors, is a very broad concept that ranges from parameters provided by the sensors to technico-economic considerations via the regulatory rules for spectrum use. Decision making is described in Chapter 4;
– auto-reconfiguration capabilities for modifying its operation, which are provided by the support technology, i.e. SR technology. This is described in Part 2 of this book, which deals precisely with SR as the support technology.

A simplified schematic diagram showing this operation is depicted in Figure 1.2 [GOD 09].

Introduction to Cognitive Radio

7

Figure 1.2. Simplified three-stage cognitive cycle (information capturing, decision making, adaptation)
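The three stages of Figure 1.2 can be read as a control loop. The following is a minimal sketch of such a loop; all class and function names here are our own illustrative choices, not an API from the book:

```python
from typing import Any, Callable, Dict

class SimpleCognitiveCycle:
    """Minimal information-capturing -> decision-making -> adaptation loop."""

    def __init__(self,
                 sense: Callable[[], Dict[str, Any]],
                 decide: Callable[[Dict[str, Any]], Dict[str, Any]],
                 adapt: Callable[[Dict[str, Any]], None]) -> None:
        self.sense, self.decide, self.adapt = sense, decide, adapt

    def step(self) -> Dict[str, Any]:
        metrics = self.sense()          # information capturing (sensors)
        config = self.decide(metrics)   # decision making (cognitive engine)
        self.adapt(config)              # adaptation (SR reconfiguration)
        return config

# Toy usage: fall back to a more robust modulation when interference is high.
state = {"modulation": "64QAM"}
cycle = SimpleCognitiveCycle(
    sense=lambda: {"interference": 0.8},
    decide=lambda m: {"modulation": "QPSK" if m["interference"] > 0.5 else "64QAM"},
    adapt=lambda cfg: state.update(cfg),
)
cycle.step()
print(state)  # {'modulation': 'QPSK'}
```

The point of the structure is that each stage is pluggable: richer sensors (Chapter 3), smarter decision engines (Chapter 4), and real SR reconfiguration back ends (Part 2) can replace the toy lambdas without changing the loop itself.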

1.2. Positioning

The domain of telecommunications networks, and of the services offered by these networks, is the subject of rapidly changing technologies and architectures. New technologies have led to a spectacular increase in capacity that has enabled, in particular, the mass distribution of multimedia content. Similarly, new architectures have fostered a move toward a global convergence of networks and services.

Currently, two solutions are proposed to provide this increase in capacity. The first consists of developing a new network or 4G standard offering this capacity. This solution seems less realistic in view of the investment required to deploy a new network: very few operators will take the risk of embarking on such an adventure before having made their previous investments in 2G and 3G profitable. The second solution is to use all the opportunities offered by the communication networks already deployed, thanks to the notion of convergence. We may expect that the future reality will be a mix of both solutions.

1.2.1. Convergence between networks

Currently, two approaches are proposed for realizing the notion of convergence between networks:
– Cooperative networks: in this approach, all network entities have an agreement to share the spectrum resource and, likewise, to exchange traffic. This requires a common network configuration (at least on certain segments of the network), and thus deep cooperation between all the actors. This approach is very difficult from this point of view, because the different stakeholders are market driven and not necessarily willing to cooperate.


– Adaptive networks: in this approach, all network entities adapt their behavior according to the environment. This obviously makes it possible to overcome the difficulties related to the cooperation required by the previous approach. This approach does not prohibit local cooperation between different network entities.

1.2.2. Generalized mobility without service interruption

Throughout this book, we consider CR in the context of generalized mobility or handover1, as illustrated in Figure 1.3, i.e. a choice is to be made among a predetermined set of wireless communication standards that are present at a certain place and/or time of communication.

Figure 1.3. Generalized mobility without service interruption
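As an illustration of such a choice among available standards, a terminal might score each reachable network against weighted user criteria. The candidate networks, metrics, and weights below are purely hypothetical and serve only to make the selection idea concrete:

```python
def score(network: dict, weights: dict) -> float:
    """Weighted sum of normalized criteria (higher is better)."""
    return sum(weights[k] * network[k] for k in weights)

# Hypothetical candidates, all metrics already normalized to [0, 1]:
candidates = {
    "GSM":  {"coverage": 0.9, "throughput": 0.1, "cost": 0.8},
    "WiFi": {"coverage": 0.4, "throughput": 0.9, "cost": 1.0},
    "3G":   {"coverage": 0.7, "throughput": 0.6, "cost": 0.5},
}
# A user who values throughput above coverage and cost:
weights = {"coverage": 0.2, "throughput": 0.5, "cost": 0.3}

best = max(candidates, key=lambda name: score(candidates[name], weights))
print(best)  # WiFi
```

A vertical handover decision engine would re-evaluate such a score as the terminal moves, triggering a transfer whenever a different network wins, ideally without interrupting the ongoing service.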

This concept, initially proposed in the context of [WWR 10], is now commonly adopted in the radio communications community. It entails a dual mobility: a conventional geographical mobility requiring an automatic transfer between the cells of a cellular system, called horizontal mobility, and another mobility between networks, standards, and services. The latter is offered by automatic transfer and is called vertical mobility. This must be realized by avoiding any communication

1 The term “handover” refers to the automatic transfer between cells of a mobile cellular network. It covers the entire range of techniques that permit movement from one cell to another without interrupting communication.


breaks for the user. It is mainly with the advent of SR technologies that this kind of transfer has become really imaginable, because a reconfiguration of the terminal is often necessary.

REMARK.– This generalized notion of mobility becomes more complicated when a simultaneous connection to multiple standards is desired. This not only complicates the transfer between standards but also generates very strict constraints at the level of SR technology, as will be explained in Chapter 6.

1.2.3. Distribution of intelligence

The notion of convergence, as described earlier, implies the presence of a distributed intelligence both in the network and in the terminal. From this point of view, a debate exists as to whether this intelligence must be principally localized in the network or in the terminal:
– Network-centric approach: this approach has the advantage of reducing the complexity of the terminal, which is particularly important given the limited embedded processing capabilities of the terminal. In addition, the network is also the place that centralizes most of the information needed to provide a global optimization.
– Terminal-centric approach: the terminal is best placed to know its environment as well as its operational conditions. It can also benefit from information coming from the network.

Whatever the approach considered, another dimension must be taken into account, i.e. whether the processing will be performed in a centralized/cooperative way or not. A detailed discussion of these two approaches and their implementation issues is presented in Chapter 2. As a first answer, we think that a combination of both approaches will certainly be the most appropriate solution.

1.3. Spectrum management

1.3.1. Current situation

Contrary to the generally perceived idea, the spectrum is a public resource; only its use is private. The spectrum is a finite natural resource. Indeed, a frequency exists only because it can be generated.
From this point of view, it is necessary to have a sufficient quantity of energy to generate the frequency and to diffuse it. We can therefore speak of a finite resource, since it depends on finite energy reserves. This finite resource can, however, be used indefinitely (as long as the energy resource is available to generate the electromagnetic wave). The current rules for frequency allocation follow a very complicated and time-consuming implementation process. Frequency allocation is, in fact, fixed.


It is assigned on the basis of services following strict international rules. These rules are discussed every five years at the World Administrative Radio Conference2. One such frequency allocation plan for the United States, representing the spectral occupancy, is shown in Figure 1.4. It is clearly visible that the whole spectrum is allocated. A first, hasty conclusion could be that there is no more room available in this spectrum. However, a number of studies have shown that the spectrum is allocated but not used, e.g. in the case of bands reserved for military purposes. An analysis of the spectral occupation, such as that presented in Figure 1.5 for the TV bands, shows that on a particular day (September 1, 2004) and at a given place (in New York City), the spectrum is underused (a usage of merely a few percent). This analysis has given birth to the notion of "hic et nunc" (here and now), which says that, in spite of frequency allocation, the spectrum may be available at a given location and instant. This fact has been observed on numerous occasions; a large number of instances can be found in the proceedings of the various editions of the DySPAN conference. In particular, in the framework of the European project E2R, measurements were taken at the time of the Football World Cup in 2006 in Germany [HOL 07]. More recently, in the framework of the European Network of Excellence NEWCOM++, the UPC (Polytechnic University of Catalonia) carried out the measurements [LOP 09] shown in Figure 1.6. Analysis of the results shows that the utilization ratio is very low, in fact less than 10% in the band under consideration. Consequently, if for a particular place and instant the equipment is capable of identifying the spectrum use, it may establish a communication in the underutilized spectral bands. In the literature, this is also called opportunistic communication. The techniques used to identify the spectral occupancy will be described in Chapter 3.
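The utilization ratios quoted above come from occupancy measurement campaigns. As a toy version of such a computation, a simple energy detector can flag each channel as busy or idle by comparing its measured power to a threshold; the per-channel readings and the threshold below are synthetic, invented purely for the sketch:

```python
# Synthetic per-channel power measurements in dBm: 90 idle channels near the
# noise floor (about -100 dBm) and 10 occupied channels (about -60 dBm).
channels = [-100.0 + (i % 5) for i in range(90)] + [-60.0 + (i % 3) for i in range(10)]

THRESHOLD_DBM = -90.0  # energy-detection threshold, chosen arbitrarily here
occupied = sum(1 for p in channels if p > THRESHOLD_DBM)
utilization = occupied / len(channels)
print(f"utilization ratio = {utilization:.0%}")  # 10% for this synthetic data
```

Real campaigns must of course average over time, choose the threshold relative to the measured noise floor, and guard against missed detections of weak primary signals; those refinements are exactly the subject of the sensing techniques in Chapter 3.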
We can summarize this idea by saying that it is more profitable to increase the spectrum utilization ratio than to gain a few more percent of spectral efficiency in the bands already used. The techniques used to identify spectral occupancy will be described in Chapter 3.

1.3.2. Spectrum sharing

Let us consider a new analogy to illustrate the problem of spectrum sharing³. We assume that the reader knows the rules of chess and the considerable number of possible ways to play over several moves. Assume that a new chess game is defined by replacing the 64 squares of the chess board with several hundred spectral channels, and the 32 game pieces with several dozen signals that may potentially use these channels. Next, imagine that the displacement of each game piece is no longer

2. WARC process (World Administrative Radio Conference).
3. This analogy was first conceived by J. Mitola.

Introduction to Cognitive Radio

11

predefined but has to be decided in real time as a function of the environment. We leave it to the reader to infer the infinite number of possible combinations involved in predicting the best spectrum use over time, and the need for intelligence to manage this problem: this is in fact what Mitola concluded in 1999 [MIT 99a].

Coming back to the descriptions given in the White Paper of the WWRF in 2005 [BER 05], we can categorize the different spectrum-sharing techniques in various ways. Table 1.1 provides a classification according to spectrum regulation policies. In its report, the FCC [FCC 02b] proposed a model that consists of three classes:
– exclusive use model: equivalent to class 1 of the model of Table 1.1;
– command and control: includes classes 2 and 3 of that model;
– open access: equivalent to class 4 of that model.

1.3.2.1. Horizontal and vertical sharing

Spectrum sharing takes place either with licensed radio systems or with unlicensed systems. Sharing with a licensed primary user is called vertical sharing, while sharing among users of an unlicensed band is called horizontal sharing:
– vertical sharing = spectrum sharing between systems that have different statuses in terms of regulation policies (they pertain to different classes in Table 1.1);
– horizontal sharing = spectrum sharing between systems of identical status.
This denomination, initially mentioned in [KRU 03] and then discussed in [BER 05], is illustrated in Figure 1.7.

1.3.2.2. Spectrum pooling

The concept of spectrum pooling (frequency grouping) was initially proposed by Mitola in [MIT 99b]. The idea is to bring together different unused spectral bands, belonging to licensed primary users, into a group of frequencies. One possible allocation of the bands belonging to such a group was described by T. Weiss and F. Jondral in [WEI 04].
The goal of this notion is to increase spectrum use without any modification in the use of the licensed spectral bands. Standards based on multi-carrier modulation are particularly well suited to this concept of frequency grouping [WEI 04]. The next two sections present two different types of spectrum access pertaining to the second class of the FCC model, i.e. command and control.
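Multi-carrier modulation fits spectrum pooling naturally because the pooled bands are generally non-contiguous: it suffices not to modulate the subcarriers that overlap the licensed users. A minimal sketch (hypothetical occupancy map; cyclic prefix and pulse shaping omitted for brevity):

```python
import numpy as np

# Sketch of spectrum pooling on a multi-carrier modulator (hypothetical
# occupancy map; cyclic prefix and pulse shaping omitted for brevity).
n_sc = 64
occupied = np.zeros(n_sc, dtype=bool)
occupied[10:22] = True                 # licensed users active on these subcarriers
occupied[40:46] = True

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=n_sc).astype(complex)  # BPSK-like symbols
symbols[occupied] = 0.0                # pooled-out subcarriers are simply not modulated
tx = np.fft.ifft(symbols)              # OFDM modulator

spectrum = np.fft.fft(tx)              # what the air interface occupies, bin by bin
print(np.max(np.abs(spectrum[occupied])))   # ~ 0: nothing radiated on the licensed bins
```

In practice, the spectral leakage of the windowed OFDM pulse into adjacent licensed bins is the real difficulty; this motivates the low-sidelobe modulations discussed in section 1.3.2.4.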


Figure 1.4. Global spectrum occupancy: spectrum allocation in the 3 kHz to 300 GHz band in the United States in 2003. From this allocation it can easily be observed that there is no spectrum left for new networks and services, unless this spectrum is managed in a different way by taking into account its real usage (see Figures 1.5 and 1.6)


Figure 1.5. Occupancy measurements of the 608–698 MHz band from 1 to 3 September 2004 [SSC 05]

[Figure 1.6 consists of three panels: the received power spectral density (dBm) over the 960–3,100 MHz band, annotated with the allocated services (audio broadcast, fixed links, UMTS FDD/TDD and satellite bands, DCS1800 uplink/downlink, DECT, radio location/navigation, MoD bands, military radars, ENG/RFID/ISM-2450, UMTS extension, etc.); the instantaneous spectrum occupancy over a 24-hour period; and the duty cycle (%) per frequency, with an average duty cycle of 10.00%]

Figure 1.6. Occupancy measurements of the 960–3,100 MHz band in an outdoor environment in the city of Barcelona. The received power, the instantaneous temporal occupation on a 24-hour scale (a black point indicates that the frequency is in use at the time of measurement), and the average occupancy rate are shown. It can easily be inferred that this rate is very low: of the order of 13.3% in the 1,000–2,000 MHz band and of the order of 4% in the 2,000–3,000 MHz band [LOP 09]

1.3.2.3. Spectrum underlay technique

This technique consists of inserting a new signal in the same spectrum, and at the same time, as the original signal. The evident constraint is that the additional signal should not disturb the quality of the original signal. This is a very stringent constraint and very few systems can satisfy it. Ultra wideband (UWB) systems and other systems that use spread spectrum modulation techniques fall into this category. The signal strength of the secondary user must be sufficiently low to respect the interference limits acceptable to the primary user. In this context, Haykin [HAY 05] defined the concept of interference temperature.
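Haykin's interference temperature expresses the interference power P_I seen in a band of width B as an equivalent temperature, T_I = P_I/(kB), with k Boltzmann's constant. A small sketch of this accounting follows; the regulatory cap t_cap and the path gain g are made-up illustrative values:

```python
# Sketch of Haykin-style interference-temperature accounting.
# The cap t_cap and the path gain g below are made-up illustrative values.
k_B = 1.380649e-23   # Boltzmann constant, J/K

def interference_temperature(p_interference_w, bandwidth_hz):
    """T_I = P_I / (k B): interference power in a band, expressed as a temperature."""
    return p_interference_w / (k_B * bandwidth_hz)

def max_secondary_power(t_limit_k, bandwidth_hz, path_gain):
    """Largest transmit power keeping the primary's interference temperature below t_limit_k."""
    return k_B * t_limit_k * bandwidth_hz / path_gain

B = 1e6          # 1 MHz channel
t_cap = 1e4      # hypothetical regulatory cap, in kelvin
g = 1e-10        # hypothetical path gain toward the primary receiver
p_tx = max_secondary_power(t_cap, B, g)
print(f"allowed secondary transmit power: {p_tx * 1e3:.2f} mW")   # 1.38 mW here
```

The point of the formalism is that the cap is set at the primary receiver, so the allowed secondary transmit power scales inversely with the path gain toward that receiver.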

Table 1.1 classifies spectrum access according to regulation policies:
1. Exclusive license for spectrum use: access to the spectrum is reserved exclusively for the licensed user; the operator buys the right to use the spectrum (see the UMTS licenses). This access is restricted to a particular type of radio access (for example GSM, UMTS, etc.) and its use is controlled by the regulation authorities.
2. License offering a possibility of spectrum sharing: notion of primary and secondary users; sharing is restricted to particular kinds of radio access. Examples of this policy are the frequencies allocated to the Digital European Cordless Telecommunications (DECT) and Personal Communications Service (PCS) standards.
3. Unlicensed spectrum: notion of coexistence, with certain regulations to respect: a maximum admissible power, and users must respect power spectral density (PSD) limits. Examples of unlicensed bands are the ISM bands at 2.4 and 5 GHz.
4. Open spectrum: spectrum open to all users, who must abide by a minimum number of regulations.

Table 1.1. Classification of spectrum access based on regulation policies

A UWB signal may be one possible solution enabling the secondary user to coexist with the primary user, owing to its very low power spectral density, as illustrated in Figure 1.8.

Another recently proposed technique [KOB 08, SAM 08] involves a new type of modulation known as Vandermonde frequency division multiplexing (VFDM). The general idea is to use the frequency selectivity of the channels (instead of the spatial dimensions) to perform beamforming in the frequency domain. The secondary transmitter Tx2 (see Figure 1.9) transmits its signal in the zeros of channel h2,1 so as not to interfere with the primary network. Just like any system based on beamforming, this requires a priori knowledge of channel h2,1 at the transmitter (which can be achieved by exploiting the feedback sequences inherent in the communications). For every frequency-selective channel with L coefficients, it is possible to show that L zeros exist and, consequently, the secondary system can transmit L symbols using L beams. As channels h2,1 and h2,2 are statistically independent, the L beams have a null probability of cancelling channel h2,2. The secondary network therefore estimates channel h2,1 and builds the Vandermonde modulator in an adaptive manner. Surprisingly, it is thus possible to set up two non-interfering networks simultaneously (the secondary network can transmit at any power). In the first network, the users can use FFT modulators (with OFDM, for example), while users of the secondary network must use Vandermonde modulators; these, however, must adapt to the channel and be reconfigurable at the speed at which the channels change.
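The null-space property underlying VFDM can be checked numerically: the columns of a Vandermonde matrix built on the L roots of the channel polynomial are annihilated by the convolution matrix of that channel. A minimal sketch, with a random synthetic channel standing in for h2,1 and arbitrary block sizes:

```python
import numpy as np

# Numerical check of the VFDM null-space property, with a random synthetic
# channel standing in for h2,1 (block sizes here are arbitrary).
rng = np.random.default_rng(0)
L, N = 3, 8                                    # L+1 channel taps, N received samples
h = rng.standard_normal(L + 1) + 1j * rng.standard_normal(L + 1)

# N x (N+L) convolution (Toeplitz) matrix: the steady-state outputs seen by the receiver
H = np.zeros((N, N + L), dtype=complex)
for n in range(N):
    H[n, n:n + L + 1] = h[::-1]

# Vandermonde precoder: one column per zero of the channel polynomial
roots = np.roots(h)                            # the L zeros of h0 z^L + ... + hL
V = np.vander(roots, N + L, increasing=True).T # (N+L) x L, column l = [1, a_l, a_l^2, ...]

print(np.max(np.abs(H @ V)))                   # numerically ~ 0: H V = 0
```

Here H plays the role of the linear convolution by h2,1 seen at the primary receiver: any symbol vector s precoded as Vs arrives there as HVs = 0, whatever the secondary transmit power, exactly as claimed above.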

[Figure 1.7 shows, under the axis of regulation policies, the licensed spectrum (e.g. TV) subject to vertical spectrum sharing, and the non-licensed spectrum (e.g. WiFi) subject to horizontal spectrum sharing]

Figure 1.7. Horizontal and vertical spectrum sharing [BER 05]

Figure 1.8. Underlay technique of insertion in the spectrum



[Figure 1.9 shows the VFDM scenario: transmitter Tx1 reaches receiver Rx1 through channel h1,1 and Rx2 through h1,2; transmitter Tx2 reaches Rx2 through h2,2 and interferes with Rx1 through h2,1]

Figure 1.9. VFDM modulation. As channels h2,1 and h2,2 are statistically independent, the zeros of the two channels have a null probability of being equal. Therefore, the symbols transmitted by the secondary network will not suffer from any attenuation

Extensions of the VFDM principle to a multi-user framework (VFDMA) have also been proposed.

1.3.2.4. Spectrum overlay technique

The overlay insertion technique is equivalent to opportunistic spectrum access. It consists of detecting blanks (holes) in the spectrum and then inserting the signals of the secondary users into the detected hole(s); see, for example, the holes in the three-standard scheme of Figure 1.10. After sensing the frequency spectrum, if the primary user is not active on a particular channel, the secondary user makes use of that channel to send its own signal; otherwise, the secondary user remains inactive. This technique requires the implementation of several successive steps. Each of these steps has specific constraints and requires advanced signal processing algorithms, which will be described in various chapters of this book.

Filtering phase: the case of constant-bandwidth channels within a standard differs from that of broadband channels comprising several consecutive standards. This illustrates the difficulty of filtering broadband signals without a priori information. This problem will be discussed in detail in Chapter 8.

Detection phase: this phase is the most studied in the literature. From the point of view of our layered model (see section 1.4.1), the result of this phase is the "hole detection sensor", which belongs to the physical layer of our model. Many solutions for realizing this sensor will be discussed in detail, with their advantages and drawbacks, in Chapter 3.

Characterization phase: not all free frequency bands are usable. It is thus necessary to characterize the detected frequency band by analyzing the noise, the


interference, and the virtual impulse response between the two communicating pieces of equipment.

Decision phase: a decision is made on the basis of the information gathered in the previous phases. This phase is also extensively studied in the literature. A large number of possible solutions to this complex problem will be presented in detail in Chapter 4.

Insertion phase: finally, after the decision, the communication itself is started and the secondary signals are inserted into the spectrum. The insertion is done in such a manner as to avoid hindering the performance of users in adjacent bands. To this end, modulation techniques with good power spectral density properties, i.e. with low side lobes, have been proposed in the literature; OFDM/OQAM is one example. The European project PHYDYAS [PHY 08] has studied this aspect of the problem of opportunistic access in particular.
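The detection phase can be illustrated by the simplest hole-detection sensor, a per-channel energy detector. This is a hedged sketch with synthetic signals and an arbitrary threshold; practical detectors, and the delicate problem of setting the threshold, are the subject of Chapter 3:

```python
import numpy as np

# Hedged sketch of the detection phase: a per-channel energy detector that
# marks spectrum holes. Synthetic signals, arbitrary threshold; practical
# detectors and threshold setting are the subject of Chapter 3.
rng = np.random.default_rng(7)
n_channels, n_samples = 16, 4096
noise_var = 1.0

x = np.sqrt(noise_var / 2) * (rng.standard_normal((n_channels, n_samples))
                              + 1j * rng.standard_normal((n_channels, n_samples)))
busy = (2, 5, 11)                                   # channels carrying a primary signal
for c in busy:
    x[c] += np.sqrt(2.0) * np.exp(2j * np.pi * 0.1 * np.arange(n_samples))  # tone at 3 dB SNR

energy = np.mean(np.abs(x) ** 2, axis=1)            # average power per channel
threshold = 1.5 * noise_var                         # crude margin above the noise floor
holes = np.where(energy < threshold)[0]             # candidate bands for secondary insertion
print(sorted(set(range(n_channels)) - set(holes)))  # prints [2, 5, 11]: the busy channels
```

At low SNR or with few samples this crude test fails, which is precisely why the characterization and decision phases above, and the more elaborate sensors of Chapter 3, are needed.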

Figure 1.10. Overlay technique of insertion in the spectrum

It is clear that the insertion of a new signal will generally change the peak-to-average power ratio (PAPR) of the overall signal. Consequently, amplification of this new signal must be carried out under the constraint of keeping the PAPR constant, to avoid the rise of side lobes. Considering the PAPR as a sensor of the physical layer, and performing the insertion taking this difficulty into account, are discussed in Chapter 9.

1.4. A broader vision of CR

The cost of spectral resources and the optimization of their usage are a vital concern for current wireless systems. CR technology is, however, often restricted to the optimization of spectral resources. A more general vision of CR is proposed in this section. It complies with the "vision" of Mitola, who added in his thesis:


– "Cognitive radio increases the awareness that computational radio entities have on users, networks, their location and the larger environment. Recent progress in location-and-environment-aware computing is therefore relevant to cognitive radio". Our vision of CR follows: CR increases knowledge of the environment at various levels (user, network, geographical position, etc.). This approach incorporates the concept of machine learning, a significant level of environmental understanding, and an autonomous decision power, as depicted in Figure 1.1. This understanding of the environment in the broad sense is provided by all the sensors, which will be presented in Chapter 3.
– "Cognition tasks that might be performed range in difficulty from the goal driven choice of RF band, air interface, or protocol to higher level tasks of planning, learning, and evolving new protocols". The cognition tasks to be accomplished thus range, in ascending order of difficulty, from the choice of RF band to the definition of a new protocol on the fly.
– "This type of learning technique makes the Software Radio trainable in a broad sense instead of just re-configurable". Learning as proposed in the cognitive cycle thus evolves the SR from a reconfigurable radio to a self-adaptive radio.

We present the contents of this book in this very general context, in line with J. Mitola's vision. As an example, in Chapter 3 we present sensors related to image processing, which is far from the mere detection of spectrum holes to which CR is often limited.

1.4.1. Taking into account the global environment

The generalization presented here is in line with the previous section. To explain it, we cut the model of Figure 1.11 into three main layers:
– a high-level layer, which contains primarily the application layer and man/machine type interfaces, called the higher layer;
– an intermediate layer, containing the transport and network layers;
– a low-level layer, which contains the medium access control and physical layers, called the lower layer.

The set of these layers runs on an SR platform (possibly ideal), but this model also runs on a software-defined radio platform (see Chapter 6). These SR platforms are based on a hardware execution architecture which, in general, is heterogeneous. This platform is ideally hidden by an abstraction layer that offers transparency in terms of the implementation of the software components that process the signals.

We further illustrate the model in Figure 1.11. The sensors that we have identified are listed in the left column; these will be addressed in detail in Chapter 3. In the right


column, the research areas related to each layer are given. As our objective is to optimize the operation of these three layers intelligently, CR also has a very strong bond with the emerging field of cross-layer optimization. This is what we find in the literature under the name "opportunistic radio", a restriction (following the model previously presented) of CR to the physical layer, concerned with spectrum management.

Figure 1.11 associates, with each of the three layers, its specific sensors and a related concept from the literature:
– highest layer (man/machine interface (MMI), application). Sensors: regulations for spectrum usage, user's profile (price, subscription), personal choices (sound, image, ecologic radio, position, velocity, security, etc.). Related concept: "context aware";
– intermediate layer (transport, network). Sensors: vertical handover inter-networks and intra-networks, standards, load on the radio link, etc. Related concept: "networks interoperability";
– lower layer (physical link, medium). Sensors: access type, power, modulation, coding, frequency, handover, channel estimation, antennas, consumption, etc. Related concept: "link adaptation".
These three layers rest on a middleware and abstraction layer, which itself runs on a true wideband software radio.

Figure 1.11. A multi-layered view of CR

1.4.2. The sensorial radio bubble for CR

The concept of the "sensorial radio bubble" (SRB) was defined by keeping in mind the conventional notion of the "human sensory bubble" (for details see [PAL 07]). This bubble is a multi-dimensional space around the equipment considered (Figure 1.12), with one dimension per sensing capacity (exactly like the "human sensorial bubble" with its sensors, the five senses, see Figure 1.13) [REB 04]. We have extended this concept of the human or animal bubble to the world of inanimate objects, such as CR equipment, thanks to their capability to take their environment into account. From this point of view, this work enters a domain known as "bio-inspired systems".

Each dimension of the SRB can be represented by numerous parameters (such as temperature and time for a conventional thermometer). Therefore, with each sensor S_i there is an associated vector of parameters defined by: V_i = [P_0^i, ..., P_j^i, ..., P_(J-1)^i], with i = 0, ..., N − 1 and j = 0, ..., J − 1, where N is the total number of sensors considered and J is the number of parameters of the sensor S_i. One of


Figure 1.12. Sensorial radio bubble for CR

Figure 1.13. The human bubble and the five senses


these parameters can represent distance, as in the case of certain "human bubble" sensors (hearing, sight, touch). In certain situations the vector may contain only a single parameter.

Considering the simplified model of the cognitive cycle given in Figure 1.2, the SRB is clearly located in the "environment analysis" function. The SRB gives all the pertinent information about the environment neighboring the equipment, so that the decision algorithms can make the appropriate decisions. CR at the equipment level can be summarized as follows: "CR is a decentralized, dynamic vision with local optimization of needs and resources, contrary to the traditional centralized, static vision designed to operate in the worst-case scenario".

1.5. Difficulties of the cognitive cycle

The cognitive cycle encompasses a large number of problems. Three of these major difficulties are the subject of the subsequent chapters. As illustrated through the SRB, capturing information is a key element and will be the subject of Chapter 3. The management of such an intelligent system, which must take into account the information provided by the sensors and simultaneously reconfigure the equipment in real time, is also a complex problem. Few studies on this subject exist today; a description of the problem and possible solutions is presented in Chapter 5. An equally important decision-making problem exists, related to the previous management problem. Currently, most efforts on this subject are based on bio-inspired algorithms; these will be described in Chapter 4. Finally, a CR system is based on SR technology, as shown in Figure 1.11. At present, and certainly for many more years to come, technological difficulties will not permit us to reach a true SR. Nevertheless, a software-defined radio can support CR systems. The technological difficulties of SR, the technological support of CR, and their solutions are addressed in the second part of this book.
The definition of a truly operational CR system still demands many years of research. The communities in the domains of networks, digital communications, information processing, signal processing, and electronics are very active. More references on these topics are available in Appendices A, B, and D, which list, respectively, related works published in special journal issues since the work of Mitola in 1999, the European projects, and the laboratories active in these domains.

Chapter 2

Cognitive Terminals Toward Cognitive Networks

2.1. Introduction

The development of large and autonomous mobile telecommunications networks is accompanied by many technological challenges with the common goal of introducing artificial intelligence into these networks. One of the first questions to be addressed is to determine in which part of the network this intelligence must be most developed. Indeed, it is often understood that the fixed infrastructure of a mobile network, owing to its potential for fast information processing and its data storage capacity, is more proficient at controlling the network than the whole set of mobile terminals would be. In this situation, increased intelligence at the network level corresponds to: (i) incorporating intelligence in base stations so that they can respond to the individual requests of the mobile users, and (ii) developing the interactions between the fixed base stations so as to make the interference between cells constructive, i.e. non-destructive, and to optimize the resource distribution. However, it is not always feasible for a given architecture to control many users simultaneously, especially when these users are highly mobile. Indeed, in such a situation, the instantaneous computational load would not permit the fixed infrastructure to respond to all the queries of the users.

An orthogonal vision is to move network intelligence toward the mobile users. If the mobile users are at any instant able to evaluate the resources available in the network, then they can decide on their own to establish a communication with one

Chapter written by Romain Couillet and Mérouane Debbah.


or more surrounding base stations within the context of a specific service (WiFi communication, 3G, download of Internet data, etc.). Meanwhile, in the context of a network with low mobility and low user density, the computing capacity of the intelligent terminals, which is assumed to be much lower than that of the mobile infrastructure and more constrained in terms of energy resources, makes mobile devices much less proficient at maximizing network utilization. It is thus essential to establish the distribution of intelligence across the network before developing an intelligent communications network. Besides the technological repercussions, these discussions obviously have an economic impact on the production of intelligent equipment, though this economic aspect will not be developed here.

In this chapter, we will discuss the advantages and limitations associated with the introduction of artificial intelligence, first at the level of the terminals and then at the level of the network infrastructure. A compromise between these two extreme positions will then be introduced. In particular, we will identify the network parameters that allow us to determine where the ideal compromise between intelligent terminals and intelligent networks lies.

Before developing the concepts of intelligent terminals and flexible fixed infrastructures in detail, we present a practical example of a flexible network in which the task sharing between the users of the network and the infrastructure gives a simple idea of the constraints associated with each entity. Let us consider a motorway network, which we imagine to be flexible in the sense that the infrastructure is capable of autonomously deciding the distribution of the lanes authorized for circulation. The prime responsibility of such a network is to satisfy a maximum number of users by avoiding, in particular, latencies in the network, which correspond here to situations of congestion.
Imagine that at a given time, typically during the night, traffic is sparse on the motorway, and suppose that each user of the network has made a request to reach his/her home in a minimum time. The motorway network, considered as a unique entity capable of exchanging information through the network at a very low computing cost, is then able to process the user requests individually, indicating an ideal route to each user; the queries of all the users translate here into a multivariate problem of low dimension that the network is assumed to be able to handle. In this context, it seems interesting to place the load of jointly optimizing the users' requests at the network level. Indeed, if the intelligence is placed at the level of the network users, who we assume are naturally ignorant of the instantaneous traffic conditions at a medium or long distance from home, the requests cannot all be optimized; we find ourselves here in a situation of game theory [NAS 50], in which each user seeks to maximize his/her own gain, for example by anticipating the decisions taken by the other users (if I am going to city A tonight, should I take lane 1 or lane 2, knowing that if the majority of the users in my lane are going to city B, I would benefit from changing lanes?).

However, the situation is somewhat reversed when the network density increases. It is certainly true that for a higher number of users in the network, the computing ability


of the infrastructure no longer allows it to address the problem of jointly optimizing all the users. A great deal of the infrastructure's work then consists of diverting network density to other lanes, e.g. completely closing a lane to prevent the formation of congestion. In this context, we find ourselves not in a situation of multivariate optimization, but rather in a situation of average optimization, with the objective of maximizing the average efficiency of the network; the idea is to satisfy the majority of the users, at the risk of generating situations that are, at times, unfavorable for certain users. For example, a user may have to turn around halfway and wait for several hours due to the closure of the lane that would have brought him home a few minutes later. The mathematical tool used in this context is a recent branch of game theory known as mean field game theory.

However, in this context, due to the potential risk of generating situations occasionally disadvantageous for certain users, it is advisable to shift part of the intelligence to the level of the users, who can then decide autonomously which lane to take. A given user with complete knowledge of the typical state of the network, at the moment he enters it, is better placed to decide on the option that optimizes his own traffic conditions. This is particularly true when the a priori information of the user allows him to anticipate exactly the traffic conditions he will meet, making him fully capable of maximizing his own gain. If all the users have strong a priori knowledge of the network and are able to exchange information at a low cost, then a large number of users engage in a game whose goal can be to maximize the collective gain. Classically, however, most users do not have this level of a priori knowledge, in particular when a large number of users are away from home or when a particular event generates an unexpected situation in the network.
As a result, it is not conceivable in such a scenario to leave the weight of the optimization to the users alone. A right balance for the distribution of intelligence throughout the network must be found between the extreme situations: at one extreme, the infrastructure manages the network autonomously, at the risk of seriously affecting the conditions of certain users; at the other, the users play a game independently with little a priori knowledge, at the risk of halting the entire network.

We will describe below how this motorway situation maps to telecommunications networks. It will emerge that the situation described above gives a relatively accurate representation of an intelligent telecommunications network. We will also introduce the tools required for the analysis of the different situations: where the network is autonomous, where the users are autonomous, and where a trade-off is established between the two.

2.2. Intelligent terminal

We begin our description of intelligent networks with the vision focused on the terminal. We consequently position ourselves in a situation in which the network is


intelligent only through the action of the terminals, which individually or collectively decide the allocation of resources.

2.2.1. Description

The paradigm of the intelligent terminal originates from ideal Bayesian information processing and decision making. The idea here is to consider the mobile terminal as an intelligent entity that continuously analyzes the stimuli received from its environment and makes decisions that tend to maximize its own gain; this gain corresponds here to attaining a certain quality of service (QoS). One approach is to consider all the mobile terminals as independent entities, each individually in quest of maximizing its own QoS. According to this approach, the efficient terminal must first discover its environment independently, in a reliable and fast way. For example, it is not inconceivable to imagine a terminal able to learn the everyday habits of its owner. If such is the case, the terminal will be able to manage the allocation of its energy resources ideally over the whole day; today, most terminals permanently try to synchronize with the neighboring networks even in low-coverage zones, thereby wasting resources, whereas an adaptive terminal would avoid this inefficient use. In this context, it is important to develop methods that successively allow the terminal:
– to understand its environment. This step requires, at the same time:
- abstracting the physical environment into a symbolic form intelligible to the terminal. It is, in particular, necessary to model the transmission channel in a reliable mathematical form. This abstraction stems from the domain of channel modeling; that area, though rich in publications, today still lacks a unanimous, natural way to model all types of possible channels.
Indeed, most of the channels used in the current telecommunications standards are the results of open-air propagation tests for different classes of typical situations (urban, rural, mountains, valleys, etc.). However, it being understood that the linear propagation model is a good approximation of the actual conditions of wave transmission (this model justifies the conventional convolutional model y(t) = ∫ h(τ)x(t − τ)dτ + n(t)), recent developments make it possible to generate channel models based on the a priori information of the terminal [GUI 06]. This approach is completely different from the conventional approaches, which assume a given parametric model (e.g. a Rice model with an amplitude parameter for the line-of-sight component) and determine the best values of these parameters according to the terminal's observations. Here, the approach consists of not establishing any a priori model, but rather initially assuming all models to be valid. When the terminal acquires information, only the models consistent with these new data are retained among all feasible models. The terminal then selects the model that is least constrained,


i.e. that requires a minimum of undesirable hypotheses. The mathematical tool used to produce such a model is known as the theory of maximum entropy. The validity of this approach draws its roots from the physics approach to information theory. Consider, for example, the modeling of the distribution of a gas in a closed enclosure: among all the distribution models that satisfy the constraint (any model placing gas outside the enclosure is invalid), the most isotropic distribution model (the model with maximum entropy) must be selected, even though the gas might in fact be distributed quite differently due to the presence of another constraint initially unknown to the experimenter. The maximum entropy approach was initiated by Jaynes [JAY 03] and Brillouin [BRI 63];
- identifying the available transmission sources. This step allows the terminal both to identify the surrounding base stations and to discover the communications in progress, and hence to determine the available resources. The objective of the intelligent terminal here is to establish confidence levels, or even absolute knowledge, about a variety of statements, e.g. "communication band A is free", "three base stations are in my surroundings", or "the throughput of the current communication is high". To do this, it is again possible to conceive Bayesian approaches that make it possible to answer each question unequivocally using the principle of maximum entropy. An example is given in [COU 10b] in the context of the detection of multi-antenna transmission sources: when the number of sources is known/unknown and when the signal-to-noise ratio (SNR) is perfectly known, partially known or unknown, etc. Optimal algorithms to infer the number of transmission sources are also described in [COU 10b].
Nonetheless, Bayesian approaches, which must integrate over a considerable number of parameters, suffer from the "curse of dimensionality": the numerical calculation of multi-parameter integrals inevitably leads to large errors when the number of parameters is large (three parameters is already too many). When the complete explicit calculation of the integrals is not possible, we turn to alternative (suboptimal) methods that are less costly but still reliable. A method called the generalized likelihood ratio test (GLRT), proposed in [BIA 09], performs very well when the dimensions of the system under consideration are large. The mathematical tools involved, among others, in the detection domain for large-dimensional systems are random matrices [MAR 67], free probability, and large deviations;
– to make optimal decisions. Decision making consists of, after evaluating the environment and detecting the surrounding cells, establishing communication with one or more reception stations and requesting a service and a QoS from these stations. Since the intelligence of the network is placed only in the terminals, the service providers have no freedom in allocating resources to the user. In the context of cognitive radio (CR), this situation is typical of a secondary network, in which the terminal, having identified a hole in the spectrum, makes use of this free resource to establish communication. Note that so far there has been no question of making resource-sharing decisions that take into account the presence of other opportunist users in the network. Decisions here are individual; this in fact creates situations in which the overall QoS is strongly degraded by the "egoistic" decisions of the users. We will return to this point later;
– to use its resources in an optimal manner. As mentioned earlier, one of the benefits of adaptive knowledge of the environment is to allow the intelligent terminal to optimize its resource utilization. The optimal resource distribution can be viewed under two aspects that, though they seem quite similar, are dramatically different in their respective outcomes:
- The first approach is the optimal distribution of the available power budget. Here, we suppose a long-term situation in which the terminal can distribute its power resource in an optimized manner. This approach tries to answer the question "how should the available power budget be used optimally?", and the potential answers could be: "by distributing the total available power over the frequency bands that are currently free", "by devoting part of the available power to the free bands while keeping some resources for the analysis of new stimuli", or "given a poor accessible QoS, by concentrating the available power on finding new opportunities". The hidden dilemma behind these potential answers is known as the compromise between exploration and exploitation, i.e. knowing how much time should be spent exploring the potential resources and how much time should be spent transmitting (resource exploitation).
- A newer approach starts from the observation that the constraints imposed on intelligent terminals bear on their energy resources (e.g. the battery of a mobile phone). In this case, the target of the intelligent terminal is not to distribute the available "power" but to distribute the available "energy".
The ultimate goal is then not to maximize the transmission rate per second and per hertz (of bandwidth), but the transmission rate per second, per hertz, and per joule of available energy. The consequences of this policy are considerable. Whenever a terminal is located in a weak coverage area, an optimal policy of minimal energy dissipation will be to put the terminal in standby mode rather than to force an expensive exploration of the environment that may yield little benefit. On the contrary, an optimal strategy of minimal power dissipation will tend to minimize the use of an unreliable resource and to force network exploration in search of a spectrum opportunity. The paradigm of optimal usage of the available energy belongs to the new class of radios called "green" radios; the mathematical tools for this new paradigm, however, remain poorly defined. The earlier discussion supposes that the network users do not interact or, more exactly, tend to maximize their individual gains rather than the collective gain. This strategy often turns out to be naive and mostly generates suboptimal solutions (from the standpoint of collective sharing of the available resources) compared with a cooperative approach in which each user tries to maximize the average collective gain. In fact, it can happen, when individual gains are maximized, that the network users find themselves in the worst-case equilibrium of the prisoners' dilemma1, which is quite famous among game theorists.
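The exploration/exploitation compromise mentioned above can be made concrete with a minimal sketch: a terminal probing frequency bands whose availability statistics it does not know, via an ε-greedy rule. The band probabilities, the value of ε, and all numerical values below are illustrative assumptions, not taken from this chapter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each band has an unknown probability of being free (the "reward").
p_free = np.array([0.2, 0.5, 0.8])
n_bands = len(p_free)

counts = np.zeros(n_bands)
values = np.zeros(n_bands)   # empirical estimate of each band's quality
eps = 0.1                    # fraction of time spent exploring

for t in range(5000):
    if rng.random() < eps:               # explore: probe a random band
        band = int(rng.integers(n_bands))
    else:                                # exploit: use the best band so far
        band = int(np.argmax(values))
    reward = float(rng.random() < p_free[band])
    counts[band] += 1
    values[band] += (reward - values[band]) / counts[band]

# With enough trials the terminal identifies the most often free band.
assert int(np.argmax(values)) == 2
```

Here ε trades exploration time (sensing possibly poor bands) against exploitation time (transmitting on the band believed best), which is exactly the dilemma described in the text.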

Figure 2.1. Intelligent terminals. These terminals instantly determine the best access point and distribute the computational load between them
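As a rough illustration of the "bits per joule" objective discussed above, the following sketch contrasts rate maximization with energy-efficiency maximization for a single link. The Shannon rate model, the static circuit power term, and all numerical values are hypothetical.

```python
import math

B = 1e6          # bandwidth (Hz), assumed
g = 1e-3         # channel gain, assumed
N0 = 1e-12       # noise spectral density (W/Hz), assumed
P_circuit = 0.1  # static circuit power (W), drawn even at low transmit power

def rate(p):
    """Shannon rate in bit/s for transmit power p (W)."""
    return B * math.log2(1.0 + g * p / (N0 * B))

def efficiency(p):
    """Bits per joule: rate divided by total consumed power."""
    return rate(p) / (p + P_circuit)

powers = [k / 100 for k in range(1, 1001)]   # 10 mW .. 10 W

p_rate = max(powers, key=rate)
p_eff = max(powers, key=efficiency)

# Maximizing the rate always favors full power, while maximizing
# bits/joule settles on a finite interior optimum.
assert p_rate == 10.0
assert p_eff < 1.0
```

The interior optimum appears because the log-rate grows slowly at high power while the energy bill grows linearly, which is the rationale given in the text for "green" radio policies.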

It is quite evident that if the users collaborate and share their available resources in a network, they can achieve huge potential gains. However, it is equally evident that complete cooperation naturally entails a large amount of information exchange among the network users, which results in a loss of average performance. It should also be noted that complete user collaboration amounts to "centralizing" the network at the user level instead of at the network infrastructure level; this situation, being strongly suboptimal, is evidently not desirable. The compromise here is to establish a game between the network players. This game, in which users interact by transmitting or not transmitting data, tends to penalize the selfish gain of the individual user while promoting the collective gain in a decentralized manner. For this, different communication policies can be established. The purpose of mathematical research in telecommunications in this area is to devise games that lead to near-optimal equilibria; the degrees of freedom to be optimized are the quantity of information to be disseminated through the network (these data are, typically, the SNR sensed by each user) and the strategies to be adopted according to this updated information.

1 The prisoners' dilemma is a situation in which each player, by maximizing his individual gain, minimizes the collective gain. The original context is that of two accomplices in a crime, interrogated simultaneously in separate rooms, each given the choice of denouncing his accomplice or denying the crime. The collective gain in this game is maximal when the two partners deny the facts, while the gain of each individual player is maximized when he denounces his accomplice (thus ensuring his freedom if the other player does not denounce him). The non-cooperative equilibrium of this game is mutual denunciation, which entails a prison sentence for both players.
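The prisoners' dilemma described in the footnote can be checked numerically. The payoff values below are illustrative (higher is better, e.g. minus the years of imprisonment); only their ordering matters.

```python
# Strategies: 0 = deny the crime, 1 = denounce the accomplice.
# payoff[(a, b)] = (gain of player A, gain of player B).
payoff = {
    (0, 0): (-1, -1),   # both deny: light sentence for both
    (0, 1): (-3, 0),    # A denies, B denounces: B goes free
    (1, 0): (0, -3),
    (1, 1): (-2, -2),   # mutual denunciation
}

def is_nash(a, b):
    """No player can gain by deviating unilaterally."""
    best_a = all(payoff[(a, b)][0] >= payoff[(alt, b)][0] for alt in (0, 1))
    best_b = all(payoff[(a, b)][1] >= payoff[(a, alt)][1] for alt in (0, 1))
    return best_a and best_b

# Mutual denunciation is the non-cooperative (Nash) equilibrium...
assert is_nash(1, 1)
# ...even though both denying yields the better collective outcome.
assert sum(payoff[(0, 0)]) > sum(payoff[(1, 1)])
assert not is_nash(0, 0)
```

This is exactly the gap between the Nash equilibrium and the collectively optimal outcome that the chapter invokes to explain why purely selfish terminal decisions can degrade the overall QoS.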

2.2.2. Advantages

The first advantage of a user-centric intelligent network is the possibility for each user to decide which resource to use. At first sight, much as for the highway network (mentioned earlier), where it is more appropriate to maintain the network than to direct each individual driver, it would seem more natural to leave the burden of optimization to the network infrastructure. This assertion is, however, inaccurate, as the constraints on learning and adaptation at the network level are huge. Several criteria, in particular, are known to be difficult to optimize when a purely infrastructure-centric approach is considered; some of them are discussed in detail later in this chapter:
– High mobility of a given user forces the network to dynamically adapt the resource allocation strategy associated with that particular user. Such a strategy must take into account the presence of all the network users (potentially highly mobile) and is very expensive to implement. In this case, it is preferable to leave the decision on resource usage to the user. An autonomous and selfish decision on the user's part can actually be optimal in this situation, given that its presence at any point of the network is brief.
– The network cannot keep track of the daily activity of each terminal and is thus in no position to evaluate the resources available to the terminal better than the terminal itself, provided the terminal's environmental learning is sufficiently reliable. We can imagine, in particular, a common and recurring situation in which the terminal has two resources at its disposal: one offering a higher QoS than the other, but more often occupied by the surrounding users. Perfect knowledge of this situation allows the user to defer a portion of its throughput to the second resource when the first is busy. The network infrastructure is also capable of making that decision, probably more optimally, but certainly less smoothly and at a higher cost (in computation time); moreover, for the network to know the daily activities of each user, it would need a vast central database storing all this information, which again introduces a significant cost that is hard to justify compared with the low cost of an autonomous user.
– In the case of relatively rapid variation in the occupation of the spectral resources available to network users, individual and selfish user decisions favor permanent occupation of the resources. Indeed, if a user detects a spectral opportunity, i.e. discovers a hole in the spectrum, it will tend to occupy the free frequencies immediately. This dynamic competition for resources allows permanent, or at least partial, use of all the available bandwidth; such opportunistic, back-to-back use of the bandwidth is clearly not optimal from a global perspective, but it significantly reduces the latencies caused by infrastructure decisions. Moreover, for the various available resources, the computation of an optimal resource distribution by the infrastructure is all the more complex and inappropriate in this context; this will be discussed further in the following sections.
– Decentralized decisions, which impose minimal constraints on the users, significantly reduce the overall energy bill of the network. In this case, only the users decide which resources they need, which requires neither expensive computations at the network infrastructure level nor transmission of synchronization data on the uplink and downlink between the base stations and the users. This aspect fits ideally with the green radio scenario, particularly when the network density and the variety of resources are significant.

Consequently, the advantages of a decentralized network amount to a significant gain in latency and in the energy needed for optimization calculations, at the expense of a loss in overall network performance. This approach is feasible, and more appropriate than the centralized method, when user requests vary rapidly in time, when users are highly mobile, and when the diversity of the available resources is sufficiently large. However, a natural instability appears as soon as the situations favorable to decentralized strategies are slightly modified. In particular, when the available resources are (even momentarily) fewer than the user demands, conflict situations develop that can only be resolved by centralized approaches.

2.2.3. Limitations

The principal limitation, already mentioned, is performance. While the decentralized approach allows a dynamic network and reduces the overall energy cost, performance in terms of the useful throughput of the network is often affected. This fact is frequently observed in game-theoretic analyses in the sense of Nash [NAS 50], which quite often exhibit network equilibria (called Nash equilibria) whose performance is much lower than that of the optimal centralized equilibrium (the Pareto optimum). The second, more serious, limiting aspect comes from the risk of inherent instability of the network. Indeed, while game-theoretic approaches exhibit favorable situations when the user density in the network is low (i.e. a limited number of competitors), when the number of user requests exceeds the number of available resources, the overall performance of the network is strongly affected. Consider, for example, the situation in which multiple users dynamically share a single resource: the decentralized selfish strategy consists, for each user, of trying to access the resource all the time. If everyone plays this game, all transmissions interfere with each other and no data can be decoded, thus yielding zero useful throughput. An optimal decentralized strategy for symmetrical users (i.e. everyone demands the same level of QoS) then consists of the following: each player senses the occupancy of the resource at random instants and accesses it when it is free. This gives each user a chance to access the resource and, on average, satisfies everyone. However, over certain periods, the resource may never be sensed free (when several players simultaneously attempt to access it) and thus never be accessed. In this context, the centralized distribution of the resource (by time division) by the network is clearly more favorable in terms of resource allocation, although this approach requires sending training sequences over the propagation environment; the minimal cost of sending these synchronization sequences affects performance, but only marginally. We will discuss below the centralized approach for intelligent networks, with its advantages and limitations.
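The single-resource example above can be simulated with a minimal random-access sketch (in the spirit of slotted ALOHA; the slot model and the numerical values are assumptions, not the chapter's).

```python
import numpy as np

rng = np.random.default_rng(3)

def success_rate(n_users, p, n_slots=200_000):
    """Fraction of slots in which exactly one user transmits
    (any other outcome is a collision or an idle slot)."""
    transmitting = rng.random((n_slots, n_users)) < p
    return np.mean(transmitting.sum(axis=1) == 1)

n = 10

# "Always transmit" selfish strategy: everyone collides, zero throughput.
assert success_rate(n, 1.0) == 0.0

# Randomized access with p = 1/n maximizes n*p*(1-p)^(n-1), but the
# success probability saturates near 0.39 here (and 1/e for large n),
# well below the full utilization of a centralized time-division schedule.
r = success_rate(n, 1.0 / n)
assert 0.33 < r < 0.42
```

The simulation reproduces both regimes discussed in the text: total blockage under universal selfish access, and a fair but globally suboptimal throughput under randomized access.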

2.3. Intelligent networks

2.3.1. Description

The "intelligent infrastructure" approach derives from the observation (already a decade old) that current communication networks are limited by intercellular interference. At a time when multi-antenna technologies are common practice, it seems paradoxical that deployed networks are still strongly limited by the interference between cells. Indeed, Telatar [TEL 99] predicts a theoretical gain in communication throughput by a factor equal to the number of antennas at the terminal if, and only if, the SNR at the terminal is sufficiently strong (all the stronger when the number of antennas is large); when the SNR is low, no gain can be achieved compared with the single-antenna situation. When the user of a telecommunications network is situated on the border of adjacent cells, it experiences strong interference from the neighboring cells' communications in its own bandwidth. This reduces the user's SNR and makes it incapable of exploiting the degrees of spatial freedom offered by the multiple antennas. In this scenario, a modern communications network is no better than an old single-antenna network. It is, therefore, highly advisable to establish cooperation between neighboring base stations to avoid interference situations, or better still, to turn the interference into information useful to the user. In this context, the user is no longer tied to a single base station; instead, several base stations simultaneously transmit information to the user with minimal interference. Ideally, all the base stations available on Earth would be connected to support communication for all the users. However, it is obvious, and has been shown in [TSE 03], that in a mobile context the adaptation of all the base stations requires constant monitoring of the changes in all the channels of all the users.
Such monitoring requires a large amount of synchronization data to be exchanged through the network and dramatically limits the performance in terms of useful data exchange. An elementary relation between the coherence time of the communication channels and the size of the network (in terms of total number of antennas) is given in [TSE 03]: if l is the number of symbol periods during which the channel remains constant, and n and m represent the total numbers of transmit and receive antennas, respectively, then

l > n + m − 1

is required to ensure the optimal use of multi-antenna technology. It is, therefore, not possible to perform optimal multi-antenna transmission in a highly mobile network. In fact, a compromise between the cooperation of neighboring stations, which minimizes interference, and the number of autonomous cells, which limits the exchange of synchronization data, must be studied for each network. An intelligent network is, in terms of the physical layer, first of all a network capable of dynamically modifying its spatial coverage to optimize this compromise. However, the major contribution of intelligent radios to telecommunications networks is the "secondary network" approach, i.e. opportunist networks that exploit the frequencies left free by the primary network, whose QoS must be guaranteed. The secondary network, like the intelligent terminal, is capable of scanning the ongoing transmissions in the licensed bands of the primary network and of detecting the absence of transmission in a given band, so as to temporarily operate in that band. More realistically, the secondary network must be able to evaluate the distance of the primary network's users from the secondary network; if primary users are nearby, the secondary network will not take the risk of transmitting in the same band as those users. Conversely, if no primary user is at a short distance from the secondary network, the intelligent network may exploit the bandwidth that is left free locally.
The secondary intelligent network must, therefore, be able both to reliably detect the presence of mobile users in the network and to dynamically change its coverage for each frequency band. From the perspective of resource sharing, an intelligent network (henceforth considered non-interfering) must be capable of allocating resources to the mobile terminals so as to maximize a merit function for the overall network. Various classic merit functions are as follows:
– the overall data rate of the network: the network seeks to maximize the sum of the individual data rates. However, this approach does not take the criteria of individual users into account and compels some users, often the poorly conditioned ones, not to transmit any information at all, for the benefit of the network;
– the minimum bit rate per user (max–min approach): this approach, considered fair, allows each user to obtain a non-zero minimum bit rate, possibly at the cost of the overall throughput of the network;
– an alternative to the max–min is the min–max approach, which avoids the exclusive operation of the network by a limited number of well-conditioned users. All the users are not guaranteed a strictly positive data rate, but it is assumed here that a user has a null data rate only if it is really ill-conditioned.

As mentioned earlier, the centralized approach enables the optimal management of network resources; the decentralized approach, based on the intelligence of each terminal, cannot always claim this feature. These points are discussed below.
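The first two merit functions can be contrasted on a toy example: a single resource shared in time among users with different achievable rates. The rate values and the time-sharing model are illustrative assumptions.

```python
import numpy as np

# Per-user achievable rates when granted the full resource (bit/s);
# user 2 is well conditioned, user 0 poorly conditioned.
rates = np.array([1.0, 4.0, 10.0])

# Merit 1 -- overall network rate: give the whole resource to the best
# user; poorly conditioned users transmit nothing.
alloc_sum = np.zeros_like(rates)
alloc_sum[np.argmax(rates)] = 1.0

# Merit 2 -- max-min fairness: choose time shares t_i so that every
# user gets the same throughput t_i * r_i (t_i proportional to 1/r_i).
alloc_fair = (1.0 / rates) / np.sum(1.0 / rates)

throughput_sum = alloc_sum * rates
throughput_fair = alloc_fair * rates

# Sum-rate maximization starves user 0; max-min gives everyone the
# same strictly positive rate, at a cost in total network throughput.
assert throughput_sum[0] == 0.0
assert np.allclose(throughput_fair, throughput_fair[0])
assert throughput_fair.sum() < throughput_sum.sum()
```

The example makes the trade-off in the list above concrete: the fair allocation sacrifices total throughput to guarantee each user a non-zero rate.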

Figure 2.2. Intelligent networks. Intelligence is at the heart of the network, thereby efficiently exploiting the spatial dimensions (Network MIMO)

2.3.2. Advantages

First, we recall that, in line with existing communications networks, where the number of users per cell is low and the network is not very dynamic, the centralized approach yields optimal performance that is easily computable by the infrastructure. When the system size increases but the network is not highly mobile, this approach is still valid, although the computation time for optimal solutions becomes restrictive for the network. In this case, approaches based on random matrices [MAR 67] can rapidly calculate an asymptotically optimal solution (asymptotically in the sense of an increasing number of users/antennas) for the distribution of resources to each user. In [COU 10c], the calculation of the optimal sharing of network resources for a large number of antennas per user is derived. It is shown there, in particular, that the large number of degrees of freedom of the channel, across its multiple dimensions (e.g. users/antennas/mobility), can be asymptotically reduced to a finite number of parameters, at most equal to the number of users in the network. Such information is fundamental to the objective of reducing the complexity of the operations required at base-station level. It should also be noted that these methods are less expensive and more favorable than the decentralized approach, in which each user generally exploits the resources in a suboptimal manner. It is difficult to express reservations about the optimality of centralized control for such a network, which is not highly mobile, with a reasonable number of users. However, as mentioned earlier, when the number of users increases significantly or the users are highly mobile, the requests for exchanging synchronization data nullify the potential gain in throughput that the base stations can deliver. An exclusively decentralized approach seems more appropriate here; however, we have also mentioned that a very high density of users can quickly lead to highly unstable, suboptimal situations of strong competition, or even to complete blockage of the network in some circumstances. It is, therefore, not advisable to radically change a centralized network into a completely decentralized one when mobility increases. This point will be discussed later, when we consider the compromise between the intelligent network and the intelligent terminals.
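A minimal sketch of why random-matrix asymptotics reduce complexity: the eigenvalue spectrum of a large white-noise sample covariance matrix concentrates in a deterministic interval (the Marchenko–Pastur bulk) characterized by the single ratio c = K/N, so a large random object is summarized by very few parameters. The dimensions and tolerances below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def spectrum_edges(K, N):
    """Empirical min/max eigenvalues of a K x K sample covariance
    built from N white-noise observations."""
    Y = rng.standard_normal((K, N))
    eig = np.linalg.eigvalsh(Y @ Y.T / N)   # eigenvalues in ascending order
    return eig[0], eig[-1]

c = 0.25                                        # fixed ratio K/N
lo, hi = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2   # Marchenko-Pastur bulk

# As K, N grow with K/N = c, the random spectrum concentrates in the
# deterministic interval [lo, hi]: two numbers summarize the matrix.
e_min, e_max = spectrum_edges(K=200, N=800)
assert abs(e_min - lo) < 0.15 and abs(e_max - hi) < 0.15
```

This concentration is what allows the infrastructure to replace expensive per-realization optimization with calculations over a handful of deterministic parameters, as claimed for [COU 10c].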

2.3.3. Limitations

Besides the already mentioned limitations of centralized networks, namely the rapid user movements and the induced control requests that affect the transmission rate of useful data, note that centralized methods, even in a fixed network, require a huge computational capability if the available resources are diverse. Typically, a very wide offer, in terms of bandwidth range and high spatial diversity, involves a significant computational capability and often leads to the curse of dimensionality discussed above. Even though tools developed from the random matrix technique considerably reduce the computational effort when the number of antennas/users increases [COU 10c], when frequency diversity is significant [DUP 10], or when the two effects are combined, where the dimensions of the system are too large it might be more economical to leave the terminals autonomous in accessing the resources. This remark is particularly true when the available resources meet the demands of each user. The performance gain of a decentralized approach here is not about spectral efficiency but about reducing latency and computational energy. In the last part of this chapter, we discuss the conditions for an ideal compromise between an intelligent terminal and an intelligent network; these conditions echo the above points, which will be discussed in more detail.

2.4. Toward a compromise

As per the earlier discussions, it appears that certain conditions favor the strengths of intelligent terminals in self-organizing their data transmissions (either individually or collectively), while other conditions suggest that control by the fixed network infrastructure is more viable. Nevertheless, most practical situations lie at the crossroads of these two extreme positions. The question that arises is how to balance the intelligence of the fixed network and that of the mobile users. We have seen that a dense mobile network cannot be controlled entirely by the network infrastructure, for reasons of computational complexity; however, the terminals themselves run the risk, in such networks, of failing to coordinate collectively and blocking the entire network, just as certain roads of a highway network can become jammed. A division of tasks is necessary here; the basic approach is to consider high-level control by the network infrastructure to avoid the above-mentioned deadlock situations. To achieve this, instead of a point-to-point optimization, system-level analysis and control with decreasing granularity are needed. In this context, a relatively new approach for the macroscopic analysis and optimization of large mobile networks consists of assuming an infinite number (or density) of users, all moving randomly according to the same probability law. Network performance can then be studied by mean field methods, which enable a large-scale approximation of the dynamic behavior of the network in order to anticipate potential deadlocks. This approach allows the network to keep traffic conditions fluid on a large scale; the goal, once again, is to maximize a merit function that is statistical (i.e. averaged over the users) rather than deterministic (i.e. ideal for a given set of users at a fixed instant). This high-level control can be superimposed on a lower, microscopic layer where the terminals are in control and play a collaborative or selfish game to optimize a second merit function. These terminals are then free to share resources, taking advantage of their a priori knowledge of the network.
However, these terminals remain subject to the requests of the global infrastructure, whose aim is to stabilize the system over the long term. Under these conditions, the network infrastructure is organized in a manner similar to an animal brain, fully interconnected from a macroscopic point of view but highly specialized at specific points. It is worth noting, incidentally, that much can be learned from the internal structures of the brain about optimizing the transmission of information through a wide network at low cost. Recall here that evolutionary biology explains the performance of most dynamical systems of animals and plants through the quality of the compromise, "efficiency maximization/energy cost minimization", inherent to these systems. These conditions fall completely in line with the conditions of future smart and green radios. Besides this vertical dimension of task distribution, certain necessary control operations of the network infrastructure can also be delegated to the user level: the users, being numerous, are often more efficient at the rapid parallel execution of tasks in the network. Cooperation between the infrastructure and the terminals is horizontal here. It may be objected at first sight that delegating infrastructure tasks to the terminal level is rarely justified, for the following reasons: (i) the computing power and energy available at the base stations are far superior to those of the terminals; (ii) the distribution of tasks, in general, produces a suboptimal processing of these tasks, which does not justify the choice of delegation. These two points are, however, less accurate in future intelligent networks. It is particularly envisaged that microcells (the concept of outdoor femtocells), existing in large quantities – distributed, for example, in the street lamps of urban lighting – will eventually replace base stations with strong coverage. In this context, the microcells will be, at best, equipped with a battery whose performance is equivalent to that of mobile devices. It is, therefore, justifiable in this case to split complicated operations into elementary operations delegated to low-energy interfaces. The second point is often true, but certain scenarios clearly enable the isolation of parts of the computation that can be performed by each terminal, thanks to a priori information not shared by everyone. There indeed exist situations in which the information available at the terminal level is larger than that available at the network level; this typically occurs when devices make sporadic requests in the uplink (e.g. a download request for a web page), while the base station transmits a large amount of information in the downlink (e.g. the contents of the web page) – the terminal, which receives a lot of information, is able to understand its environment more quickly and more precisely than the base station. In [COU 10d], it is shown that a broad network of terminals can self-organize in such a way, through the exchange of scalar data, using a principle known to game theorists as the "gossip" principle. The situation in [COU 10d] is a group of terminals wishing to share access to a resource over multiple frequency bands; the question is how everyone should distribute the available energy over the different free bands so as to maximize the overall throughput of the network.
It is shown that this computation, if performed by the base station, requires significant knowledge at base-station level that is not necessarily accessible, whereas this information is available at the terminals. The optimization of the resource allocation, however, requires the joint knowledge of all the non-shared information spread among the terminals (and the cost of sharing this information would negate the value of the process); it is then shown that the optimization calculation can be divided into elementary tasks that each user can perform, and whose result (a single scalar, independent of the network size and of the number of transmit antennas) can be transmitted at low power through the network. The closest terminals, on receiving these scalar parameters, update their own parameters, which they transmit in turn across the network. This principle of data transmission to nearest neighbors is called "gossip" [CHA 09], for obvious reasons. The gossip algorithm is shown to converge very quickly [BAK 09], so that few data exchanges are needed to ensure that all terminals know the exact scalar parameters enabling the discovery of the individual distribution of energy over the available frequency bands. It is thus possible to consider both horizontal and vertical cooperation within a large intelligent network. The mathematical tools required to design and analyze the behavior of these networks are game theory, mean field games [LEB 08], and random matrices.
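A minimal sketch of the gossip principle (random pairwise averaging; this toy version is an assumption, not the exact algorithm of [COU 10d] or [CHA 09]): each terminal holds one local scalar, e.g. its sensed SNR, and repeatedly averages it with a random neighbor until all terminals agree on the network-wide value, with no central collector.

```python
import numpy as np

rng = np.random.default_rng(4)

# Local scalar held by each of 20 terminals (illustrative values).
x = rng.uniform(0.0, 10.0, size=20)
target = x.mean()

values = x.copy()
for _ in range(2000):
    # Random pairwise gossip: two neighbors average their values.
    i, j = rng.choice(len(values), size=2, replace=False)
    m = 0.5 * (values[i] + values[j])
    values[i] = values[j] = m

# Pairwise averaging preserves the sum, so everyone converges to the
# exact network-wide mean.
assert np.allclose(values, target)
```

The fast (geometric) shrinking of the disagreement between terminals is the convergence property invoked above via [BAK 09]; each exchanged message is a single scalar, which is what keeps the signaling cost low.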


We have not yet considered the problem of the exact distribution of network tasks: under what conditions must the network take charge of most optimization computations, and conversely, under what conditions should the terminals be left free to act autonomously? The answers depend on the network density and on the spectral dimension, already mentioned earlier and detailed next.

2.4.1. Impact of the number of users

We have already mentioned that when the number of users is small, it is preferable to place the intelligence at the network level, insofar as it can centralize information and optimize all the merit functions. Moreover, the centralized approach is the one adopted by the majority of communication standards. However, because of the growing number of terminals and the limited bandwidth, things are changing gradually. In a few years, it will no longer be conceivable to let the network take care of meeting the QoS demand of each user. At best, the fixed infrastructure will be able to manage the system macroscopically. Its tasks will then consist of:
– identification of the network structure and its key parameters. The infrastructure must be capable of abstracting the microscopic complexity of the network into a limited number of dynamic parameters representing the macroscopic state of the network at all times. Graph theory, random matrix theory, statistical physics, game theory, evolutionary game theory, and mean field game theory [LEB 08] are the main tools available to the infrastructure for this purpose. These parameters enable the tasks described below to be performed;
– a dynamic dimensioning of the macroscopic network. The infrastructure will have to manage the size of the network to better distribute the resources according to the position of the users. Note that this approach has already been used in existing communications networks during special events gathering a large number of terminals in a confined space (e.g. a sporting event within a stadium). The operations required here consist of distributing the total power of the network over the cells with a high density of users, dynamically increasing or decreasing the coverage area of each cell, increasing the number of coverage areas in the case of mutually interfering cells, etc.;
– control of chaotic events. The infrastructure will alleviate any irregular event, such as network congestion, waste of local resources, etc. As explained above, the first approach primarily consists of organizing the network in a robust manner so as to minimize situations of chaos. In a second step, in case of an irregular situation, the infrastructure should be able to correct the situation quickly, for example, through systematic regularization mechanisms when the situation is known or through intelligent adaptation when the situation is unanticipated.

Cognitive Terminals Toward Cognitive Networks


Mobile users of a large network aim to maximize their individual or collective merit functions by collaborating with a finite number of nearby mobile users. Recall that in an asymmetrical situation, in which much data is sent on the downlink and little on the uplink, the terminals have more information than the local network infrastructure, which can at best keep track of long-term evolution parameters. Local resource sharing via methods taken from game theory will then be conducted to find an optimal compromise between the performance and reactivity aspects described below:

– the performance aspect consists of establishing a cooperative strategy that maximizes a merit function for all the players. The Nash equilibrium of the game should be close (if not equal) to a Pareto-optimal equilibrium. This optimality constraint requires a minimum of information exchange between players, thus implying a reduction in the useful information throughput. In case of high user mobility, the channel coherence time may require terminals to exchange a large amount of synchronization data compared to the amount of useful data, implying a low useful bit rate;

– in case of high mobility, the focus is instead on the reactivity of the terminal. The resources available in the network are so volatile that a late decision about sharing an uplink resource leaves little time for effective communication. A minimum of information exchange will be conducted, implying the design of a game that is less effective in useful transfer rate (further away from the Pareto equilibrium) but dynamic and rapidly adaptive.

Transverse operations (horizontal and vertical) between the terminals and the network are operations of mutual control. Typically, the network infrastructure will regularly impose changes in the parameters involved in the strategies and merit functions of the terminals. These non-dedicated constraints will be broadcast over the network, again minimizing the amount of control signaling to be transmitted. The non-dedicated control condition makes it possible, besides the small amount of information to be sent on the downlink, to limit the computation at the infrastructure level (the parameters to be optimized are valid for the whole local network of terminals) and to leave a large autonomy to the mobile devices. We discuss below the impact of the spectral dimension or, more precisely, of the number of resources available in the network, in the context of a compromise between intelligent network and intelligent terminals.

2.4.2. Impact of spectral dimension

The more diversified the available resources, the more complex the optimization operations to implement. It is important to note that a large bandwidth does not, by its size alone, constitute a rich resource in the sense of "diversity".


A diversified resource is a resource that shows large fluctuations from one user to another, which is indeed often the case for a broadband channel. Each user of a mobile broadband network experiences conditions that are more or less favorable depending on the frequency, and these conditions are often completely uncorrelated from one user to another (a considerable advantage in terms of average performance). The complete computation of the resource allocation for all users of a large network thus becomes even more complex when the channel diversity (the number of independent coherence bands) is large. Such an optimization is costly, and all the more inefficient as the number of users is large and the terminals are mobile. In this case again, infrastructure and terminals can share the task of resource distribution. On the infrastructure side, constraints such as the distribution of the total power over the frequency bands will be imposed to avoid an overload of requests on a particular resource. The terminals, on their side, will make a dynamic choice of the bands to occupy. Note that the diversity of the resources available to each user tends to make even rudimentary resource-access algorithms near-optimal. Indeed, for a diversified channel, competition for resources is limited by the availability of each resource. Assume, for the sake of illustration, a situation in which a large number of frequencies are shared between the terminals, and that each resource is either of poor quality with high probability or of good quality with low probability. When a user has a good-quality resource (e.g. channel state) on a given frequency carrier, the other users most likely have good-quality resources at other positions in the bandwidth, allowing simultaneous access to the spectrum with a low probability of interference.
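As a toy illustration of this effect (not from the chapter), the following Monte Carlo sketch estimates the probability that several opportunistic users, each picking a channel among those it currently sees as good, end up choosing the same channel. The user/channel counts and the quality probability are illustrative assumptions.

```python
import random

def collision_probability(num_users=4, num_channels=64, p_good=0.1,
                          trials=2000, seed=0):
    """Monte Carlo estimate of the probability that at least two users
    pick the same channel. Each user independently sees each channel as
    'good' with probability p_good and picks uniformly among its good
    channels (falling back to a uniform pick if none is good)."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(trials):
        picks = []
        for _ in range(num_users):
            good = [c for c in range(num_channels) if rng.random() < p_good]
            picks.append(rng.choice(good if good else list(range(num_channels))))
        if len(set(picks)) < len(picks):
            collisions += 1
    return collisions / trials
```

With many channels per user (diversified resources), collisions are rare even with this rudimentary uncoordinated access rule; with few channels, collisions dominate, which is the sense in which resource diversity makes simple decentralized algorithms near-optimal.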
It is thus possible to create simple and efficient decentralized resource-sharing algorithms that require minimal control.

2.5. Conclusion

In this chapter, we presented the two opposite aspects of CR, where the intelligence is shifted either entirely to the network infrastructure or entirely to the terminals. It appears that the development of large mobile communication systems tends to increase the role of the terminals in network optimization, with respect to parameters such as useful throughput, reactivity, computational complexity, and the energy required for these operations. The results are strongly related to the mobility of users, the number of network users, the amount of information available in each network entity, the amount of energy available to each terminal, and the diversity of the resources available to the terminals. The field of game theory can provide the additional, adaptive algorithms that existing networks lack in order to make them reactive, highly mobile, and scalable. It is, however, quite unrealistic to imagine a network exclusively controlled by its terminals (assuming the presence of a fixed architecture), or that such an approach would be more effective than a hybrid network in which some processing is performed at the infrastructure level.


Figure 2.3. Future microcellular intelligent networks. In this scenario, the green base stations (powered by photovoltaic cells) coordinate their strategies, using dynamic reconfiguration, in the form of virtual MIMO networks. Each terminal is served by several base stations at once. Intelligence is distributed between the network and the terminals according to density and mobility. Interactions between entities are made through a learning process using the tools of game theory

An intermediate solution, combining horizontal and vertical cooperation between a fixed architecture in control of the large-scale network and local processing by the users for the distribution of available resources, seems to be the preferred solution for future communication technologies. Meanwhile, with the advent of communication standards incorporating such parameters, telecommunications research increasingly develops analytical solutions with concrete results through the tools of large-sized systems, such as the theory of random matrices and mean-field games, to ensure an optimal cooperation between terminal and infrastructure.

Chapter 3

Cognitive Radio Sensors

Throughout this book, we define sensors broadly as tools providing useful information to the cognitive cycle. This information is used to optimize the radio link and enhance the quality of service (QoS) provided. The sensing tools range from classic sensors, such as microphones, to cognitive sensors that provide information acquired through advanced processing techniques, for example the impulse response of the channel. Sensors are chosen according to the environment considered, as illustrated in Table 3.1. The classification of sensors presented in Table 3.2 is explained in the rest of the chapter. It is consistent with the model proposed in Figure 1.11 and introduces a non-exhaustive list of sensing tools divided according to the layer served (lower, intermediate, or higher).

3.1. Lower layer sensors

The lower layer sensors include, among others, the physical layer sensors (see Table 3.2). In this section, we focus on the physical layer sensors.

3.1.1. Hole detection sensor

This sensor is extensively studied in the literature under the name "sensing". Cognitive radio (CR) is often reduced to this sensor for detecting holes (or white spaces) in the spectrum, as already discussed in Chapter 1. CRs require that the secondary

Chapter written by Renaud SÉGUIER, Jacques PALICOT, Christophe MOY, Romain COUILLET and Mérouane DEBBAH.


network users be able to detect free spaces in the spectrum and use them in such a way that they do not interfere with the transmissions of the primary network; otherwise, the QoS required by the primary network would be significantly decreased. The problem of detecting primary network activity can be cast as a simple energy detection problem, for which various methods have existed since the 1960s [DIG 03, GAR 91, KOS 02, URK 67].

Environment       | Sensors
Electromagnetic   | Spectral occupancy; blanks/holes in the spectrum; signal-to-noise ratio; channel impulse response
Network           | Number and position of hotspots and base stations; number and positions of users; usable standards in proximity; operators and services in proximity; load on the radio link
Material          | Battery level; energy consumption; circuit utilization rate (FPGA); utilization rate of the ALU; memory utilization rate; temperature of the material
User              | Microphone, camera; identification; spatial position, velocity, time, interior/exterior; preferences; profile detection, facial recognition, voice recognition, etc.
Case study:       | Emotional state; user's temperature; blood pressure level; sugar level, etc.
medical application |

Table 3.1. Classification list (not exhaustive) of sensors according to the environment

The problem of signal detection is cast as a hypothesis test:

H0: no signal is present
H1: a signal is being transmitted                                        [3.1]

However, contrary to classical signal detection techniques, CRs are deployed in large networks, so that:
– many players potentially intervene in the process of signal detection;
– all the players in the network are mobile; this imposes new conditions on the established detection model;

Cognitive Radio Sensors

45

Model's layers (see section 1.4)                    | Sensors
Higher layer: application and MMI                   | User's profile; price per kilo-octet; operator; personal choices, etc.; sound; video; velocity; position; security
(man–machine interface)                             |
Intermediate layer: transport, network              | Vertical mobility, inter- and intra-network (see Figure 1.3); load on the radio link; services and networks in proximity
Lower layer: physical, MAC, platform                | Detection of holes/blanks; access type; received power; transmitted power; modulation type; channel coding; carrier frequency; symbol frequency; horizontal mobility; channel estimation; antenna lobe formation; consumption, material temperature

Table 3.2. Classification of sensors based on a simplified three-layer model

– each user of the primary and secondary networks is potentially equipped with multiple antennas for transmission/reception.

The model that we follow here consists of the following hypothesis test:

H0: y_k = n_k
H1: y_k = H x_k + n_k                                                    [3.2]

where y_k ∈ C^N is the vector of signals received by the combination of the N secondary users (more precisely, the cumulative total number of reception antennas) at time k; n_k ∈ C^N is the additive Gaussian noise received by the N users, regardless of the transmission hypothesis, at time k; x_k ∈ C^n is the vector transmitted by the n primary users at time k (more precisely, the cumulative number of antennas used by the primary users for transmission); and H ∈ C^{N×n} is the transmission channel, whose entries are taken to be standard Gaussian and unchanged over a fixed time duration.


The channel output is assumed to be sampled with a fixed sampling period. Gathering the received vectors y_k, we rewrite the hypothesis test in matrix form as:

H0: Y = N
H1: Y = HX + N                                                           [3.3]

where Y, N ∈ C^{N×L} have columns y_1, ..., y_L and n_1, ..., n_L, respectively, and X ∈ C^{n×L} has columns x_1, ..., x_L.

Conventional signal detection methods are often insufficient in the context of opportunistic spectrum access. We present these methods in sections 3.1.1.1, 3.1.1.2, and 3.1.1.3, and explain their shortcomings. We then present an optimal collaborative detection model in section 3.1.1.4 and compare it with the models presented in the previous sections.

3.1.1.1. Matched filtering

We first present traditional methods for the detection of non-cooperative sources (pilot-based or blind detection), then discuss the framework of new collaborative detection techniques in finite dimension, when n and N are not large. We conclude with detection techniques for large system models.

In this section, we present a pilot-aided detection technique called matched filtering. This technique assumes that X is known a priori to the receiver(s), and that the receiver knows the sampling frequency and the transmission rate. We simplify model [3.3] by taking N = n = 1 and H = 1; the vectors x_1, ..., x_L and y_1, ..., y_L then reduce to scalars x_1, ..., x_L and y_1, ..., y_L. In this setting, matched filtering maximizes the signal detection capability. It consists of evaluating:

C_mf = \sum_{k=1}^{L} x_k^* y_k                                          [3.4]

If |x_k|² = 1 for all k, then C_mf is a random variable distributed according to:

C_mf ~ N(0, σ²)  under H0
C_mf ~ N(L, σ²)  under H1                                                [3.5]

where σ² is the variance of the additive noise n_k for all k. The decision between hypotheses H0 and H1 depends on the desired error probability. In particular, it is generally required to minimize the probability of deciding H0 when H1 is the actual hypothesis (to avoid interference with the primary network). In this context, it is desirable to decide H0 only when C_mf/L ≪ 1. The matched filtering approach


requires a systematic transmission of pilot sequences from the primary network users to allow the secondary users to opportunistically access the spectrum. It also implies that the secondary network has a priori knowledge of the center frequency and the sampling rate of the primary network data. These hypotheses are highly unrealistic in the context of opportunistic radio.

3.1.1.2. Detection

In the more realistic case in which the sequences transmitted by the primary network are not known a priori to the secondary network, it is still possible to isolate intrinsic properties of the type of signals transmitted in order to identify them. This is always possible, in principle, for a telecommunications signal [BIC 86]. In particular, consider the case of orthogonal frequency division multiplexing (OFDM), which conventionally uses a large cyclic prefix and hence generates a temporal redundancy in the transmitted signal. The work of Gardner [GAR 91] mainly studies the cyclostationarity of signals, stemming from the redundancy in frequency of the transmitted signals and completely characterized by the spectral coherence function ρ^α_X(f), defined for a process X taken at frequency f for a cyclic frequency α. Various criteria of spectral coherence can be envisioned; for more detail, see [GAR 91] and [ENS 95]. Whenever the signals detected at frequency f show a cyclostationary character at cyclic frequency α, H1 is decided. If, however, no particular correlation of the signal is detected at any cyclic frequency α, then H0 is decided. A detection threshold ξ must explicitly be chosen for:

C_cyc(y_1, ..., y_L) = \sup_{\alpha} |ρ^α_{y_1,...,y_L}(f)|              [3.6]

We make the decision:

C_cyc > ξ ⇒ H1 is chosen;  C_cyc < ξ ⇒ H0 is chosen.                     [3.7]

This decision threshold is a function of the variance of the additive noise. It is, however, not trivial to exhibit the cyclostationary character of the signal to be detected. In the context of transmissions with minimum redundancy, the spectral coherence of the received signal may turn out to be very weak. Extensions of the cyclostationarity method, especially the kth-order cyclostationarity method of Dandawate and Giannakis [DAN 94], have been proposed. For all these methods, it is important to highlight that the noise variance is not accounted for in the cyclicity measures, although it clearly affects the test performance.

3.1.1.3. Energy detection

The most commonly used detection technique was introduced by Urkowitz in 1967 [URK 67]. This technique is called energy detection. It does not require any


preliminary assumptions on the transmitted signal and optimizes the decision of the hypothesis test when N = n = 1 and H = 1. In this case, the decision criterion C_ed consists of the sum:

C_ed(y_1, ..., y_L) = \frac{1}{L} \sum_{k=1}^{L} |y_k|^2                 [3.8]
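As a hedged illustration (not from the book), the following numpy sketch draws an observation matrix Y under either hypothesis of model [3.3] and evaluates the energy statistic of [3.8], extended to N > 1 via the trace of (1/L)·YY^H; the dimensions and the unit noise variance are illustrative assumptions.

```python
import numpy as np

def energy_statistic(Y):
    """Energy detector extended to N > 1: average received energy per
    sample, i.e. (1/(N*L)) * tr(Y Y^H) for an N x L observation matrix."""
    N, L = Y.shape
    return np.trace(Y @ Y.conj().T).real / (N * L)

def sample_Y(hypothesis, N=4, n=1, L=8, sigma2=1.0, rng=None):
    """Draw Y under H0 (noise only) or H1 (Y = H X + noise) with standard
    complex Gaussian H and X, and additive noise of variance sigma2."""
    rng = rng or np.random.default_rng(0)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((N, L))
                                   + 1j * rng.standard_normal((N, L)))
    if hypothesis == "H0":
        return noise
    H = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
    X = (rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))) / np.sqrt(2)
    return H @ X + noise
```

Under H0 the statistic concentrates around σ², while under H1 it is shifted upward by the received signal energy; a threshold between the two regimes implements the test.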

When C_ed is greater than a given detection level, H1 is declared. This level is again a function of the noise variance σ² and can be varied in order to minimize the probability of detection error. The assumption H = 1, however, implies that the source to be detected is in line of sight of the receiver, and the assumption N = n = 1 implies that a single source (single antenna) is detected by a single-antenna receiver. Within the framework of cognitive networks, it is highly desirable to permit cooperation among the different players of the secondary cognitive network, who generally receive a total of N ≫ 1 independent signals. Note, however, that the energy detector can trivially be generalized to the case N, n > 1 by considering, instead of the scalar sum of |y_k|², the trace of the matrix (1/L)YY^H. We will see below that this suboptimal technique provides detection capabilities comparable to the optimal methods developed next.

3.1.1.4. Collaborative detection

In the framework of collaborative cognitive networks, numerous efforts have been made to devise an optimal estimator in the context of the general model [3.3]. The optimal decision is based on the Neyman–Pearson criterion, which relies on the likelihood ratio:

C_opt(Y) = \frac{P_{H_1|Y}(Y)}{P_{H_0|Y}(Y)}                             [3.9]

where P_X(x) denotes the probability of the event x for the random variable X. A value C_opt > 1 indicates that H1 is more probable than H0, and conversely. To optimize the decision under a minimum error probability constraint, it suffices to set a level ξ such that C_opt > ξ implies that H1 is decided, while C_opt < ξ implies that H0 is decided. The exact expression of C_opt is derived in [COU 10b] for a matrix H with standard Gaussian entries, with no constraint on n and N. This modeling of H is based on maximum entropy considerations, when the receivers have no a priori knowledge of the transmission channel [JAY 03, JEY 46]. C_opt(Y) turns out to be a function of only the eigenvalues of the Gram matrix YY^H associated with Y. In the particular case where the cognitive network seeks to detect a single source, i.e. n = 1, if z_1, ..., z_N denote the N eigenvalues of YY^H, we get:

C_opt(Y) = \frac{1}{N} \sum_{l=1}^{N} \frac{\sigma^{2(N+L-1)} e^{\sigma^2 + z_l/\sigma^2}}{\prod_{i=1,\, i \neq l}^{N} (z_l - z_i)} \, J_{N-L-1}(\sigma^2, z_l)   [3.10]

where:

J_k(x, y) = \int_{x}^{+\infty} t^k e^{-t - y/t} \, dt.                   [3.11]

C_opt requires a priori knowledge of σ², which in practice means knowing the statistics of the additive noise. This constraint is generally unrealistic. It is, however, possible to generalize C_opt to the case where σ² is not known a priori. C_opt then becomes:

C_opt(Y) = \frac{\int_{\sigma_-^2}^{\sigma_+^2} P_{Y|\sigma^2,H_1}(Y, \sigma^2)\, d\sigma^2}{\int_{\sigma_-^2}^{\sigma_+^2} P_{Y|\sigma^2,H_0}(Y, \sigma^2)\, d\sigma^2}   [3.12]

where σ_-² and σ_+² are such that σ² ∈ [σ_-², σ_+²]. While explicit expressions for P_{Y|σ²,H1}(Y, σ²) and P_{Y|σ²,H0}(Y, σ²) are detailed in [COU 10b], no explicit computation of C_opt is possible, and only numerical methods can be used for the evaluation of [3.12].

The performance of the optimal technique for N = 4, n = 1 with σ² known a priori is compared in Figure 3.1 with Urkowitz's conventional energy detector extended to the case N > 1 (a natural extension obtained by summing, instead of the squares of the received scalars, the trace of the matrix YY^H received at the secondary receivers). The energy detector technique is clearly suboptimal, but it nonetheless shows only a small performance degradation compared with the optimal detector. Our calculations therefore suggest that this natural extension of the energy detector method is a convenient and less onerous substitute for the optimal detection method when N > 1, which is by far more complex than Urkowitz's original solution. The practical calculation of C_opt may be particularly prohibitive if a large number of frequency bands are to be tested. Moreover, no natural extension of Urkowitz's method exists for the case where σ² is not known a priori. Suboptimal techniques that do not require a priori knowledge of σ² are hence considered; such techniques are much easier to implement. When N becomes extremely large, the random matrix domain provides simple solutions, described below.


Figure 3.1. ROC curve for n = 1, N = 4, L = 8, SNR = −3 dB

When N and L increase simultaneously toward infinity with 0 < c = N/L < ∞:

– under hypothesis H0, the distribution of the eigenvalues of (1/L)YY^H converges almost surely to the Marcenko–Pastur law, which has the compact support [σ²(1 − √c)², σ²(1 + √c)²] [BAI 98];

– under hypothesis H1, by contrast, if n is finite, the eigenvalue distribution of YY^H shows a finite number of eigenvalues outside the support [σ²(1 − √c)², σ²(1 + √c)²]. This effect is shown in Figure 3.2 for n = 4. In the particular case where n = 1 and σ² + \sum_{k=1}^{N} |H_{k1}|² > 1 + √c, the maximum eigenvalue of (1/L)YY^H almost surely tends toward α + cα/(α − 1), where α = σ² + \sum_{k=1}^{N} |H_{k1}|². If α < 1 + √c, it is not possible to identify the presence of the transmitting source in the frequency band under consideration [SIL 10]. This condition gives a first answer to the fundamental limits of signal detection in large-dimensional systems. We assume here α > 1 + √c, which can be satisfied in practice by sampling the channel L times, such that c = N/L is sufficiently small.

Three simple criteria to decide the presence of the signal are:

– to determine the presence or absence of eigenvalues outside the support [σ²(1 − √c)², σ²(1 + √c)²]. If no eigenvalue lies outside the support of the Marcenko–Pastur law, H0 is decided, otherwise H1;

– when only a single source is transmitting, a more refined criterion consists of comparing the extreme eigenvalue of (1/L)YY^H with the two possible values


Figure 3.2. Distribution of the eigenvalues of a large-dimensional system under hypothesis H1, n = 4

σ²(1 + √c)² and α = σ² + \sum_{k=1}^{N} |H_{k1}|². A deep knowledge of the large deviation statistics of the extreme eigenvalues can then yield an exact decision criterion: under the null hypothesis H0, the maximum eigenvalue (properly normalized) follows the Tracy–Widom law [JOH 01], whereas under H1 it follows a Gaussian law;

– another criterion is the condition number of (1/L)YY^H, which makes it possible to avoid a priori knowledge of the noise variance σ² [CAR 08]. Denoting the minimum and maximum eigenvalues of (1/L)YY^H by λ_min and λ_max, respectively, asymptotically:

C_cn = \frac{\lambda_{\max}}{\lambda_{\min}} = \frac{\sigma^2 (1 + \sqrt{c})^2}{\sigma^2 (1 - \sqrt{c})^2} = \frac{(1 + \sqrt{c})^2}{(1 - \sqrt{c})^2}   [3.13]

which no longer depends on σ². It then suffices to select a decision level ξ minimizing the detection errors, i.e. to decide H1 when C_cn > ξ and H0 otherwise. The most attractive feature of this latter technique is that it does not require a priori knowledge of σ². In Figure 3.3, the ROC curve of this method for finite N is compared with the large-N method for N = 4, n = 1; the finite-N method clearly outperforms the asymptotic method. We can thus think of extending the asymptotic method through a more precise study of the large deviations of λ_max and λ_min.
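The condition-number criterion of [3.13] can be sketched as follows (an illustrative implementation, not the authors' code; the safety margin on the asymptotic Marcenko–Pastur ratio is an assumption chosen here to absorb finite-size fluctuations).

```python
import numpy as np

def condition_number_test(Y, margin=1.3):
    """Blind test based on [3.13]: decide H1 when the condition number of
    (1/L) Y Y^H exceeds the asymptotic Marcenko-Pastur ratio
    (1 + sqrt(c))^2 / (1 - sqrt(c))^2 (times a safety margin), c = N/L.
    No knowledge of the noise variance sigma^2 is required."""
    N, L = Y.shape
    c = N / L
    eig = np.linalg.eigvalsh(Y @ Y.conj().T / L)  # sorted ascending
    ccn = eig[-1] / eig[0]
    threshold = margin * (1 + np.sqrt(c)) ** 2 / (1 - np.sqrt(c)) ** 2
    return "H1" if ccn > threshold else "H0"
```

Because both λ_max and λ_min scale with σ², the ratio is scale-free, which is exactly why this test needs no noise calibration.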


A natural extension is given by the technique known as the generalized likelihood ratio test (GLRT). This technique is based on the relationship:

C_GLRT(Y) = \frac{\sup_{H,\sigma^2} P_{Y|H,\sigma^2}(Y)}{\sup_{\sigma^2} P_{Y|\sigma^2}(Y)}   [3.14]

instead of the optimal relation [3.9]. Intuitively, this approach isolates the pair (H, σ²) that is most appropriate if H1 is the correct hypothesis, isolates the value of σ² that is most likely if H0 is the correct hypothesis, and takes the ratio of the probabilities of these two hypotheses. The method is highly suboptimal in the sense that it systematically eliminates all hypotheses less probable than the most probable one. Typically, even if a large number of channels H show a high probability P_{Y|H,σ²}(Y), only the maximizing hypothesis (and hence a unique H) is retained in the new relationship C_GLRT. Detailed calculation of [3.14], however, leads to a relatively simple decision criterion in the case n = 1 [BIA 09]:

C_GLRT(Y) = \frac{\sup_{H,\sigma^2} P_{H_1|Y,H,\sigma^2}(Y)}{\sup_{\sigma^2} P_{H_0|Y,\sigma^2}(Y)}   [3.15]

= \left( \frac{\max_i z_i}{\frac{1}{N}\sum_{i=1}^{N} z_i} \left[ \frac{N}{N-1} \left( 1 - \frac{\max_i z_i}{\sum_{i=1}^{N} z_i} \right) \right]^{N-1} \right)^{-L}   [3.16]

where z_1, ..., z_N are the eigenvalues of the matrix (1/L)YY^H. Precise details on how to adjust the H1 decision threshold can be found in [BIA 09]. The performance of the generalized maximum likelihood detector is shown in Figure 3.3. The GLRT does not require a priori knowledge of the additive noise variance σ² and shows better performance than the other detectors, as it takes advantage of the conditioning of (1/L)YY^H. This solution thus provides an adequate substitute for the optimal detector when σ² is not known a priori to the secondary receivers of the cognitive network.

To sum up, the matrix model [3.3], more general than the scalar models used in conventional techniques, presents a mathematical challenge due to the complexity of the optimal solutions obtained by explicit calculation. From a practical point of view, many suboptimal techniques requiring little a priori knowledge show a performance close to that of the optimal Bayesian detector. A first barrier has thus been overcome, allowing inexpensive source detectors to be integrated in CRs that require a priori knowledge of neither the transmission channel nor the signal-to-noise ratio. However, it is conceivable that some external information on the channel may be known a priori to certain users, and this knowledge must be integrated into


model [3.3]; in such a situation, a new model must be set up, for which new optimal Bayesian calculations must be carried out.
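A hedged numerical sketch of the GLRT statistic [3.16] follows (illustrative code, not taken from [BIA 09]; the statistic is computed in the log domain, an implementation choice made here to avoid overflow for large L).

```python
import numpy as np

def glrt_log_statistic(Y):
    """Logarithm of the GLRT statistic [3.16] for n = 1, built from the
    eigenvalues z_1..z_N of (1/L) Y Y^H. Only eigenvalue ratios enter,
    so the unknown noise variance sigma^2 cancels out. The value is
    >= 0 and grows as the largest eigenvalue detaches from the bulk."""
    N, L = Y.shape
    z = np.linalg.eigvalsh(Y @ Y.conj().T / L)
    zmax, zsum = z[-1], z.sum()
    inner = (zmax / (zsum / N)) * ((N / (N - 1)) * (1 - zmax / zsum)) ** (N - 1)
    return -L * np.log(inner)
```

Deciding H1 when the log statistic exceeds a threshold log ξ is equivalent to the test C_GLRT > ξ of the text.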

Figure 3.3. ROC curve where σ² is unknown a priori: finite-N method, asymptotic method, n = 1, N = 4, L = 8, SNR = 0 dB

In addition, the detection capabilities mentioned above imply that the secondary users must be able to share their signals cleanly and quickly so as not to interfere with the primary network. If the primary network is dense and occupies a lot of resources, it seems difficult to imagine an information exchange scenario between secondary network users that does not affect the primary network at all. In such a situation, the effective number of users in the secondary network must be restricted. Similarly, if the primary network is sparse but the secondary network contains a large number of users, the enormous amount of information to be passed through the secondary network is unlikely to be exchanged without disturbing primary network communications. In such cases, a restriction is imposed on the amount of information exchanged and/or on the total number of opportunistic users. Future research in CR hole detection will require more realistic models that take network density constraints into account.

3.1.2. Other sensors

3.1.2.1. Recognition of channel bandwidth

In 2001, it was shown in [PAL 01, ROL 01] that the channel bandwidth (BWc) of a given standard is completely discriminatory among the commercial wireless standards existing at the time (2G, 3G, digital broadcast, and wireless local area networks (WLAN)). The authors used a radial basis function (RBF) neural network to


compare the power spectral densities (PSDs) of the received signals with the reference PSDs, given by equation [3.17], of the different standards to be identified. The bandwidth pattern is determined by the shaping filter and the bandwidth; this pattern is specific to each standard and can thus be recognized by an RBF-type neural network, as illustrated in Figure 3.4. The advantage of a neural network in the search for the BWc is that it performs pattern recognition on the spectrum of the signals: it therefore takes into account the bandwidth, modulation, and shaping filter parameters, and it is robust to perturbations caused by the transmission channel.

Figure 3.4. Spectral pattern recognition on the PSD of the signal, using an RBF neural network

γ_ref(k) = |F_ems(\frac{f_p}{f_e} - k)|^2 \, γ_mod(\frac{f_p}{f_e} - k)   [3.17]

where γ_ref is the PSD of the reference signal, F_ems the shaping filter (e.g. root Nyquist) of the modulated carrier p of the standard s under consideration, and γ_mod(f_p/f_e − k) the PSD of the modulation of carrier p of standard s. The received multistandard signal (also called the composite signal), sampled at frequency f_e, is given by equation [3.18]:

x(kT_e) = \sum_{s=1}^{S} \sum_{p=1}^{P} h_{s,p}(kT_e) * F_ems(kT_e) * m_{s,p}(kT_e) \, e^{2j\pi k f_p / f_e} + b_T(kT_e)   [3.18]

where F_ems is the shaping filter, m_{s,p}(t) the modulation on carrier p, and h_{s,p}(kT_e) the channel response of carrier p of standard s. The PSD of this signal is obtained by using an averaged periodogram. This spectrum therefore contains


a sum of P channels, each of which is the product of the spectral density of a modulation and its shaping filter, multiplied by the frequency perturbations of the channel. Using the neural network of Figure 3.4, this real spectrum is compared with a base of reference spectra given by equation [3.17]. An indoor test campaign was conducted on real signals (fixed and mobile). The PSDs of these signals were obtained by means of an averaged periodogram with eight fast Fourier transforms (FFTs). The neuron threshold was fixed so as to maximize the rate of correct detections on the one hand and to minimize that of false detections on the other. Figure 3.5 shows that, for the Global System for Mobile Communications (GSM) neuron, a threshold of 0.002 optimizes the distance between the desired correct detections and the false detections of other systems.
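The averaged periodogram used above can be sketched as follows (an illustrative stand-in: the segment length, and the nearest-reference matching used in place of the RBF network, are assumptions made here).

```python
import numpy as np

def averaged_periodogram(x, n_fft=256, n_segments=8):
    """PSD estimate obtained by averaging the periodograms of n_segments
    non-overlapping blocks of n_fft samples (the test campaign above
    averages eight FFTs)."""
    blocks = np.asarray(x)[: n_fft * n_segments].reshape(n_segments, n_fft)
    return np.mean(np.abs(np.fft.fft(blocks, axis=1)) ** 2, axis=0) / n_fft

def closest_standard(psd, references):
    """Crude nearest-reference matching, as a stand-in for the RBF network
    of Figure 3.4: 'references' maps a standard name to its reference PSD
    (equation [3.17])."""
    return min(references, key=lambda name: np.linalg.norm(psd - references[name]))
```

Averaging over several blocks reduces the variance of the raw periodogram, which stabilizes the spectral pattern presented to the classifier.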

Figure 3.5. Percentage of correct detections of the channel bandwidth recognition sensor according to the threshold for the GSM neuron [ROL 01], and percentage of false detections of the GSM neuron when the received signals are CT2 or PHS. The optimal threshold is the threshold that maximizes distance between correct detections and false detections
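As an illustration, the pattern matching performed by one RBF neuron per standard can be sketched as follows. This is a minimal sketch: the reference patterns, standard names, and numerical values are placeholders, not the actual PSDs of equation [3.17] or the thresholds of Figure 3.5.

```python
import numpy as np

def rbf_neuron(psd, ref_psd, sigma=0.5):
    """Gaussian RBF activation: close to 1 when the measured PSD matches
    the reference spectral pattern, close to 0 otherwise."""
    psd = psd / np.linalg.norm(psd)          # normalize out the received power
    ref = ref_psd / np.linalg.norm(ref_psd)
    return np.exp(-np.sum((psd - ref) ** 2) / (2 * sigma ** 2))

def detect_standards(psd, references, threshold=0.002):
    """Fire every neuron and keep the standards whose activation exceeds
    the detection threshold (cf. the threshold choice of Figure 3.5)."""
    scores = {name: rbf_neuron(psd, ref) for name, ref in references.items()}
    detected = {name for name, s in scores.items() if s > threshold}
    return detected, scores

# Toy reference patterns: two standards with different channel bandwidths
n = 128
gsm_like = np.zeros(n); gsm_like[10:20] = 1.0   # narrow bandwidth pattern
ct2_like = np.zeros(n); ct2_like[10:40] = 1.0   # wider bandwidth pattern
refs = {"GSM-like": gsm_like, "CT2-like": ct2_like}

detected, scores = detect_standards(gsm_like + 0.01, refs)
```

The neuron whose reference pattern matches the measured spectrum produces the highest activation; in practice the threshold is tuned per neuron, as in Figure 3.5.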

3.1.2.2. Single- and multicarrier detection

Certain standards with very similar PSD shapes, for example digital audio broadcasting (DAB) and digital-enhanced cordless telecommunications (DECT) signals, do not provide satisfactory results with the sensor presented in [PAL 01]. These signals differ by the fact that they are single- or multicarrier. This characteristic can be detected by identifying the guard interval (GI) present in multicarrier signals. The GI is generally created by copying a part of the end of the OFDM symbol and appending it to the beginning of that symbol. This copy generates a particular cyclic frequency that can be detected (see section 3.1.1.2). An example of this detection is given in Figure 3.6.
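The GI-induced correlation can be exposed by correlating the signal with a copy of itself delayed by the useful symbol duration. The sketch below is illustrative only (signal parameters, noise level, and the 0.6 threshold are arbitrary choices): it builds a toy cyclic-prefixed multicarrier signal and computes a normalized delayed-correlation metric that peaks when a GI is present.

```python
import numpy as np

rng = np.random.default_rng(0)
Nu, Ng, n_sym = 256, 32, 20            # useful part, guard interval, symbols

def make_ofdm(rng, Nu, Ng, n_sym, noise=0.01):
    """Toy baseband OFDM signal: QPSK carriers plus a cyclic-prefix GI."""
    syms = []
    for _ in range(n_sym):
        spec = rng.choice([-1.0, 1.0], Nu) + 1j * rng.choice([-1.0, 1.0], Nu)
        t = np.fft.ifft(spec)
        syms.append(np.concatenate([t[-Ng:], t]))   # GI = copy of symbol end
    x = np.concatenate(syms)
    return x + noise * (rng.standard_normal(x.size) + 1j * rng.standard_normal(x.size))

def gi_metric(x, Nu, Ng):
    """Correlation between the signal and itself delayed by Nu, averaged
    over the guard duration and normalized to [0, 1]."""
    c = x[:-Nu] * np.conj(x[Nu:])
    num = np.abs(np.convolve(c, np.ones(Ng), mode="valid")) / Ng
    den = np.convolve(np.abs(x[:-Nu]) ** 2 + np.abs(x[Nu:]) ** 2,
                      np.ones(Ng), mode="valid") / (2 * Ng)
    return num / den

ofdm = make_ofdm(rng, Nu, Ng, n_sym)
noise_only = 0.1 * (rng.standard_normal(ofdm.size) + 1j * rng.standard_normal(ofdm.size))
multicarrier = gi_metric(ofdm, Nu, Ng).max() > 0.6   # empirical toy threshold
```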


Figure 3.6. Detection of the GI of an OFDM signal (OFDM symbol 2K; GI/Tu = 1/6)

3.1.2.3. Detection of spread spectrum type

With the emergence of new IEEE standards, discrimination with the two previous sensors is no longer sufficient. In particular, it is not possible to discriminate Bluetooth and WiFi (IEEE 802.11b) at 2.4 GHz. Indeed, these two standards can coexist in the same band and, at the same time, their spectra are identical. Yet, they differ by the type of spread spectrum used (frequency hopping for Bluetooth and direct sequence for WiFi). This is the reason behind the proposal of this new sensor. A previous study proposed to make this discrimination after a time/frequency transform [GAN 04, GAN 05]. A simpler solution was proposed in [LOP 09]. This solution uses a Choi–Williams transform (see Figure 3.7) followed by a segmentation. Three measurements are then taken on the resulting image: the length of the time segments, the width of the frequency segments, and the interval between two time segments. From these three measurements, we can determine whether frequency hopping is present in the signal or not.

Figure 3.7. Detection of presence of FH: Choi–Williams transform on the received signal; illustration of a frequency hopping with four frequencies

3.1.2.4. Other sensors of the lower layer

Depending on the required level of environment knowledge and on application needs, it is possible to use information coming from various other sensors. Depending on the desired degree of equipment independence and on the a priori knowledge of the physical layer parameters considered1, the number and type of sensors could belong to the following (non-exhaustive) list:
– detection of carrier frequencies and symbol rate;
– recognition of modulation type;
– recognition of coding (convolutional coding, block coding, space-time coding, etc.);
– recognition of access type;
– detection of synchronizations.

1 In military applications of the interception type, this level of knowledge is practically null and void.
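A possible, deliberately simplified implementation of the FH/DS discrimination can be sketched on a thresholded time-frequency image standing in for a true Choi–Williams transform. The burst segmentation and the decision rule below are naive placeholders for the measurements described above.

```python
import numpy as np

def bursts(tf):
    """Segment a binary time-frequency map (freq bins x time slots) into
    bursts; return their center frequencies, bandwidths and durations."""
    centers, widths, lengths = [], [], []
    t, T = 0, tf.shape[1]
    while t < T:
        active = np.flatnonzero(tf[:, t])
        if active.size == 0:
            t += 1
            continue
        c, start = active.mean(), t
        while t < T:
            a = np.flatnonzero(tf[:, t])
            if a.size == 0 or abs(a.mean() - c) > 1:
                break                     # center frequency moved: burst ends
            t += 1
        centers.append(c)
        widths.append(active.size)        # bandwidth measured at burst start
        lengths.append(t - start)
    return centers, widths, lengths

def is_frequency_hopping(tf):
    """FH: several narrowband bursts whose center frequency changes."""
    centers, widths, _ = bursts(tf)
    hops = sum(abs(b - a) > 1 for a, b in zip(centers, centers[1:]))
    return hops > 0 and np.mean(widths) < tf.shape[0] / 2

# Synthetic maps standing in for thresholded Choi-Williams images
fh_map = np.zeros((64, 40))
for i, row in enumerate([10, 30, 50, 20]):       # four hop frequencies
    fh_map[row:row + 3, i * 10:(i + 1) * 10] = 1
ds_map = np.zeros((64, 40))
ds_map[20:45, :] = 1                             # one wide, continuous band
```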

3.2. Intermediate layer sensors

3.2.1. Introduction

A CR terminal, when placed in an environment characterized by heterogeneous and multiple access networks, is initially aware neither of the radio access technologies (RATs) that are accessible to it, nor of the RAT and frequency band best suited to its needs. The CR terminal also faces situations in which approaches such as dynamic spectrum allocation (DSA) and flexible spectrum management (FSM) are adopted by regulators (see Chapter 1, section on spectrum management). In the latter case, the terminal is completely unaware of the spectral distribution. Without any a priori information, the terminal needs to scan a significant portion of the spectrum to recognize the RATs existing in its environment and to be able to launch communications. To avoid this search, various solutions have been proposed. Some are based on the use of a common carrier accessible from anywhere, like the cognitive pilot channel (CPC; see section 3.2.2); others are based on geolocalization (see section 3.2.3), or on more futuristic blind spectrum recognition (see section 3.2.4).


Figure 3.8. Selection procedure of RAT by using CPC (flowchart: activation of the radio terminal → determination of the geographical location → reading of the CPC → extraction of the information corresponding to the network where the terminal is present → selection of the RAT)

3.2.2. Cognitive pilot channel

The CPC is a recent concept in the CR domain [COR 06, HOU 06, PER 07]. As indicated by its name, it designates a radio channel through which a CR terminal can retrieve, for the place where it is located, the pertinent information regarding frequency band allocation, RATs, services, etc. The notion of CPC was introduced to let the terminal retrieve this information in a simple and fast manner. When the radio terminal is turned on, it first determines its geographical position, as shown in Figure 3.8. In this


context, positioning is related to the geolocalization method presented in section 3.2.3. Positioning then allows the terminal to decode the contents of the CPC. The terminal can hence determine all the networks available to it. For each network, it can identify all the available operators, their preferred RATs, and thus their frequency bands. The simplification of the RAT selection procedure using the CPC has several advantages:
1) reduced acquisition time for a communication network;
2) reduced battery energy consumption;
3) ease of deployment of new spectrum management approaches such as DSA/FSM.
A lot of effort, however, remains to be made before this concept materializes. The main difficulties are the following:
– the definition of the frequency of the CPC channel. The frequency can be defined on bands dedicated to each country or even to each region, in the sense of the International Telecommunication Union's (ITU) regions (in-band solution), or on a single frequency for everyone (out-band solution). This solution is still under study in the European Project E3 and has been proposed to various standardization organizations;
– the operators must agree to share information currently protected due to market competition.

3.2.3. Localization-based identification

The solution presented hereafter is based on the fact that, in every geographical location, there exists a known set of standards available to the user. It is thus necessary to precisely locate the equipment and to associate with its geographical position a list of standards stored in its database.

3.2.3.1. Geographical location-based systems synthesis

The available systems depend on the geographical location of the terminal at the time of transmission. The list of these usable systems can easily be drawn up from the frequency maps of any geographical region. Unfortunately, the list varies as the terminal moves. Each standard is normalized for a certain defined geographic region. It can, however, cover various geographical entities such as a group of countries (European Union), a single country (France), a radius around an antenna, or even the entire world. It should be noted that the definition of a standard in a region does not mean that the receiver will have the right to communicate at any point in this region. This requires an adjustment of the geographical region to


the user rights. For certain proximity systems whose frequency map is nevertheless continental, like the DECT, it is preferable to limit the usage area to a very localized coverage. We can divide the standards according to their utilization areas (see Figure 3.9):
– Global coverage: normally, it does not require GPS localization, although there are exceptions. The standards included in this category are the universal mobile telecommunication system (UMTS) and S-UMTS.
– Continental coverage (as defined by the ITU): this kind of coverage is relatively easy to manage. The communications follow the time zones. Yet again, there are exceptions that may complicate management. The standards included in this category are GSM, IS95, PDC (personal digital cellular), DAB, etc.
– Regional/national coverage: the contour must be managed, which is the main difficulty in database management. The standards included in this category are Radiocom2000, DVB-T, FM radio, etc.
– Local coverage: it is relatively easy to manage if the transmitter center is known and if the terrain does not play a major role (mountains and buildings). This category consists of standards such as DECT, PHS (personal handy phone system), WLAN, and other local area networks. The frequencies of these standards are allocated by continental region but, in practice, the user will have only partial rights within very limited places. For this coverage, a manual input indicating the location of the transmitter and its coverage radius is often preferable.

Figure 3.9. Examples of possible coverage


The knowledge of the frequency alone is not sufficient to distinguish all the systems. For terrestrial radio transmissions, however, the knowledge of the place where the equipment is located completely determines the set of systems (and hence the frequencies) that can be used for transmission. Embedding geolocalization (e.g. GPS) in a receiver, associated with a concise table of frequency allocations according to geographic location (see Figure 3.10), makes it possible for a mobile to have a permanent knowledge of the systems it will be able to connect with. This table can be stored either in the user's customized card or in the terminal's memory. In both cases, it must be reprogrammable. Roaming is thus facilitated by this knowledge of the other networks.
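Such a position-to-standards table can be sketched as follows. All region boxes, radii, and standard lists below are invented placeholders for illustration; a real database would follow the frequency maps discussed above.

```python
import math

# Hypothetical frequency-allocation table: continental/national areas use
# bounding boxes; local systems (DECT, WLAN, etc.) use a transmitter
# center and a coverage radius in km.
REGIONS = [
    {"name": "EU",     "box": (35.0, 71.0, -10.0, 40.0), "standards": ["GSM", "DAB", "DECT"]},
    {"name": "France", "box": (41.0, 51.5, -5.0, 9.5),   "standards": ["DVB-T", "FM"]},
]
LOCAL = [
    {"name": "office WLAN", "center": (48.85, 2.35), "radius_km": 0.2, "standards": ["WLAN"]},
]

def _in_box(lat, lon, box):
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def _dist_km(p, q):
    # small-distance equirectangular approximation, enough for local coverage
    klat = 111.0
    klon = 111.0 * math.cos(math.radians((p[0] + q[0]) / 2))
    return math.hypot((p[0] - q[0]) * klat, (p[1] - q[1]) * klon)

def available_standards(lat, lon):
    """Return the set of standards the terminal may try at this position."""
    out = set()
    for r in REGIONS:
        if _in_box(lat, lon, r["box"]):
            out.update(r["standards"])
    for s in LOCAL:
        if _dist_km((lat, lon), s["center"]) <= s["radius_km"]:
            out.update(s["standards"])
    return out
```

A lookup at the terminal's current GPS position then returns the candidate systems without any spectrum scan.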

Figure 3.10. Receiver architecture

3.2.3.2. Rights of database use and update

Many solutions are available to manage the database and to verify the user's access rights to a particular network. These solutions fall into two general cases: in the first, the device is free to choose its operator, whereas in the second the device is attached to the service provider's own network or is subjected to the agreements between operators. Two conflicting logics, however, may arise: that of the operator, which favors its own network, and that of the user, who according to his/her criteria of cost or bit rate would like to have a wide range of possibilities. If the device operates in single-operator mode, the number of standards available in one place could be reduced to about 10 (DECT, GSM900, DCS1800, the radio and TV frequencies of various countries and some local area networks useful for the user, the Globalstar system, UMTS, and S-UMTS). These standards are international and can thus be included in the databases in a straightforward manner. This configuration limits the usefulness of the process, as it is the operator that provides the information to update the


database and access rights. If, however, the device can be used in multioperator mode, the question of the database update and access rights does not arise in the same way. For example, a service provider can put at the disposal of all users an Internet server containing the frequencies and mapping of the existing standards. Access rights then come into play at the time of network entry, and payment is made by credit card. Database management is the most subtle point of this method. Indeed, for the method to be effective, the equipment's database must be constantly updated. This can be done in several ways, listed below:
– A download over the air, also called over-the-air reconfiguration (OTAR) [KOU 02]. Reconfiguration by downloading via the aerial route allows a terminal to download code (binary/bitstream for a processor/field programmable gate array, FPGA), regardless of where it is located, to change all or part of its radio processing. The medium used for the download is a radio link of the terminal itself. In the case of an ongoing service (a communication), the data are multiplexed with the code to be downloaded.
– A download via the SIM card of the user. The user recharges or exchanges his/her card on a regular basis. His/her rights of use are also updated according to the location.
– A download via a network, identical in operation to OTAR.
– Manual configuration, which is perhaps useful to allow modifications in unforeseen situations.
The management of the coverage is surely the crucial point of the system. Optimization of the management style and of the size of the database is the heart of the problem to be solved. The system cartography must take up the smallest possible amount of space in the database. Nevertheless, it is greatly simplified if we consider that, practically, only four types of coverage exist (see section 3.2.3.1).
If CPC or database-oriented approaches are not adopted or do not exist in a place, stand-alone techniques can be used. The idea is to identify all the standards present in an area without connecting to these standards (thus avoiding standard switching and network connection). The next section presents a blind standard recognition sensor (BSRS) for this context.

3.2.4. Blind standard recognition sensor

3.2.4.1. General description

The BSRS analyzes the received signal in three stages, as shown in Figure 3.11. In the first stage, the received broadband signal is analyzed in a coarse manner (e.g. using a radiometer) in order to determine which frequency bands contain significant energy. This analysis is performed iteratively in order to select increasingly narrow frequency bands. In the second stage, a very precise analysis of the selected


bands is performed. This analysis provides access to information such as the BWc, the distinction between single- and multicarrier signals, the type of spread spectrum, etc. It is performed with several sensors of the lower layer. Finally, in the third stage, a fusion of all the information obtained during the second stage (see section 3.1) is performed, which makes it possible to decide which standards are present in the spectrum.

Figure 3.11. Blind standard recognition sensor [HAC 07]

3.2.4.2. Stage 1: band adaptation

The frequency bandwidth sampled in SR technology (see Chapter 7) is very large. As a result, efficient functioning of the second-stage sensors is extremely difficult with existing signal processing tools. That is why an adaptation of the bandwidth to be analyzed is performed during this first stage. This adaptation uses a classical energy detection followed by filtering and decimation around the detected energy peaks. It is performed in an iterative manner in order to end up analyzing bands a few MHz wide.

3.2.4.3. Stage 2: analysis with lower layer sensors

After studying the discriminating characteristics of the parameters of the various standards considered, three sensors were selected to identify the received signal within a predefined list of standards. These three sensors, described in section 3.1, are the BWc recognition of a standard, the single- and multicarrier detection, and the detection of the spread spectrum type (frequency hopping or direct sequence). The list of sensors can be extended to other characteristics if these three sensors fail for certain standards or to differentiate between future standards.
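The iterative band adaptation of stage 1 can be sketched purely in the PSD domain. This is an illustrative stand-in: a real implementation would filter and decimate the time signal at each iteration rather than index into a PSD estimate.

```python
import numpy as np

def localize_energy(psd, fs, n_iter=3, zoom=4):
    """Iteratively zoom on the most energetic sub-band of a PSD estimated
    over [-fs/2, fs/2]; returns the center frequency and the width of the
    retained band after n_iter zoom steps."""
    n = len(psd)
    a, b = 0, n                        # current band, in PSD bin indices
    for _ in range(n_iter):
        edges = np.linspace(a, b, zoom + 1, dtype=int)
        energies = [psd[i:j].sum() for i, j in zip(edges[:-1], edges[1:])]
        k = int(np.argmax(energies))   # keep the most energetic sub-band
        a, b = edges[k], edges[k + 1]
    f_lo = -fs / 2 + fs * a / n
    f_hi = -fs / 2 + fs * b / n
    return (f_lo + f_hi) / 2, f_hi - f_lo

# A flat noise floor with a narrowband emission around bin 600
psd = np.ones(1024)
psd[600:610] += 100.0
fc, bw = localize_energy(psd, fs=100e6)
```

After three zoom steps, the analyzed band has shrunk by a factor of 64, bringing it within reach of the stage-2 sensors.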


3.2.4.4. Stage 3: fusion

At the end of the second stage, three information sets are obtained. These are merged to decide which standard is present. The fusion is performed with logical rules or by using more powerful tools such as neural networks or Bayesian networks.

3.2.5. Comparison of the three sensors for standard recognition

Methods                               CPC                   LBI        BSRS
Need for a service provider           Yes                   Yes        No
Content level (1)                     High                  Medium     Low
Dependence on the radio coverage (2)  Yes                   No         No
Computation complexity                Very low              Medium     Very high
Need for normalization                Yes                   Yes        No
Spectrum utilization                  Yes                   Yes        No
Dependence on the operator            Yes                   Yes        No
Need for an additional radio link     Yes (the CPC itself)  Yes (GPS)  No

Table 3.3. Sensor comparison for standard recognition
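A toy rule-based fusion of the three sensor outputs might look as follows. The rules and standard names are purely illustrative; a real fusion stage would weigh soft sensor outputs, e.g. with a Bayesian network.

```python
def fuse(bw_match, multicarrier, spread):
    """bw_match: standard suggested by the bandwidth sensor (or None);
    multicarrier: True if a guard interval was detected;
    spread: 'FH', 'DS' or None (spread-spectrum sensor output)."""
    if multicarrier:
        # multicarrier resolves DAB/DECT-like ambiguities of the BWc sensor
        return bw_match if bw_match == "DAB" else "OFDM-based standard"
    if spread == "FH":
        return "Bluetooth"
    if spread == "DS":
        return "IEEE 802.11b"
    return bw_match or "unknown"
```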

Table 3.3 clearly shows that the BSRS, with the exception of the complexity criterion, is better than the other propositions on all criteria. Criterion (1) in Table 3.3 indicates that the information provided is more complete with the CPC2 than with the BSRS. The CPC can provide additional information on standards, operators, services, etc., whereas the BSRS only gives information about the existence of a standard (access to more information then requires demodulation of the signal). Criterion (2) of Table 3.3 means that the information provided by the considered method depends on the standard's coverage. In fact, it is hardly imaginable that the CPC can give clear-cut and reliable information about all WiFi hotspots, while the BSRS can detect these standards. The localization-based identification (LBI) can also detect them, under the assumption that the database is correctly updated.

3.3. Higher layer sensors

3.3.1. Introduction

According to our earlier classification (see Table 3.2), higher layer sensors include the application layer sensors. There are close relationships between higher layer

2 The CPC itself is currently being studied at ETSI.


sensors of CR and those used in context awareness (CA; see Figure 1.11). The domain of CA is very broad; it is defined by taking into account the context of the system (computer, cell phone, embedded systems, etc.). The application layer sensors of CR and those of CA can be differentiated simply by the radio link: the sensors in CR are exploited to accomplish the main objective of the CR, i.e. to maximize the quality of the transmitted information (e.g. QoS, power consumption, and radiation level). Let us discuss the following scenario, inspired by one proposed by J. Mitola [MIT 09], in which a video sensor for face analysis is used both in CA and in CR. A soldier progresses into enemy territory in a vehicle and is involved in an accident. He is injured and is thrown out of his vehicle with his cognitive personal digital assistant (CPDA). This CPDA, by means of a higher-level energy sensor (a CR sensor), comes to know that the nature of its energy source has changed: the energy provided by the vehicle is no longer accessible, i.e. it now has to use its own battery. The audio and video sensors (CA sensors) are responsible for recognizing the person handling the CPDA so that it is not used by the enemy. The same video sensor (as a CR sensor) has to recognize the person handling the CPDA in order to optimize the compression of the video transmission, by implementing a source coding adapted to the face of the soldier. It is clear from this example that the same sensor (face recognition) can be used both in CA (biometrics) and in CR (source coding). In the next section, we identify the sensors that can potentially be used in CR and illustrate them using different scenarios. Section 3.3.3 discusses the video sensor used to adapt the video compression mode according to the radio link constraints or to influence the transmission system parameters. In what follows, a mobile generally refers to a cell phone.

3.3.2. Potential sensors

As we saw in the previous scenario, all sensors that can be used to achieve the main goal of improving the radio link are intelligent sensors of CR. Many futuristic scenarios involving such sensors can be envisioned:
– A video sensor can be used to determine whether the terminal is located inside or outside a building, as proposed in [PAL 07, PAL 09b]. This information may have an impact on the transmission characteristics.
– The GPS of the cell phone can be used to detect the nearest antenna and to transmit only in its direction, in order to minimize the electromagnetic pollution near the user's brain.
– The high-level information of the receiver can be accessed at the transmitter level. For example, it is possible at the transmitter to know the signal restoration elements as well as the resolution and refresh rate of the receiver screen.


– The cell phone may be able to know whether the user is in a car. In this case, the car acts as a Faraday cage, and a high transmission level is needed to reach the base station. One strategy can be to reduce the bit rate and to prohibit, for example, video communication, since it is not desirable to look at a screen while driving.

REMARK.– In the latter scenario, the first strategy would be to remember the good rules for using a cell phone, i.e. use the hands-free options of the equipment while being connected to an antenna exterior to the vehicle.

The video sensor is a privileged sensor of the higher layer in CR because of its noninvasive qualities and its flexibility of use. Its most natural exploitation is to assist the image source coder so that it can adapt to the available bandwidth. The following scenario illustrates a very classical situation in CR. A user records a video clip and sends it to a friend who views it. The management system (e.g. hierarchical and distributed cognitive radio architecture management (HDCRAM), explained in Chapter 5) of Figure 3.12 must adapt the radio equipment to furnish the best transmission quality, taking into account the available resources. Depending on the information coming from the different sensors and on the decision-making algorithms, the HDCRAM defines the optimal configuration to provide the best service.

Figure 3.12. Hierarchical and distributed cognitive radio architecture management (HDCRAM), linking the video, audio, hardware, and bandwidth sensors to the video and audio codecs

In audio coding, current cell phone equipment analyzes the received signal to detect the possible presence of a voice message. For example, in MPEG4 compression, if a voice signal is not detected, a generic audio coder is used, such as the transform-domain-weighted interleaved vector quantization (TwinVQ) developed by Nippon Telegraph and Telephone Corporation (NTT). In the opposite case, a coder adapted to voice signal compression, such as code-excited linear prediction (CELP) [SCH 85], can be used to improve the quality of the transmitted signal. In this way, for the same bit rate, the subjective quality of the compressed audio signal is much better than if it had been compressed by the generic coder. In video coding, the same strategy can be applied: a generic coder H.264 (MPEG4-AVC) or a face coder is used depending on the video signal contents. It is to be noted that the face coder synthesizes the user's face starting from a model: this operation is costly in terms of computation time and cannot be performed in real time except by a graphics processing unit (GPU). We will see that video and audio sensors provide valuable information for the video codec, making it possible to improve or degrade the coding quality of certain regions of the image according to their importance.

Figure 3.13. Decision tree (successive tests on face presence, face orientation within [–45°, 45°], GPU availability, face recognition, and voice presence select between the H264 codec, a generic face codec, a codec adapted to the face, and a codec adapted to the face with emphasis on the mouth)

A decision tree is shown in Figure 3.13, which is part of the decision algorithms presented in Chapter 4. When this tree is built (hence perfectly known) and the


decision space is reduced (as is the case in Figure 3.13), decisions are evident and easy to make. It is, therefore, a very effective decision-making algorithm. The decisions provided by this algorithm enable the manager to specify the optimal configuration of the video codec, taking into account the information provided by the different sensors. In the decision tree of Figure 3.13, the ovals represent information coming from the different sensors, whereas the leaves give the video codec configuration. The first sensor informs the HDCRAM of the possible presence of a face. If a face is not detected, a generic codec (H264) is applied to the entire image. In the opposite case, the orientation of the face is taken into account. If the person does not really look at the camera (e.g. a profile view), then we consider that the most relevant information for transmission is not in the face itself, and the H264 codec is used. Otherwise, if no GPU is available, the H264 coder is again used instead of the face codec. If a GPU exists, it is interesting to know whether the detected face is known to the system. If it is not known, a generic version of the face codec is used: the face is modeled on-line and the transmission bit rate varies throughout the communication. If the face is known, its model is available and will be transferred through the manager to the receiver and the video codec. Only the high-level parameters, which require little bandwidth and allow the receiver to reconstruct the face, will be transmitted. The required bandwidth is thus very heavily reduced. The audio sensor is also used: a detected voice message indicates that the image zone corresponding to the mouth is important and must be enhanced compared with those relative to the eyes, the skin, and obviously the image background. For such an image, the background must be highly compressed since it contains little relevant information intended to be communicated.
This can be accomplished using the audio-video objects (AVO) of MPEG4, which give the option of compressing the objects that constitute the scene with different ratios. If a voice message is not detected, it means that the most important information is contained in the eyes, and the image zone of the eyes must be enhanced with respect to the other characteristics of the face (nose, mouth, and skin). In this scenario, the image codec sends image information through the HDCRAM. For example, it specifies the throughput that it produces and the data that are important to protect. This information is conveyed through the HDCRAM to the decision algorithms, which decide which type of modulation is optimum (e.g. GSM or 802.11g) and which configurations (channel coding and error-correcting code) must be implemented. In addition to the information that the system has through the sensors of the radio equipment (channel estimation, broadcast quality, estimation of available energy, recognition of the available standard), it sends specific information to the video codec, such as the available bit rates or the resolutions and image display frequencies of the final receiver. For example, the equipment can identify the GSM as the only means available to


transmit the audio-video message and choose a low-resolution profile for the images because the receiver will display the message on a cell phone. Thus, from the information collected, the HDCRAM sends the pertinent configuration parameters to the audio and video codecs. The following inputs of Figure 3.13 permit the HDCRAM to parameterize the video codec:
– GPU detection;
– voice message detection;
– face detection;
– face recognition;
– estimation of face orientation.
The first sensor is trivial. The second is currently incorporated into commercial equipment. For the third sensor, we use a current face detector [VIO 04]. The market of face analysis itself is extremely active (Canon was a pioneer in the development of cameras with automatic face detection; Sony, in 2007, released a camera that triggers when the person to be photographed smiles). Currently, there exist no obvious solutions for the last two sensors. Face recognition can be performed in real time in an efficient way if the position of the face is fixed, which is not the case in our application context: the user freely takes a picture of a person who is neither centered nor facing the camera, and whose face may appear small or large. Under these difficult conditions, the recognition rate of identification algorithms falls dramatically. To circumvent this problem, a precise detection of the face orientation as well as the localization of its features (eyes, nose, and mouth) is necessary. When this information is available, it is possible to synthesize the analyzed face in a standardized form (frontal view, properly centered, and normalized in size). The synthesized face is then processed by the face recognition algorithms for identification. Active appearance models (AAMs) [COO 98] have been used for efficient face alignment for the last 10 years. By face alignment, we mean the capability to detect the pose of a face and to locate a set of key points on the eyes, nose, and mouth.
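As a sketch, the decision tree of Figure 3.13 reduces to a simple function of these five inputs. The mapping below is an illustrative rendition of the tree, with hypothetical configuration labels, not the book's implementation.

```python
def choose_codec(face_detected, orientation_deg, gpu_available,
                 face_known, voice_detected):
    """Illustrative mapping of the five sensor inputs to a video codec
    configuration, following the decision tree of Figure 3.13."""
    if not face_detected:
        return "H264"                   # no face: generic codec
    if not -45 <= orientation_deg <= 45:
        return "H264"                   # profile view: face not relevant
    if not gpu_available:
        return "H264"                   # face codec too costly without GPU
    if not face_known:
        return "generic face codec"     # face model built on-line
    if voice_detected:
        return "face codec, emphasis on the mouth"
    return "face codec"
```

Because the tree is fixed and the decision space small, each configuration is reached by a single deterministic path, which is what makes this decision-making algorithm so cheap.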
In order to improve the robustness of the conventional AAM of [COO 98] under real conditions of use, new optimization algorithms have been proposed [SEG 09, SAT 09]. Work on face alignment therefore allows us to use the two sensors, "face identification" and "estimation of face orientation", for which no obvious off-the-shelf solutions are available at the moment.

3.3.3. Video sensor and compression

In this section, we illustrate, by means of a real scenario (adaptive compression), the close relationship that may exist between video sensors and compression


algorithms. The video sensor makes it possible to adjust the codec and transmission system parameters. To compress the video data, JPEG2000 coding can be used, owing to its scalability (spatial and temporal) and its ability to compress images independently of one another. The latter point is very important for CR, because the system must be able to change the quality of the transmitted images instantaneously when the HDCRAM requests it; it must not wait for up to 15 images, as in MPEG2, or even more in MPEG4 (every 15 images, MPEG2 encodes an image independently of the rest of the video stream; the other images result from interpolation or prediction using adjacent images). JPEG2000 also provides the choice to encode multiple regions of interest (ROIs) with different compression ratios. In our application context, these regions are linked to the different objects that constitute a face, besides the image background. It is to be noted that the notion of ROI also exists in MPEG4 part 2. The AAMs define four ROIs in video communication, as illustrated in Figure 3.14. The image background constitutes the first region of interest (ROI-1) and is very heavily compressed. Usually, the most important information to be transmitted comes from the mouth region (ROI-4), which will be slightly compressed, while the region of the eyes (ROI-3) will be a little more compressed; the region of the skin (ROI-2) will be slightly less compressed than the image background. Obviously, these compression ratios vary according to the decisions taken by the decision algorithms (see Figure 3.13).

Figure 3.14. Different regions of interest detected by active appearance model (ROI-1: image background; ROI-2: skin; ROI-3: eyes; ROI-4: mouth)
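The resulting ROI policy can be summarized by a small table of compression ratios. The numerical ratios below are invented for illustration; only their ordering reflects the text (higher ratio means stronger compression).

```python
def roi_compression_plan(voice_detected):
    """Illustrative ROI -> compression-ratio table for a JPEG2000 face scene.
    When a voice message is detected, the mouth (ROI-4) is favored;
    otherwise the eyes (ROI-3) are."""
    plan = {"ROI-1 background": 100, "ROI-2 skin": 60}
    if voice_detected:
        plan.update({"ROI-3 eyes": 30, "ROI-4 mouth": 10})
    else:
        plan.update({"ROI-3 eyes": 10, "ROI-4 mouth": 30})
    return plan
```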

After explaining the AAMs in section 3.3.3.1, the scenario envisioned by the Signal Communication and Embedded Electronics (SCEE) team [NAF 07] to illustrate the interest of CR in the transmission rate optimization is presented in section 3.3.3.2.


3.3.3.1. Active appearance models

The AAMs produce a face model from a database of sample faces. The shape of the face is denoted by a vector s consisting of the coordinates of its characteristic points. Its texture is denoted by a vector g representing the values of the pixels of the image within the shape defined by s. To create the model, a principal component analysis (PCA) is performed on the shape vectors, and another PCA on the texture vectors. For image i:

s_i = s̄ + Φ_s b_s
g_i = ḡ + Φ_g b_g    [3.19]

where s_i and g_i are the shape and texture of the face in image i; s̄ and ḡ the average shape and texture of the faces; Φ_s and Φ_g the matrices consisting of the eigenvectors of the sample shapes and textures, respectively; and b_s and b_g the vectors of projection coefficients of the shapes and textures s_i and g_i with respect to their bases. Applying a third PCA to the vector

b = [b_s; b_g]

we get:

b = Φ c    [3.20]

where Φ is the matrix of eigenvectors found by this PCA, and the vector c contains the appearance parameters, i.e. the projection coefficients of the vector b on its own basis vectors. Thus, it is possible to synthesize any face by acting on the appearance parameters, which deform both the texture and the shape of the synthesized face at the same time.

To align a face in an image, the appearance parameters of the vector c must be adjusted in order to minimize the error between the segmented image (the texture of the input image delimited by the shape given by the vector c) and the texture generated by the model (produced by the vector c). For example, to find a face in the image (Figure 3.15a), the detector gives the approximate position of the center of the face (shown with a white rectangle, Figure 3.15b), from which the active model is initialized. After convergence, the shape of the AAM "glues" to the analyzed facial features, i.e. the position of the eyes, nose, and mouth in the image (Figure 3.15c), and hence the texture of the face (Figure 3.15d) can be obtained. The face and its characteristic points are therefore correctly positioned. Adaptive histogram equalization over a set of images is performed to improve robustness against illumination changes; the gray levels in the textures are replaced by the contour orientation at each pixel [GIR 06].
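The synthesis step of equation [3.19] can be sketched numerically: a shape is the mean shape plus a weighted sum of eigenvectors. The toy data below (one eigenvector, four coordinates) are invented purely for illustration; a real AAM uses many eigenvectors and hundreds of coordinates.

```python
# Minimal numerical sketch of equation [3.19]: synthesize a shape from the
# mean shape and projection coefficients, i.e. s = s_bar + Phi_s * b_s.
# One eigenvector and four coordinates keep the example readable.

def synthesize(mean, eigvecs, coeffs):
    """Return mean + sum_k coeffs[k] * eigvecs[k]."""
    out = list(mean)
    for c, vec in zip(coeffs, eigvecs):
        for i, v in enumerate(vec):
            out[i] += c * v
    return out

s_bar = [0.0, 0.0, 1.0, 1.0]          # mean shape (x1, y1, x2, y2)
phi_s = [[1.0, 0.0, -1.0, 0.0]]       # one shape eigenvector
b_s = [0.5]                            # projection coefficient

s = synthesize(s_bar, phi_s, b_s)     # deformed shape
```

The same routine applies unchanged to the texture model g = ḡ + Φ_g b_g, and to the appearance model b = Φc.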


Radio Engineering

3.3.3.2. A real scenario

The following scenario illustrates the close collaboration between the video sensor and the codec in a cross-layer perspective. The sensor is responsible for analyzing the face and, in particular, for detecting the ROIs; the different parts of the face are compressed more or less strongly by the codec depending on their contribution in terms of information. We then analyze the impact that such a coding can have on the transmission chain.


Figure 3.15. (a) Original image, (b) initialization of AAM, (c) and (d) shape and texture after optimization

A person switches on his terminal and starts a video telephone conversation. At the beginning of the communication, the face of this person and the background image are transmitted using traditional compression, which requires a high bit rate. As time goes on, a model of the transmitting person's face is built. Once this model has been transmitted to the receiver, it is enough to send only the high-level parameters (orientation of the face, opening of the mouth and eyes, direction of gaze, etc.) to reconstruct the image of the person's face. It is consequently possible to significantly reduce the volume of data to be transmitted, since the face model (texture, shape of mouth, eyes, etc.) needs to be sent only once. It is sufficient for the


following transmissions to send the high-level parameters characterizing the behavior of this face, to reconstruct everything as accurately as if a conventional image compressor were used. Furthermore, it is possible to change the bit rate on the fly if a dynamic reconfiguration is considered. Figure 3.16 illustrates the evolution of the bit rate and reconfiguration over time.


Figure 3.16. Dynamic reconfiguration of the standard according to the video codec: over time, as the video codec learns the face model to be transmitted, the bit rate required for transmission decreases, and therefore less and less of a standard's bandwidth is needed

The AAMs can be applied to any object. The facial features contained in the various ROIs are modeled. The long-term objective is to analyze each ROI with an AAM. Therefore, it is necessary to initially construct an AAM of the face (stage 1), then an AAM of the mouth (stage 3), and finally an AAM of the eyes (stage 5). Each object (face, eyes, and mouth) is modeled by means of the eigenvectors represented by the matrices Φ_s, Φ_g, and Φ of equations [3.19] and [3.20]. The high-level parameters that will be transmitted are the coordinates of the vector c (equation [3.20]), which permit the reconstruction of each of the modeled objects. Consider, for example, the texture of Figure 3.15d, reconstructed from the parameters c of an AAM whose basis is a set of sample images of the face of the person who is communicating. During each stage of the modeling procedure, a time period is required to compile images of the object to be analyzed and to run the various AAMs in order to produce the eigenvectors that form our model. For this reason, the entire image is compressed in a classical manner (JPEG2000) at the beginning of communication and then transmitted (stage 1).
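A back-of-the-envelope comparison makes the bit-rate gain plausible: once the model has been sent, each frame only needs the appearance vector c plus a small residual, instead of a full compressed frame. All numbers below (frame size, parameter count, residual size) are invented for illustration.

```python
# Back-of-the-envelope sketch with hypothetical numbers: per-frame payload
# in model-based mode (appearance parameters + residual error) versus a
# conventionally compressed frame.

FLOAT_BYTES = 4

def frame_payload(n_appearance_params, residual_bytes=0):
    """Bytes per frame once only the vector c and a residual are sent."""
    return n_appearance_params * FLOAT_BYTES + residual_bytes

jpeg2000_frame = 8000                                  # ~8 kB compressed frame
model_frame = frame_payload(30, residual_bytes=500)    # 30 parameters + residual

reduction = jpeg2000_frame / model_frame               # payload shrink factor
```

With these assumed figures, the model-based mode needs over ten times fewer bytes per frame, which is what motivates the standard switch (802.11 down to GSM) in the scenario.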


Once the AAM of the face has been built, it is possible to transmit the model (eigenvectors) and the background image to the receiver (stage 2), and then to send only the model parameters (stage 3). The model of the mouth is built during the third stage, then transmitted in stage 4. In stage 5, the high-level parameters characterizing the behavior of the mouth are sent while the eyes are being modeled. The AAM of the eyes is transmitted in stage 6, after which only the appearance parameters are sent. Finally, in the last stage, only the high-level parameters of the different models are transmitted, so that the throughput is significantly reduced compared to that of the first stage. It is thus feasible to transmit an image of good quality throughout the communication. Over this timeline, the transmission system is reconfigured, passing from 802.11 to GSM seamlessly for the user.

3.3.3.3. Different stages

A person switches on his terminal and starts a video telephone conversation at t0. The video sensor progressively learns the models of the face, mouth, and eyes. When the learning process is completed, only the relevant high-level parameters of the AAMs are transmitted. The various stages of Figure 3.16 unfold as follows:
– Stage 1 (t0 to t1):
  - Source coding: the video source is encoded in a conventional manner. The transmitter learns the 3D model of the face of the person;
  - Radio link: high bit rate transmission, OFDM modulation (802.11g) with a standard error correcting code.
– Stage 2 (t1 to t2):
  - Source coding: the video codec has completed the face analysis;
  - Radio link: the same type of data as in the previous stage, plus the face model and the image background, are transmitted. OFDM modulation (802.11g standard) with a robust error correcting code for the model and the background.
– Stage 3 (t2 to t3):
  - Source coding: the codec learns different shapes and textures of the mouth;
  - Radio link: only the parameters characterizing the size, orientation, and shape of the face are sent, so that the receiver can reconstruct the 3D face model on the background of the image already transmitted. The reconstruction errors between the synthesized image and the image to be transmitted (primarily at the level of the eyes and mouth) are also transmitted to improve the image reconstruction at the receiver. A UMTS-type standard is used with a classical error correcting code.
– Stage 4 (t3 to t4):
  - Source coding: the video codec has completed the analysis of the different mouth shapes;
  - Radio link: the high-level parameters characterizing the face as well as the model of the mouth are transmitted. The UMTS standard is used with an error correcting code that is particularly robust for the AAM of the mouth.


– Stage 5 (t4 to t5):
  - Source coding: the codec learns the different appearances of the person's eyes;
  - Radio link: the high-level parameters that permit encoding of the face and the mouth, as well as the reconstruction errors (primarily in the eye area), are sent. The UMTS standard is used with a classic error correcting code.
– Stage 6 (t5 to t6):
  - Source coding: the video codec has finalized the modeling of the eyes;
  - Radio link: the appearance parameters of the AAMs of the face and the mouth are transmitted. The model of the eyes as well as the reconstruction errors are sent. UMTS is used with an error correcting code that is particularly robust for the AAM of the eyes.
– Stage 7 (t6 to t7):
  - Source coding: the codec has finished learning the different models;
  - Radio link: only the appearance parameters of the various AAMs and the reconstruction errors are transmitted, using the GSM standard.

In this scenario, it is the video codec that determines the bandwidth, through the HDCRAM, and hence the modulation type (thus the standard, in this example) that will have to be implemented in the radio link. However, as exemplified in Figure 3.12, the HDCRAM can also act on the video codec to impose a particular output bandwidth on it. For example, in the final stage (stage 7), the HDCRAM may require an extremely reduced bandwidth, so that the video codec only transmits the high-level parameters of the various models and no information about the error. The image displayed on the receiver (a GSM cell phone) will then be very smooth but will show some reconstruction errors whenever the person exhibits behavior not modeled during the learning phases of the different models. Note that this scenario has to be refreshed when the video conditions change (e.g. the background).

3.4. Conclusion

In this chapter, we have proposed a classification of sensors based on a three-layer model: higher, intermediate, and lower. First, the conventional sensors of CR, the most studied in the literature, are described.
These belong to the lower layer of the model. The sensors of the physical layer, especially those that propose solutions for detecting free spaces in the spectrum, are detailed. These detectors of "holes" may be refined in the future by taking into account network density constraints. The notion of sensor is then broadened by describing certain sensors of the higher and intermediate layers. The sensors of the intermediate layer focus on spectrum analysis to identify the available networks, with significant efforts in blind standard recognition. Finally, we described the higher layer sensors that enable us, among other functions, to compress the audio-visual signal intelligently in order to ensure the best possible reconstruction quality according to the context.

Chapter 4

Decision Making and Learning

4.1. Introduction

In spite of scientific advances in the fields of neuropsychology and, more generally, in the cognitive sciences over the last 50 years, we are still very far from understanding the physiological mechanisms that explain learning and decision making in the animal kingdom. Although no computer can yet claim cognitive qualities similar to those of human beings, the problem of learning and decision making for an intelligent telecommunications system starts by establishing the precise rules that give birth to these functions.

Learning, for a cognitive system, corresponds to a phase of interpretation of the stimuli provided by the environment, in a language that is understandable by the system and minimalist in terms of information storage. In this chapter, we will describe a mathematical approach based on Bayesian probabilities and the principle of maximum entropy, which enables learning in cognitive radios (CRs). The advantages and drawbacks of these techniques will be elaborated through examples of channel modeling and channel estimation.

After learning, the intelligent device is required to make decisions involving actions that will enable it to adapt itself to its surrounding environment (and sometimes modify this environment). This decision comprises the choice of the action to be performed. This choice will be guided by the information acquired by the system and, in particular, in intelligent systems for telecommunications, by information about the other network agents. It is particularly essential that each agent knows, or at least has a priori

Chapter written by Romain COUILLET, Mérouane DEBBAH, Hamidou TEMBINE, Wassim JOUINI and Christophe MOY.


knowledge of, the strategy adopted by the other network agents, to avoid reaching a never-ending situation in which the device takes this or that decision because the other agents know that it knows that they know ... which set of actions it can take. The decisions that a set of smart communication devices have to take are well analyzed using game theory, via the seminal work of von Neumann and Nash. We will discuss in succession the theoretical aspects of decision making and will study, through game theory, the artificial mechanisms for generating autonomous cooperative networks of intelligent devices.

This chapter first elucidates the problems of decision making and learning in the context of CR. In particular, the concept of a cognitive agent is introduced. The constraints related to decision making then lead to the introduction of the notion of decision space. Subsequently, decision making and learning are discussed from the device and network points of view. Finally, as a more concrete example, a state of the art of dynamic adaptation is proposed, and in this context a classification of the related decision-making methods is put forward.

4.2. CR equipment: decision and/or learning

4.2.1. Cognitive agent

Whatever the context considered, CR equipment can be defined as an autonomous communication system, aware of both its environment and its operational capabilities, and able to act on them intelligently [HAY 05, MIT 99a]. Hence, it is a device equipped with sensors that can collect different kinds of information, and capable of using this information to adapt to its environment, as shown in Figure 1.1. This assumes that the system has elementary cognition capabilities, namely: perception (position, spectral environment, etc.), reasoning (data fusion, decision making, learning, memory, etc.), and executive functions (reconfiguration, transmission, etc.). The simplified interaction of the CR equipment with its environment is illustrated in Figure 5.1.

Only the intelligent subsystem, i.e. the decision-making engine, will be of interest in the next section. The decision-making engine acts as a cognitive agent and can be considered the brain of the CR equipment. Its objective is therefore to use the collected metrics to devise a strategy that satisfies the needs of the user according to the environment in which the equipment operates. This strategy results in commands sent to the rest of the equipment in order to change its operational parameters (modulation, coding, transmission power, band used, etc.). As a first approximation, the intelligent agent at each instant depends on:


– the information that it collects and is capable of managing; these metrics characterize the operational environment of the equipment;
– the "objective function" (or utility function) that characterizes the user needs as well as the possible constraints to which the equipment is subjected; the "objective function" may be the union of several criteria to be optimized;
– all the actions/commands that it can execute/issue; in terms of equipment, this corresponds to the parameters that the CR is capable of modifying.

To sum up, we can reduce the decision-making problem to a function that takes as inputs the pieces of information retrieved by the sensors, as well as those memorized throughout the operation of the cognitive agent, and produces as outputs commands for the rest of the equipment. Whatever the methods suggested by the radio community, the prime objective is to determine a good decision function (explicit or implicit) that leads to a system capable of meeting user needs while adapting itself to its environment. To accomplish this mission successfully, the intelligent agent will have to face several problems inherent in multiobjective decision making, especially conflicts between objectives and problem modeling.
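The "metrics in, commands out" view of the decision function can be sketched as follows. The metric names, thresholds, and commands below are illustrative assumptions chosen for the example, not a normative CR interface.

```python
# Schematic sketch of the cognitive agent as a function from sensed metrics
# (plus memory) to equipment commands. Metric names, thresholds and commands
# are invented for illustration.

def decision_engine(metrics, memory):
    """Map sensed metrics to reconfiguration commands.

    metrics: dict of current observations (e.g. 'snr_db', 'battery_pct')
    memory:  list of past (metrics, commands) pairs kept by the agent
    """
    commands = {}
    if metrics.get("snr_db", 0) < 10:
        commands["modulation"] = "QPSK"       # robust at low SNR
    else:
        commands["modulation"] = "16QAM"      # higher throughput
    if metrics.get("battery_pct", 100) < 20:
        commands["tx_power_dbm"] = 10         # save energy
    memory.append((dict(metrics), dict(commands)))
    return commands
```

The explicit if/else rules stand in for whatever decision function (learned or designed) the agent actually uses; the point is only the signature: sensed metrics and memory in, parameter commands out.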

4.2.2. Conflicting objectives

In the context described above, the cognitive agent implements a multicriteria decision function under constraints. In the context of CR, this multicriteria optimization problem is all the more complex as the different objective functions are sometimes conflicting (for example: minimize the bit error rate (BER), minimize complexity, maximize throughput, maximize spectral efficiency, and minimize power consumption). As a consequence, a set of parameters that is optimal for one of the criteria may significantly deteriorate the system's performance with respect to another criterion. Consequently, in this kind of problem, there is most often no single solution better than all the others, but rather a subset of "good candidates" representing compromises between the performances of the radio equipment with respect to the different criteria. Finally, with the increase in the number of parameters to be manipulated and of the objectives to be optimized, and hence in their heterogeneity, the solution space can quickly become too large to permit the exact resolution of the problem, or even any exhaustive search for solutions.
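The "subset of good candidates" is, in multiobjective terms, the Pareto front: configurations not dominated on all criteria simultaneously. The sketch below filters invented configurations on two conflicting criteria (throughput to maximize, power to minimize); the data are purely illustrative.

```python
# Sketch of the set of "good candidates": a Pareto filter over candidate
# configurations scored on two conflicting criteria. Configuration data
# are invented for the example.

def dominates(a, b):
    """a dominates b if a is no worse on both criteria and better on one."""
    return (a["throughput"] >= b["throughput"] and a["power"] <= b["power"]
            and (a["throughput"] > b["throughput"] or a["power"] < b["power"]))

def pareto_front(configs):
    """Keep only configurations that no other configuration dominates."""
    return [c for c in configs
            if not any(dominates(d, c) for d in configs if d is not c)]

configs = [
    {"name": "A", "throughput": 54, "power": 20},
    {"name": "B", "throughput": 11, "power": 5},
    {"name": "C", "throughput": 11, "power": 8},   # dominated by B
]
front = pareto_front(configs)   # A and B survive; neither dominates the other
```

A and B embody the compromise described above: one is better on throughput, the other on power, so neither can be discarded without a policy choice.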

4.2.3. A modeling part in all approaches

In some approaches recommended in the literature, an important modeling effort is necessary to analyze the performance of the proposed methods. Unfortunately, the solutions found by these methods are not independent of the modeling. In view of these remarks, it is clear that these solutions will be more or less valid depending on


the accuracy of the chosen model with respect to reality. Consequently, the solutions proposed when resolving the multicriteria problem will also be more or less adapted to reality. It is unfortunately very difficult to analyze, in the general case, the effects of these biases on solution quality; nevertheless, we can expect the system to react "satisfactorily" (though not necessarily optimally) in cases where the chosen model is close to reality. On the other hand, in cases where the identified model is "distant" from reality, it is difficult to predict the performance gap between the optimal case and the solution found. An opposite approach is to consider only minimal assumptions on the environment. This leads to interesting solutions, as we will see at the end of this chapter in the case of the "multi-armed bandit" (MAB). Nonetheless, conceiving an efficient system will require a compromise between modeling assumptions and expert knowledge.

Figure 4.1. An example of a multiobjective problem in the cognitive context [RON 06]. In the problem described above, the decision engine is interested in several objectives such as throughput, spectral efficiency, bit error rate (BER), or power consumption. These objectives depend on a certain number of parameters common to several objectives (Tables I and II in this figure). Consequently, values chosen to optimize one of the criteria can lead to significant degradation in system performance with respect to another criterion

4.2.4. Decision making and learning: network equipment

In numerous scenarios, the decisions taken by the equipment can be considered local, and hence without any impact on the overall behavior of the network. Nevertheless, among the large range of problems encountered by the CR community, we cannot ignore behaviors of network elements that could lead to a significant degradation of the transmission performance of all the neighboring devices: use of excessively high power, access to a frequency band without authorization, etc. In these cases, it is necessary to set up basic rules of behavior in order to allow everyone to benefit evenhandedly from the resources offered by the environment.


4.3. Decision design space

4.3.1. Decision constraints

An analysis of the literature on dynamic configuration adaptation (DCA) [COL 08, MIT 99a, RIE 04] identifies three constraints on which the proper functioning of CR equipment depends [JOU 10b]:
– environmental constraints;
– user constraints (or service requested);
– equipment constraints (particularly reconfiguration capability).

Designing CR equipment thus comes down to providing the equipment with the necessary intelligence capabilities, allowing it to adapt itself according to its operational objectives and its capabilities. It is therefore essential to offer all the exploration possibilities in the three-dimensional space created by these three constraints. Note that this approach is equally valid for dynamic spectrum access, which also requires that the CR equipment be able to adapt itself in frequency.

4.3.1.1. Environmental constraints

Environmental constraints include not only what is imposed by operating conditions, namely propagation, obstacles, movements, etc., but also what depends on the rules for the use of frequencies, the interference tolerance level, etc. Clearly, if the environment imposes too many constraints, the equipment has no degree of freedom left to adapt itself. Conversely, if the environment does not impose any constraint, the CR equipment can still only act within the limits of its own capabilities. Its operation must also remain in accordance with the user's wishes.

4.3.1.2. User constraints

In the different usage modes of telecommunications equipment, user requirements may vary depending on the nature of the service requested, the importance of the interactions, or other factors such as power consumption or communication cost. In addition, operator requirements can be added to the above, e.g. increasing spectral efficiency or using a certain mode that is more profitable at a specific time.
However, if we try to apply too many constraints simultaneously, finding a solution to the problem can become impossible because the required objectives may be contradictory. We fully understand that the interests of the user and of the operator may differ: one wants to pay less while the other wants to charge more. Conversely, if the user has no particular requirement, he will not equip himself with a CR device, since he would gain no benefit from it.


4.3.1.3. Equipment capacity constraints

Reconfiguration capabilities add to the inherent computing power of a piece of equipment so that the device can adapt itself to the environment. This is why CR will advantageously be based on software radio technologies, as indicated in Chapters 8, 10, and 11. The flexibility of the equipment offers as many new degrees of freedom as it has modifiable parameters.

4.3.2. Cognitive radio design space

The cognitive radio design space [JOU 10b] is the abstract volume formed by three dimensions: environmental constraints, utilization criteria, and the limits of the equipment platform. This space is shown in Figure 4.2. It should be noted that if we consider these three dimensions independent of each other, we obtain an exceedingly large exploration space called the virtual space. But as these dimensions may be correlated with each other, this space is curtailed. Indeed, some degrees of freedom on one axis (e.g. increasing the capacity) may be impossible to exploit if, on another axis, too many parameters are fixed (e.g. an imposed waveform). Finally, note that this space is tied neither to a time nor to a place: it takes into account all the scenarios considered, and also those that might be encountered by the radio equipment. As a result, the constraints imposed by the user (through all possible objectives), those fixed by the environment (especially all network constraints depending upon location and time), and the constraints intrinsic to the platform define a set of possible decision problems. At each instant, instances of this space are defined as functions of the exact constraints being met. The cognitive agent then searches for a solution to the problem based on the a priori information that it has about the functional relationship linking these three elements of the space. We will see that, in the case of DCA, the decision space found in the literature is the same.
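The interplay of the three dimensions can be sketched as a feasibility filter: a candidate configuration must lie in the intersection of the equipment, network, and user constraint sets. The candidate configurations and constraint values below are invented for illustration.

```python
# Illustrative sketch of the three-dimensional decision space: a candidate
# configuration is admissible only if it simultaneously satisfies equipment,
# environment (network), and user constraints. All values are invented.

candidates = [
    {"waveform": "OFDM", "power_dbm": 20, "rate_mbps": 54},
    {"waveform": "OFDM", "power_dbm": 30, "rate_mbps": 54},   # too much power
    {"waveform": "GMSK", "power_dbm": 10, "rate_mbps": 0.27}, # rate too low
]

equipment_ok   = lambda c: c["waveform"] in {"OFDM", "GMSK"}   # platform limits
environment_ok = lambda c: c["power_dbm"] <= 20                # network rules
user_ok        = lambda c: c["rate_mbps"] >= 1                 # requested service

feasible = [c for c in candidates
            if equipment_ok(c) and environment_ok(c) and user_ok(c)]
```

Here only the first candidate survives the intersection of the three constraint sets, which is exactly why the real space is smaller than the virtual space obtained by treating the dimensions as independent.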
However, depending on what the cognitive agent is supposed to know, the techniques proposed to solve the decision problem can vary.

4.4. Decision making and learning from the equipment's perspective

4.4.1. A priori uncertainty measurements

One of the first to formalize the mathematical modeling of learning was Richard Cox [COX 46], who in 1946 designated the domain of probabilities as the most appropriate tool to model the abstract notions of knowledge and learning. In a cognitive approach, probability theory characterizes not only the study of the laws of random events (the frequentist approach), but also the study of the confidence of an agent in deterministic events with incomplete information (the Bayesian approach). Denoting the event by A, and all the information known a priori to


the agent by I (whether this information is correlated to A or not), the degree of knowledge of A given the a priori information I is given by: P (A|I)

[4.1]

If A is completely known, i.e. if the agent is perfectly able to judge whether A is true or not starting from I, then P(A|I) ∈ {0, 1}. Otherwise, when I alone does not determine for the agent the truthfulness of A, P(A|I) ∈ [0, 1], indicating that the agent has a certain degree of belief in A. P(A|I) will be particularly close to 1 if I provides evidence of the truthfulness of A.

Figure 4.2. Modeling space of cognitive radio [JOU 10b]. The space shown above defines all the problems that could be faced by the radio. The volume of this space depends on three dimensions that constrain the decision agent: first, the constraints intrinsic to the equipment (e.g. the waveforms that it can generate); then, the network-related constraints, such as the maximum power and interference authorized or the bands to be used; and finally, the user constraints, which take the form of criteria to be met insofar as possible. If each dimension is considered independent of the others, the modeling space is larger than the limited real space. Indeed, the three dimensions are not independent, and the constraints imposed by one of the constituents (equipment, network, user) necessarily have an impact on all the possibilities


Cox [COX 46] then demonstrated that this probabilistic approach is consistent with the classical rules, namely the Bayes rule, for two events A and B and an a priori I:

P(A|B, I) = P(B|A, I) P(A|I) / P(B|I)    [4.2]
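As a numeric illustration of equation [4.2], the sketch below updates an agent's belief that a band is occupied (event A) after observing a high energy measurement (information B). The prior and likelihood values are invented for the example.

```python
# Numeric illustration of the Bayes rule: update the belief P(A|I) into
# P(A|B, I) after observing B. Prior and likelihoods are invented values.

def bayes_update(prior_A, p_B_given_A, p_B_given_notA):
    """Return P(A | B, I) from P(A | I) and the two likelihoods of B."""
    p_B = p_B_given_A * prior_A + p_B_given_notA * (1 - prior_A)   # P(B | I)
    return p_B_given_A * prior_A / p_B

prior = 0.5                      # P(occupied | I): no idea beforehand
posterior = bayes_update(prior, p_B_given_A=0.9, p_B_given_notA=0.1)
# posterior = 0.45 / (0.45 + 0.05) = 0.9
```

A single informative observation moves the belief from 0.5 to 0.9, which is precisely the "update of the degree of belief" described in the text.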

The Bayes rule reflects the learning of information about the event A when the information B is given to the agent. The transition from P(A|I) to P(A|B, I) is to be understood as the update of our degree of belief in event A when the additional information B is given to the agent. Bayesian learning techniques, which are an extension of Cox's approach, are discussed in the context of CR in the next section.

4.4.2. Bayesian techniques

In this section, we develop the Bayesian approach of learning information about the environment by an opportunistic agent in the cognitive network, through the example of channel modeling and estimation. Consider a piece of telecommunications equipment in an opportunistic network seeking to establish a multi-antenna link between itself, which has nR antennas, and a primary network user whose number of antennas nT is known a priori. These two pieces of information naturally form part of the a priori information I. The cognitive agent then seeks to estimate the multidimensional channel H0 ∈ C^(nR×nT). If no stimulus (no data) has been received through the link H0, the a priori information I gathers little information in general; nevertheless, the cognitive agent has a degree of confidence P(H|I) that each possible link H is the effective channel H0. In particular, it is highly unlikely that each entry of H0 is identically equal to 10^10, corresponding to a gain of 200 dB on each channel input; this latter channel must be assigned a low probability. In general, it is desirable that for any a priori information I there exist a natural and systematic way to obtain P(H|I). This method, devised by Jaynes [JAY 82, JAY 89, JAY 03], exists and is commonly known as the theory of maximum entropy. It consists of the following two steps:

– Consider the whole set of probability distributions and eliminate from this set any probability distribution that is inconsistent with the a priori information I.
In particular, if I states that E[H] = 0, every distribution with non-zero mean is eliminated.
– In the remaining set, the probability distribution of maximum entropy is chosen and assigned to P(H|I), i.e. P(H|I) is defined by:

P(H|I) = arg max_q [ − ∫ q(H) log q(H) dH ]    [4.3]
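The two-step recipe can be demonstrated on a finite alphabet. Among all distributions on {0, 1, 2, 3} with a fixed mean (step 1: eliminate infeasible distributions), the maximum entropy choice (step 2) is an exponential family p_i ∝ exp(−λ x_i), with λ solved numerically. The support and mean constraint below are chosen arbitrarily for the example.

```python
import math

# Discrete sketch of the maximum entropy recipe: among all distributions on
# {0,1,2,3} with mean 1.0, the maxent solution is p_i ∝ exp(-lam * x_i),
# with lam found by bisection. Support and constraint are invented values.

xs = [0, 1, 2, 3]
target_mean = 1.0

def expfam(lam):
    w = [math.exp(-lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

def mean(p):
    return sum(x * pi for x, pi in zip(xs, p))

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

lo, hi = -20.0, 20.0              # mean(expfam(lam)) is decreasing in lam
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(expfam(mid)) > target_mean:
        lo = mid
    else:
        hi = mid
p_maxent = expfam((lo + hi) / 2)

# Any other distribution meeting the same constraint has lower entropy:
p_other = [0.5, 0.2, 0.1, 0.2]    # also has mean 1.0
```

Comparing `entropy(p_maxent)` with `entropy(p_other)` shows the maxent choice is strictly more "spread out", i.e. it presumes nothing beyond the stated constraint.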


The choice of the maximum entropy distribution is justified by Jaynes as being the most "honest" choice, in the sense that it does not presume any additional unknown information. This choice is notably consistent with the mathematical definition of information by Claude Shannon [SHA 48] and with the work of Léon Brillouin [BRI 63] on the close ties between information theory and physics. In particular, the uniform distribution of a gas in an open space corresponds precisely to the unconstrained maximum entropy distribution, and is far more probable than the distribution where all the gas is concentrated at one precise point in space. The cognitive agent may also have important a priori information about the propagation channel: in particular, when the agent has strong reasons to believe that the transmitter is in line of sight (the typical case of a network deployed in a sparsely urbanized area), this information is integrated into I. Similarly, if the agent is placed in an environment where the electromagnetic waves are likely to come from a preferred direction, P(H|I) must integrate this information. The work of Debbah and colleagues describes several situations of channel inference under different statistical constraints gathered in the a priori information I [GUI 06]. Consider, in particular, the case where the only knowledge available about the channel is its average gain. I integrates this information as:

E[ Σ_{1≤i≤nR, 1≤j≤nT} |h_ij|² ] = E    [4.4]

Using Lagrangian multipliers, it is possible to evaluate the distribution P(H|I) that has maximum entropy under the constraint I. It is given by:

P(H|I) = arg max_q [ − ∫ dH q(H) log q(H)
                     + γ ( E − ∫ dH Σ_{i=1}^{nR} Σ_{j=1}^{nT} |h_ij|² q(H) )
                     + β ( 1 − ∫ dH q(H) ) ]    [4.5]

where γ and β are the Lagrangian multipliers respectively associated with the constraints that (i) q has a second-order moment equal to E and (ii) q is a probability distribution. It was shown in [GUI 06] that the previous optimization implies:

P(h_ij|I) = e^( −( γ|h_ij|² + (β+1)/(nR nT) ) )    [4.6]


and hence each entry of H is identically distributed. The power constraint E implies that P(H|I) is a multivariate Gaussian distribution with zero mean and covariance (E/nR) I_nR. Therefore, a multi-antenna channel with independent and identically distributed Gaussian entries is consistent with the sole a priori knowledge of the second-order moment. It can also be proven that the a priori knowledge of the intrinsic correlations between transmit and receive antennas generates a channel distribution consistent with the Kronecker model, i.e.:

H = R^(1/2) X T^(1/2)    [4.7]

where T ∈ C^{n_T×n_T} and R ∈ C^{n_R×n_R} are known correlation matrices at the transmitter and receiver, respectively, and X is a random matrix with independent and identically distributed zero-mean entries. Once this a priori distribution is established, any additional stimulus x received by the antennas of the cognitive agent, typically a signal emitted by the transmitter, must be integrated into the information I, which becomes I′ = (x, I); by Bayes' rule, the resulting channel distribution becomes:

P(H|I′) = P(H|x, I) = (1/α) P(x|H, I) P(H|I)    [4.8]

where α = ∫ P(x|H, I) P(H|I) dH is a simple normalization factor. We therefore have an explicit relationship describing the learning of H through the addition of the information x. If the information brought by x is useful, i.e. if it adds to the final knowledge of H, then P(H|I′) will have a less spread out profile, with low probability for most H but concentrated, with high probability, in the neighborhood of the true channel H_0. In [COU 09] and [COU 10a], Couillet et al. describe in detail the channel estimation operations that start from the a priori distribution P(H|I) and condition the decisions to be made; these are discussed later in this chapter. We now describe the techniques known as reinforcement techniques, which allow continuous learning as the environment evolves: every action of the agent modifies its environment and makes it possible to acquire more a posteriori information about this environment. These techniques deviate from the ideal Bayesian techniques described here, but propose robust algorithms for sequential learning.

4.4.3. Reinforcement techniques: general case

In many problems encountered in the CR domain (for example, opportunistic access to the spectrum or reconfiguration in an unknown environment), the decision maker faces many choices without a priori knowledge of their performance. Nonetheless, it must find a strategy that enables it to maximize the quality of service offered to the users without disturbing the rest of the network. In this framework,


the decision maker has no alternative but to try the different choices available to it and estimate their performance. This estimate is the reinforcement signal that will enable the decision maker to adapt its behavior to its environment. If the decision maker spends enough time on each of the possible choices, we can easily imagine that it will know their performance with sufficient precision to make an appropriate decision when the time comes. However, if it devotes too much time to estimating the performance of the possible choices, the user will not benefit from the collected information. As a consequence, the decision maker faces a dilemma between the immediate exploitation of the choices that seem most profitable (i.e. those with the best current estimate) and the exploration of other choices in order to improve the performance estimates of the available choices. In the rest of this section, we will separate the case where the chosen action depends on the present state of the decision maker from the case where the notion of state does not exist or is considered independent of the decisions to be taken.

4.4.3.1. Bellman's equation

During the summer of 1949, Richard Bellman, a 28-year-old mathematician at Stanford University, already renowned for his promising work in number theory, was hired as a consultant at the RAND Corporation, a research and development institution founded in 1945 by the U.S. Air Force. Bellman was interested in the applications of mathematics, and it was suggested that he work on multistage decision-making processes. At that time, research in mathematics was not really appreciated by the Department of Defense or by the politicians who also directed the Air Force. Bellman's first task was therefore to find a name for his work that would gratify his executives. He selected the word programming, which at that time referred more to planning and scheduling than to programming in the algorithmic sense of our time.
Then he added the term dynamic to evoke the idea of evolution over time. The terminology of dynamic programming thus served as an umbrella under which Bellman could cover his mathematical research activities at the RAND Corporation. Dynamic programming is based on a technique called Bellman's optimality principle. This general principle states that the solution to a global problem can be obtained by decomposing the problem into subproblems that are simpler to solve. An elementary but classical example is the computation of shortest paths (or paths with lowest costs) in a graph; a famous problem in this context is the traveling salesperson problem. Bellman started working on the theory of optimal control while studying the optimality principle of dynamic programming. This domain deals with the problem of finding a control strategy for a given system in order to satisfy an optimality criterion involving a cost function that depends on state and control variables. For example, consider a car traversing a hilly road. The question is to determine how the driver must drive in order to minimize the total duration of the journey. Here, the control strategy means how the driver must press the accelerator or brake pedal. The system


consists of the car and the road. The optimality criterion is to minimize the overall duration of the journey. Control problems generally include auxiliary constraints; in the example considered, these may be the limited quantity of petrol, speed limits, etc. A cost function here is a mathematical expression giving the travel time as a function of speed, geometric considerations of the road, etc. We introduce value functions at different times or intermediate stages and calculate them starting from the end, by recursive induction. We will use this optimality principle to learn quality as well as the optimal strategy in the following sections.

4.4.3.2. Bellman's equation to reinforcement techniques

Let us consider an entity that perceives its environment through sensors and acts accordingly. Perceptions are used not only to act but also to improve the performance of the agent in the future. Let us consider a finite set S describing the possible states of the agent. In each state s ∈ S, the agent has a finite set of actions denoted by A(s). When the agent chooses an action a_t at time t, it modifies its environment and therefore its perceptions. It goes from state s_t to state s_{t+1} and receives a reward r_t = r(s_t, a_t). In general, the perceptions and the states do not permit reconstructing the entire environment, and the effect of an action on perceptions is a stochastic dynamic process in discrete time. We describe the case in which the process is Markovian of order 1, i.e. at any time t, the probability of going from s to s′ depends on s and a only, and not on the previous states and actions:

P(s_{t+1} = s′ | s_t = s, a_t = a, s_{t−1}, a_{t−1}, …, s_0, a_0) = P(s_{t+1} = s′ | s_t = s, a_t = a) = P_{sas′}    [4.9]

This defines a Markov decision process, given by the quadruple (S, A, P, r). We call a deterministic Markov policy a function π : S → A that associates with each state an action to be performed. More generally, we can define a non-deterministic Markov policy π which, given a state s, associates with it a probability distribution on the action space: π(s) ∈ Δ(A(s)), where Δ(A(s)) is the set of probability distributions on A(s). We define the value function of a state s under a deterministic policy π as the discounted cumulated reward:

v^π(s) = E[ (1 − γ)r_0 + (1 − γ)γ r_1 + (1 − γ)γ² r_2 + … ]

where E is the mathematical expectation under the transition distribution P, and γ a discount parameter (called the discount rate). This parameter γ determines the present value of future rewards. Unlike dynamic programming, in which the complete model of the environment is assumed to be known, reinforcement learning relaxes a certain


number of assumptions. The agent only knows the perceptions and states and should continuously improve its policy by attempting new actions to better understand the consequences of its actions on the environment. One of the advantages of reinforcement techniques is that they are rather general models that do not require all the parameters of the environment and allow dynamic adaptation when conditions change. To start with, let us study the reinforcement learning models that are directly constructed from Bellman's optimality equation (complete model). Then we will see what we can do if this information is unreliable. We define the discounted cumulative reward, also called quality, as follows:

Q^π(s, a) = (1 − γ) E( Σ_{t≥0} γ^t r_t | s_0 = s, a_0 = a )

which can be rewritten as:

Q^π(s, a) = (1 − γ) r(s, a) + γ Σ_{s′∈S} P_{sas′} v^π(s′)

Then the Bellman equation can be written as:

v^π(s) = (1 − γ) r(s, π(s)) + γ Σ_{s′∈S} P_{sπ(s)s′} v^π(s′)

This equation gives a recursive relationship on v^π(s) for a state s by binding it to the value function of its successor states. We can evaluate the value function of a Markov policy by solving the linear system of |S| (cardinality of S) equations with |S| unknowns. Note that to solve this system we must know the values of P_{sπ(s)s′}. We will describe in a later section how to learn the value function when P is unknown.

4.4.3.3. Value update

The method consists of iterating on values to solve Bellman's optimality equations incrementally and then to construct an optimal policy. This technique generates a sequence v_0, v_1, v_2, … of value functions, which converges toward v*. The algorithm is described as follows:
– v_0 is set arbitrarily (e.g. the null vector);
– update the value: v_{t+1}(s) = (1 − γ) r(s, π(s)) + γ Σ_{s′∈S} P_{sπ(s)s′} v_t(s′);
– stop the iteration if the difference max_s |v_{t+1}(s) − v_t(s)| ≤ ε.

Bellman's optimality equation is:

v*(s) = max_{a∈A(s)} [ (1 − γ) r(s, a) + γ Σ_{s′∈S} P_{sas′} v*(s′) ]
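The value-update iteration, applied directly to Bellman's optimality equation, can be sketched in a few lines of Python. The two-state MDP below (state names, rewards, and transition probabilities) is invented purely for illustration; the backup uses the normalized operator (1 − γ)r + γ Σ P v adopted in this section:

```python
def bellman(P, r, v, s, a, gamma, states):
    """Normalized one-step Bellman backup: (1-gamma)*r(s,a) + gamma*sum_s' P*v."""
    return (1 - gamma) * r[s][a] + gamma * sum(
        P[s][a][s2] * v[s2] for s2 in states)

def value_iteration(states, actions, P, r, gamma, eps=1e-6):
    """Iterate v_{t+1}(s) = max_a backup(s, a) until
    max_s |v_{t+1}(s) - v_t(s)| <= eps."""
    v = {s: 0.0 for s in states}                 # v0: the null vector
    while True:
        v_new = {s: max(bellman(P, r, v, s, a, gamma, states)
                        for a in actions[s]) for s in states}
        if max(abs(v_new[s] - v[s]) for s in states) <= eps:
            return v_new
        v = v_new

# Hypothetical 2-state MDP: staying in s1 pays 1 forever; "hop" from s0
# reaches s1 with probability 0.8 (all names and numbers invented).
states = ["s0", "s1"]
actions = {"s0": ["stay", "hop"], "s1": ["stay", "hop"]}
P = {"s0": {"stay": {"s0": 1.0, "s1": 0.0}, "hop": {"s0": 0.2, "s1": 0.8}},
     "s1": {"stay": {"s0": 0.0, "s1": 1.0}, "hop": {"s0": 0.9, "s1": 0.1}}}
r = {"s0": {"stay": 0.1, "hop": 0.0}, "s1": {"stay": 1.0, "hop": 0.0}}

gamma = 0.9
v_star = value_iteration(states, actions, P, r, gamma)
# A greedy policy extracted from v*: pi*(s) in argmax_a backup(s, a).
pi_star = {s: max(actions[s],
                  key=lambda a: bellman(P, r, v_star, s, a, gamma, states))
           for s in states}
```

On this toy model the iteration converges to v*(s1) = 1 (perpetual unit reward, normalized) and the extracted policy hops toward s1 and then stays there, illustrating the greedy policy construction discussed next.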


In a finite horizon, this result of Bellman's can be depicted using a graph: the Markov decision process corresponds to a shortest path problem on a weighted graph. The result says that any subtrajectory of an optimal trajectory is itself optimal, and we can show that this extends to infinite horizon problems. There also exists a generalization of this method that modifies the value function only on a subset of states at each stage of the algorithm.

4.4.3.4. Iteration algorithm for policies

If we know P and r, we can directly solve the Bellman equation and obtain v*. We can then construct an optimal policy from the Bellman equation by setting:

π*(s) ∈ arg max_{a∈A(s)} [ (1 − γ) r(s, a) + γ Σ_{s′∈S} P_{sas′} v*(s′) ]

We say that a policy π is better than another policy π′ if the value function v^π(s) obtained starting from s under the policy π is greater than or equal to the value function v^{π′}(s) obtained under π′ starting from s. We have the following result on the improvement of policies: policy π is better than π′ if Q^{π′}(s, π(s)) ≥ v^{π′}(s), ∀ s ∈ S. On the basis of this result, we construct an algorithm that improves the policy at each step. Since there are only a finite number of states and a finite number of actions in each state, the process stops after a certain number of steps (otherwise we would end up returning to the same actions). The algorithm is described as follows:
– choose an initial policy π_0 and evaluate it by calculating Q^{π_0};
– construct a new policy: π_1(s) ∈ arg max_a Q^{π_0}(s, a), ∀ s ∈ S;
– by the result on the improvement of policies, the newly obtained policy π_1 is better than π_0. If π_1 = π_0, the algorithm stops; if not, we start again taking policy π_1 as the initial policy.

Even if the algorithm stops in a finite number of steps, the computational complexity of Q^π at each stage can be very high.

4.4.3.5. Q-learning

Q-learning is a reinforcement learning technique whose goal is to learn the quality values (Q-values). The iterative method is the following:

Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + [ (1 − γ) r(s_t, a_t) + γ Σ_{s′∈S} P_{s_t a_t s′} max_{a′} Q_t(s′, a′) − Q_t(s_t, a_t) ]    [4.10]

In this equation, we replace r(s_t, a_t) by the perception r_t and the summation over s′ by the single observed state s_{t+1}, the correction term being modified by a multiplicative factor α_t. Then the learning algorithm of Q can be written as:


– choose arbitrarily Q_0(s, a), ∀ s, a;
– extract the current state s_t and choose an action a_t ∈ A(s_t): initially, all actions of all the states are tested, then the function Q is exploited while exploration continues;
– observe the perceived reward r_t and the new state s_{t+1};
– update Q_t(s_t, a_t) using:

Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + α_t [ (1 − γ) r_t + γ max_{a′} Q_t(s_{t+1}, a′) − Q_t(s_t, a_t) ]

and decrease the coefficient α_t;
– repeat from step 2.

Assuming that the coefficients α_t satisfy:

α_t ≥ 0,   Σ_{t≥0} α_t = +∞,   Σ_{t≥0} α_t² < +∞,
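As a rough sketch of this update loop, the following Python fragment runs tabular Q-learning with an ε-greedy exploration rule on a hypothetical two-state toy environment (the environment, the value of ε, and the learning-rate schedule α = 1/n, which satisfies the conditions above, are all illustrative choices, not part of the original text):

```python
import random

random.seed(0)

# Hypothetical 2-state, 2-action toy environment: action 1 always leads to
# state 1, which pays reward 1; action 0 leads back to state 0 (reward 0).
def step(s, a):
    s_next = 1 if a == 1 else 0
    reward = 1.0 if s_next == 1 else 0.0
    return s_next, reward

gamma, eps = 0.5, 0.1
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
visits = {key: 0 for key in Q}

s = 0
for t in range(5000):
    # epsilon-threshold rule: explore with probability eps, exploit otherwise.
    if random.random() < eps:
        a = random.choice((0, 1))
    else:
        a = max((0, 1), key=lambda b: Q[(s, b)])
    s_next, r_t = step(s, a)
    visits[(s, a)] += 1
    alpha = 1.0 / visits[(s, a)]   # sum alpha = inf, sum alpha^2 < inf
    target = (1 - gamma) * r_t + gamma * max(Q[(s_next, b)] for b in (0, 1))
    Q[(s, a)] += alpha * (target - Q[(s, a)])
    s = s_next
```

After a few thousand steps the greedy policy extracted from Q chooses action 1 in both states, i.e. the agent has learned to reach and stay in the rewarding state.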

we can show that this algorithm converges if, for each pair (s, a), the function Q continues to be updated.

Choosing an action:
– ε-threshold: draw uniformly a number λ in the interval [0, 1]. If λ < ε, then explore a new action a_t at random (draw uniformly an action a_t ∈ A(s_t)). If λ ≥ ε, then exploit by choosing a_t ∈ arg max_a Q(s_t, a);
– choose an action a_t ∈ A(s_t) according to the rule given by Q(s_t, a_t) / Σ_a Q(s_t, a);
– choose a_t according to the Boltzmann–Gibbs distribution given by e^{(1/τ) Q(s_t, a_t)} / Σ_a e^{(1/τ) Q(s_t, a)}. When the temperature τ is fairly large, the distribution is almost uniform; when τ tends to zero, the distribution approaches arg max_a Q(s_t, a).

4.4.4. Reinforcement techniques: slot machine problem

4.4.4.1. An introductory example: analogy with a slot machine

Let us suppose that a player enters a casino one day and is faced with a certain number of slot machines. S/he then seeks to maximize the earnings collected over a certain number of tries. If this player has complete information on the average earnings of each slot machine, an optimal strategy is to keep playing the machine having the highest average earnings. However, in the case where the player has no information on what s/he could win by playing on one machine or another, s/he has no choice but to test the different machines to estimate their average earnings. This model is known in the literature as the MAB (multiarmed bandit) problem. If we imagine that these machines are subbands that we would like to access or the


configurations to be tested, these decision and learning problems in CR are similar to a slot machine problem. This particular problem is known as opportunistic spectrum access (see section 1.3.2.4).

4.4.4.2. Mathematical formalism and fundamental results

Let us consider a set K = {1, 2, …, k, …, K} of K slot machines among which the decision maker seeks to determine the machine that will provide the highest average gain. At each instant t = 0, 1, 2, …, the decision maker plays one of the machines, sequentially, following a certain strategy π. Each time a machine k is played, the player collects a reward r_t drawn from the machine's own distribution θ_k. We will assume that the earnings collected from the same machine are independent and identically distributed. The distributions θ_k are supposed to be independent and stationary, but not all identical. Finally, we denote by μ_k = E[θ_k] the expectation of a distribution θ_k, and by μ* = max_k {μ_k} the expectation of the distribution associated with the optimal machine.

In this context, it is possible to define a notion of cumulative loss, called “regret”, as follows:

R_t^π = t·μ* − Σ_{m=0}^{t−1} r_m    [4.11]

Under these assumptions, and letting Δ_k = μ* − μ_k, the expected cumulative regret can be written as:

E[R_t^π] = Σ_{k=1}^{K} Δ_k · E[T_k(t)]    [4.12]

where T_k(t) denotes the number of times the machine k has been played up to the instant t.
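The equality between the definition [4.11] and the decomposition [4.12] can be checked numerically. In the sketch below, the two Bernoulli machines and the naive uniformly random strategy are hypothetical choices made only for illustration:

```python
import random

random.seed(2)

# Two hypothetical machines and a naive uniformly random strategy. We compare
# the empirical average of the regret R_t = t*mu_star - sum of rewards [4.11]
# with the decomposition sum_k Delta_k * T_k(t) [4.12].
mus = [0.3, 0.7]
mu_star = max(mus)
t_max, trials = 1000, 200
avg_regret, avg_decomp = 0.0, 0.0
for _ in range(trials):
    gain, pulls = 0.0, [0, 0]
    for _ in range(t_max):
        k = random.randrange(2)          # the strategy: pick a machine at random
        pulls[k] += 1
        gain += float(random.random() < mus[k])
    avg_regret += (t_max * mu_star - gain) / trials
    avg_decomp += sum((mu_star - mus[k]) * pulls[k] for k in range(2)) / trials
```

Both averages come out close to t_max · (μ* − (μ_1 + μ_2)/2) = 200: a strategy that never learns pays a regret that grows linearly in t, which is precisely what the logarithmic bounds discussed next improve upon.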

We will say that a given strategy is efficient if it minimizes the average cumulative regret. In 1985, Lai and Robbins [LAI 85] showed that, whatever the adopted strategy may be, the average cumulative regret is necessarily greater than or equal to a logarithmic function of the time t. They moreover showed that the leading coefficient of this function depends on the distributions θ_k considered. Consequently, no player can expect, on average, to obtain a smaller average cumulative regret. Finally, they showed that strategies capable of achieving this lower bound asymptotically do exist, and they explicitly gave their form for certain distributions (Gaussian, Bernoulli, etc.).

4.4.4.3. Upper confidence bound (UCB) algorithms

The algorithms presented by Lai and Robbins [LAI 85] are part of a large family of algorithms whose principle is as follows:


– associate with each machine an index that synthesizes the information collected on the machine until the instant t;
– at instant t + 1, choose the machine that has the largest index;
– the decision maker plays the selected machine and obtains a gain r_t drawn from the distribution associated with the machine;
– update the index of the played machine.

The optimal-form indices proposed by Lai and Robbins often demand a long and tiresome calculation (one actually computes a generalized likelihood ratio) and assume memorization of all the gains collected until the instant t. In addition, the results given by Lai and Robbins are valid only asymptotically. We prefer in this chapter to describe suboptimal indices that are very simple to calculate and guarantee an average cumulative regret smaller than some logarithmic function of time, i.e. bounds that hold uniformly in time, whatever the instant t considered. The general form of the indices considered is as follows:

B_{k,t,T_k(t)} = X̄_{k,T_k(t)} + A_{k,t,T_k(t)}    [4.13]

where the terms that appear in the above equation are defined as follows:
– B_{k,t,T_k(t)} is the index associated with the machine k after it has been played T_k(t) times, up to iteration t. The B indices provide an upper confidence bound (UCB) on the actual result, as they are optimistic estimations of the performance of each of the machines;
– X̄_{k,T_k(t)} is the empirical average reward collected by playing the machine k;
– A_{k,t,T_k(t)} is an added bias whose role is to overestimate the performance of the machine.

In the rest of this section, the results presented are considered only in the case of a set K of distributions with support bounded in [0, 1].

4.4.4.4. UCB1 algorithm

In the case of UCB1, the bias has the following form:

A_{k,t,T_k(t)} = √( α·ln(t) / T_k(t) )    [4.14]

where α is a positive real number. It is possible to prove that the strategy that always chooses the machine with the highest index has a bounded average cumulative regret if α > 1 [AUD 07, AUE 02]:

E[R_t^{π=UCB1}] ≤ Σ_{k:Δ_k>0} (4α / Δ_k) · ln(t)    [4.15]
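A minimal sketch of UCB1 on Bernoulli machines follows; the availability probabilities (0.2, 0.5, 0.8), the horizon, and the choice α = 2 (which satisfies α > 1) are invented for illustration:

```python
import math
import random

random.seed(1)

def ucb1(means, horizon, alpha=2.0):
    """Sequentially play Bernoulli machines using the UCB1 index
    B_k = empirical_mean_k + sqrt(alpha * ln(t) / T_k(t)), alpha > 1.
    Returns the number of pulls of each machine."""
    K = len(means)
    counts = [1] * K
    # Initialization: play each machine once.
    sums = [float(random.random() < means[k]) for k in range(K)]
    for t in range(K, horizon):
        index = [sums[k] / counts[k]
                 + math.sqrt(alpha * math.log(t) / counts[k])
                 for k in range(K)]
        k = max(range(K), key=index.__getitem__)      # largest index wins
        sums[k] += float(random.random() < means[k])  # Bernoulli reward
        counts[k] += 1
    return counts

# Hypothetical channel availabilities (probability that each band is free).
counts = ucb1([0.2, 0.5, 0.8], horizon=10000)
```

Over 10,000 plays, the machine with the highest mean ends up being played far more often than the two others combined, while the suboptimal machines are still sampled at the logarithmic rate that produces the bound [4.15].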


4.4.4.5. UCBV algorithm

In the case of UCBV, the bias has the following form:

A_{k,t,T_k(t)} = √( 2ξ·V_k(t)·ln(t) / T_k(t) ) + 3·c·ξ·ln(t) / T_k(t)    [4.16]

with c ≥ 1, ξ > 1, and where V_k(t) represents the empirical variance associated with the machine k. It is possible to prove that the strategy that always chooses the machine with the highest index has a bounded average cumulative regret [AUD 07]:

E[R_t^{π=UCBV}] ≤ C_ξ Σ_{k:Δ_k>0} ( σ_k²/Δ_k + 2 ) · ln(t)    [4.17]

where C_ξ is a factor that depends on the parameter ξ, and σ_k² is the variance associated with each machine.

4.4.4.6. Application example: opportunistic spectrum access

During the past century, a significant part of the spectral resources was exclusively dedicated to the many services that appeared year after year. With the seemingly unending increase in the need to allocate frequency bands to emerging wireless applications, the world of radio communications was faced with a shortage of spectral resources. Nevertheless, a recent study showed that in reality these resources were underused (see Figures 1.5 and 1.6). In other words, it is not the abundance of frequency bands that is to be put in question but rather the management of this resource. In order to exploit the bands that are underused, at a certain moment in a given place, and make them accessible to other services, the CR community devised the distribution of access rights as follows:
– Equipment or networks that use a frequency band that has been allocated to them are called primary users. They have priority over the band and have all access rights (within the limit of those delegated by the local regulation authority).
– Equipment or networks that attempt to benefit from a frequency band, momentarily, during the absence of the primary network are called secondary users. They must respect the priority of the primary users.

To avoid disturbing the primary networks in their neighborhood, secondary users need to scan their environment in order to detect the possible activity of a primary user. Based on the collected information, a decision is taken (e.g. spectrum access, change of frequency band, continue to scan), followed by appropriate actions (e.g. reconfiguration and data transmission). The secondary user device must therefore follow the elementary cognitive cycle (perception–reasoning–action) mentioned earlier; hence, CR technology naturally responds to these specifications. The framework provided by the MAB problem is one of the models used to describe the opportunistic spectrum


access problem from the point of view of the secondary users. Indeed, the machines in this case correspond to the different frequency bands that the secondary user (i.e. the decision maker) would like to access. Time is assumed to be divided into blocks of defined size (the size of a data packet) t = 0, 1, 2, …. For each new block t, the CR equipment chooses a frequency band, explores this band in search of a primary user, and accesses it if the spectrum resource is available. In this case, the equipment transmits a data packet; otherwise, the device awaits the next block. In all cases, at the end of the operations performed during a block, the decision engine determines a new frequency band to be explored, and the equipment repeats the aforementioned cycle. After each cycle, the decision-making engine collects a reward that depends on the user's needs. This gain could be, for example, channel availability (0 for an occupied channel, 1 for a free channel) or transmission throughput. Hence, during a communication, the CR equipment seeks to maximize these accumulated gains (spectrum accesses or accumulated throughput). As a result, it is possible to use the tools described above (reinforcement learning in general and UCB algorithms in particular) to address this slot machine problem. The behavior of the UCB1 and UCBV algorithms in the case of 10 channels (i.e. frequency bands or machines) [JOU 10a] is illustrated in Figure 4.3. In this particular example, the cognitive agent is interested in the availability of the probed channels. The curves show the proportion of time spent selecting the channel that is most available on average. It is observed that both algorithms select more and more often the channel that is, on average, least used by the primary users (and hence the one that maximizes the gains of the secondary cognitive agent). However, their behaviors differ slightly: UCBV, which uses the empirical variance in the expression of its indices, seems less efficient, on average, at the beginning of the experiment. This phase can be interpreted as the learning time. Afterward, UCBV exploits better and better the information collected on the different channels; during this second phase, the selection rate of the best channel increases rapidly, which means that the cognitive agent most often selects the best channel.

4.4.5. Artificial intelligence

The term “artificial intelligence” might cover all the methods discussed so far, in addition to those that we will present afterward. In fact, by artificial intelligence we mean all the methods that try to give learning and decision-making abilities to a system, hence conferring on it an aptitude to “reason” in order to adapt intelligently to its environment. Nevertheless, we have preferred to set apart a few techniques and present them, thereafter, in a specific context related to a fundamental problem in CR, i.e. DCA (see section 4.6).


Figure 4.3. Selection of the channel most available to the secondary user in the context of opportunistic spectrum access. The decision-making agent uses UCB algorithms. The curve illustrates the convergence of the cognitive agent toward the channel that is, on average, least used by the primary network; the secondary user can thus expect, on average, to maximize its number of spectrum accesses [JOU 10a]

4.5. Decision making and learning from the network perspective: game theory

4.5.1. Active or passive decision

Decision making can be divided into two approaches: passive decision making, which does not result in an action that modifies the environment, and active decision making, which results in a real action on the environment. In general, passive decisions are made with the objective of choosing an action that does not modify the environment and whose goal is to accelerate the learning or the selection of a parameter about which the agent has only incomplete information. Among passive actions, we mention the choice of the next information source to be exploited; translated into a human context, this choice is typically made when we want to collect information on the weather conditions (Do I have to turn on the television and wait for the weather report? Do I have to look at the sky and evaluate it myself? etc.). Knuth [KNU 02] and Knuth et al. [KNU 07] addressed these issues both from the theoretical perspective (e.g. the mathematical definition of a question and the good choice of the next question to pose [KNU 02]) and from the practical perspective (e.g. the design of a robot that recognizes geometric shapes quickly by optimally choosing its next observation [KNU 07]). In the framework of CRs


where a large amount of data must sometimes pass through the opportunist network for its combined learning, it may be important to select the requested information carefully. However, few studies are known to date in this CR framework. Among the examples of passive decisions involving the choice of a parameter on which only incomplete information is available, we mention the context of channel estimation which, coupled with the learning described above, leads to the choice of a channel Ĥ_0 that best estimates the true channel H_0. In [COU 09] and [COU 10a], decision methods based on estimators with minimum mean square error (MMSE) are developed in the case of partial or complete knowledge I of the temporal coherence and spatial parameters of the channel. The general method is to decide on an error measure written as a function of the probability distribution P(H|I):

Ĥ_0 = ∫ f(H) P(H|I) dH    [4.18]

In particular, the typical choice of the MMSE estimator gives:

Ĥ_0^{(MMSE)} = ∫ H P(H|I) dH    [4.19]
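As a numerical sketch of the MMSE decision [4.19], consider a hypothetical scalar real channel with a standard Gaussian prior observed through additive Gaussian noise (the noise variance, grid discretization of the integral, and the closed-form check are illustrative assumptions, not taken from the text):

```python
import math

# Hypothetical scalar channel h with Gaussian prior (mean 0, variance 1),
# observed as x = h + noise, noise variance 0.1. The MMSE decision [4.19]
# is the mean of the posterior P(h | x, I) obtained via Bayes' rule [4.8];
# the integral is approximated on a regular grid.
def mmse_estimate(x, noise_var=0.1, grid_step=0.001):
    grid = [i * grid_step for i in range(-5000, 5001)]
    prior = [math.exp(-h * h / 2) for h in grid]
    like = [math.exp(-(x - h) ** 2 / (2 * noise_var)) for h in grid]
    post = [p * l for p, l in zip(prior, like)]
    alpha = sum(post)                       # normalization factor of [4.8]
    return sum(h * w for h, w in zip(grid, post)) / alpha

est = mmse_estimate(1.0)
# For this Gaussian case the closed form is E[h|x] = x / (1 + noise_var).
```

For x = 1 the grid estimate matches the closed-form posterior mean 1/1.1 ≈ 0.909, showing how a sharp likelihood pulls the estimate away from the prior mean 0, in line with the learning interpretation of [4.8].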

In the context of decision making that results in an action from the agent, this action generally affects simultaneously the agent who takes the decision and its environment. This environment is most often composed of other actors, who then modify their behavior in response to the action of the first agent. Poor decision making may lead to a radical modification of the environment that will drive the next decision to again modify the environment significantly, eventually resulting in a situation of total instability. This situation typically occurs when several cognitive agents tend to share a resource (a transmission frequency band, for example) while no agent has a pre-established strategy to request the resource; this generally results in the corruption of much of the exchanged data and a very low average throughput of the communication. On the contrary, when the agents have an established strategy and each player knows the resource-sharing strategy of its competitors, non-cooperative decisions can be made to maximize the collective throughput. We will discuss situations in which numerous agents have a well-established strategy for a given game situation. The next sections of this chapter are dedicated to an introduction to such non-cooperative games in the particular framework of resource sharing. First of all, we introduce the mathematical basics of game theory and the choice of optimal strategies in a multiagent context.

4.5.2. Techniques based on game theory

So far we have studied the learning mechanisms for only one agent. In this section, we are interested in possible extensions to several agents. The difficulty


of learning in a multiagent context lies in the fact that decision making is now interactive. The reward (also called utility, payment, performance, etc.) of an agent depends not only on its own decision but also on the decisions of the other agents. When each agent observes the actions taken by the other agents in the prior stage, there exist sophisticated techniques such as Cournot's tâtonnement (best response), fictitious play (each agent chooses a best response to the frequency of the actions taken by the others), and best response dynamics. Learning becomes more complicated when these assumptions on observations are not satisfied. We will describe how to obtain approximate values in situations in which no agent is interested in changing its action unilaterally (it cannot do better in terms of immediate reward if the others hold on to their choices). Such a situation is known as a Cournot equilibrium or Nash equilibrium. Let us consider n agents and denote the set of these agents by N = {1, …, n}. Each agent j ∈ N has a set of choices (actions) denoted by A_j. At each time instance t, every agent j chooses an action a_{j,t} ∈ A_j. Agent j receives a reward r_{j,t} = r_j(a_{1,t}, …, a_{n,t}). The collection (N, {A_j}_{j∈N}, {r_j}_{j∈N}) is called a normal-form game or strategic-form game. For an agent j, an action b_j is a best response to:

a_{−j} := (a_1, …, a_{j−1}, a_{j+1}, …, a_n) ∈ Π_{j′≠j} A_{j′}

if:

b_j ∈ arg max_{b′_j} r_j(a_1, …, a_{j−1}, b′_j, a_{j+1}, …, a_n)
Let us denote by BR_j(a_{−j}) the set of strategies of agent j that are best responses to the action profile a_{−j}. This set plays a vital role in the concept of the Nash solution; in fact, a Nash equilibrium is characterized as a fixed point of this multivalued map.

4.5.2.1. Cournot's competition and best response

Cournot's competition is a very simple adjustment to the observations of the previous actions of the other agents. Agents can act simultaneously or by turns. At time t, agent j plays the best response to the actions a_{−j,t−1} taken by the other agents at time t − 1. Therefore, this mechanism requires the knowledge of what the other agents have chosen in the previous step. The convergence of this process is not guaranteed. A typical example that illustrates this divergence phenomenon is given by the fixed points of the logistic map f(x) = r·x(1 − x) for r = 4.

4.5.2.2. Fictitious play

At each time instance, each agent chooses an action that is the best response to the empirical average of the actions taken by the others. The stationary distributions of


this process are the Nash equilibria of the game. To formalize this, we introduce the empirical frequency of the actions performed by agent j until time t:

f_{j,t}(b_j) = (1/t) Σ_{t′=1}^{t} 1{a_{j,t′} = b_j}

This average can be calculated at each step by the following recursive equation:

f_{j,t+1}(b_j) = f_{j,t}(b_j) + (1/(t+1)) ( 1{a_{j,t+1} = b_j} − f_{j,t}(b_j) )
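The recursive running-average update can be checked against the batch definition; the random two-action sequence below is invented purely for illustration:

```python
import random

random.seed(3)

# Recursive empirical-frequency update used by fictitious play:
# f_{t+1}(b) = f_t(b) + (1/(t+1)) * (1{a_{t+1} = b} - f_t(b)).
# We verify that it reproduces the batch average of indicator functions.
actions = [random.choice(("L", "R")) for _ in range(1000)]
f = {"L": 0.0, "R": 0.0}
for t, a in enumerate(actions):
    for b in f:
        f[b] += (1.0 / (t + 1)) * ((1.0 if a == b else 0.0) - f[b])

batch = {b: sum(a == b for a in actions) / len(actions) for b in ("L", "R")}
# f and batch agree up to floating-point rounding, and f stays a
# probability distribution (its components sum to 1).
```

This incremental form is what allows each agent to track the opponents' empirical frequencies without memorizing the whole action history.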

This algorithm converges (in frequency) for two-agent zero-sum games (r_1 + r_2 = 0), for games with common interests, and for potential games. It should be noted that the fictitious play algorithm implicitly makes the assumption that each agent has the information on the strategies played by all the other agents in the previous step. Each agent can calculate this average (using the recursive equation, for example) while assuming that the others are stationary. This behavior is limited in the sense that the future reward is not taken into account.

4.5.2.3. Reinforcement strategy

The reinforcement learning mechanism gives more weight to the strategies that yield good rewards while still exploring new actions. Let us consider the case where each agent j has a reference level M_j on its reward and a learning rate λ_j > 0, and updates its strategy based on its current perception as follows:

x_{j,t+1}(b_j) = x_{j,t}(b_j) + λ_j s_{j,t} Σ_{b′_j ≠ b_j} x_{j,t}(b′_j)   if s_{j,t} ≥ 0
x_{j,t+1}(b_j) = x_{j,t}(b_j) + λ_j s_{j,t} x_{j,t}(b_j)   if s_{j,t} < 0

The term s_{j,t} = (r_{j,t} − M_j) / sup_a |r_j(a) − M_j| represents a reference measurement for agent j.

This algorithm has good convergence properties if the dimensions are small (say two or three). By changing the time scale, its trajectory can be approximated by a system of ordinary differential equations (ODEs). Most of the convergence proofs for learning mechanisms of this type use stochastic approximation techniques. Once the ODEs are obtained, we study vector fields, phase planes, and basins of attraction in order to deduce stability and instability conditions of the stationary points. These link learning mechanisms to game dynamics that are well known in evolutionary games (replicator dynamics, better response, Brown–von Neumann–Nash, projection dynamics, etc.). Consider the learning mechanism given by:

x_{j,t+1}(b_j) = x_{j,t}(b_j) + \lambda_{j,t} r_{j,t} \left( \mathbb{1}_{\{a_{j,t} = b_j\}} - x_{j,t}(b_j) \right), \quad j \in \{1, \ldots, n\}, \; b_j \in A_j

The term x_{j,t} is the strategy of agent j at time t. It can be shown that the above learning mechanism converges to a variant of the replicator dynamics given by:

\dot{x}_j(b_j) = x_j(b_j) \left[ \tilde{r}_j(b_j, x_{-j}) - \sum_{a_j \in A_j} x_j(a_j) \, \tilde{r}_j(a_j, x_{-j}) \right], \quad j \in N, \; b_j \in A_j
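The stochastic rule above can be simulated directly. The following toy example, with invented numerical choices, applies it to a 2x2 coordination game with rewards in [0, 1], for which the update provably keeps each strategy on the simplex and, as the replicator approximation suggests, ends up near one of the pure Nash equilibria:

```python
import numpy as np

def reinforcement_rule(R1, R2, lam=0.1, n_steps=3000, seed=0):
    """Stochastic learning rule above on a two-agent game with rewards
    in [0, 1]; the expected motion follows (a variant of) the replicator
    dynamics. Game matrices and rates are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    x1 = np.ones(R1.shape[0]) / R1.shape[0]
    x2 = np.ones(R1.shape[1]) / R1.shape[1]
    for _ in range(n_steps):
        a1 = rng.choice(len(x1), p=x1)        # sample actions from strategies
        a2 = rng.choice(len(x2), p=x2)
        r1, r2 = R1[a1, a2], R2[a1, a2]
        e1, e2 = np.eye(len(x1))[a1], np.eye(len(x2))[a2]
        # x_{t+1}(b) = x_t(b) + lam * r_t * (1{a_t = b} - x_t(b))
        x1 += lam * r1 * (e1 - x1)
        x2 += lam * r2 * (e2 - x2)
    return x1, x2

# Coordination game: both agents end up near one of the pure equilibria.
R = np.array([[1.0, 0.0], [0.0, 1.0]])
x1, x2 = reinforcement_rule(R, R)
```

Note that the update moves the whole strategy vector at once and preserves its sum, so no explicit projection back onto the simplex is needed as long as rewards stay in [0, 1].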


Radio Engineering

In the above dynamics, \tilde{r}_j(b_j, x_{-j}) := E_{x_{-j}} r_j(b_j, \cdot). It should be noted that these dynamics do not always converge; in particular, they may have limit cycles and chaotic orbits.

4.5.2.4. Boltzmann–Gibbs and coupled learning

The Boltzmann–Gibbs distribution finds the equilibrium of a perturbed game in which the reward functions are replaced by r_j(x) + \epsilon_j H(x_j). The function H(x_j) = -\sum_{b_j} x_j(b_j) \log(x_j(b_j)) is the entropy of the strategy x_j. The term \epsilon_j H(x_j) can be interpreted as a penalty or cost associated with the strategy x_j. When \epsilon_j \to 0, the penalty term approaches zero, i.e. agent j exactly maximizes its own reward. Since the rewards of actions other than the one it has chosen are not known by agent j, the latter estimates the rewards \hat{r}_j by updating them at each stage:

x_{j,t+1} = (1 - \lambda_{j,t}) x_{j,t} + \lambda_{j,t} \beta_j(\hat{r}_{j,t})
\hat{r}_{j,t+1}(b_j) = \hat{r}_{j,t}(b_j) + \frac{\mu_{j,t}}{x_{j,t}(b_j)} \mathbb{1}_{\{a_{j,t} = b_j\}} \left( r_{j,t} - \hat{r}_{j,t}(b_j) \right)
j \in \{1, \ldots, n\}, \; b_j \in A_j

where the component corresponding to the action a_j of the vector \beta_j(\hat{r}_j) is given by:

\beta_j(\hat{r}_j)(a_j) = \frac{e^{\hat{r}_j(a_j)/\epsilon_j}}{\sum_{b_j \in A_j} e^{\hat{r}_j(b_j)/\epsilon_j}}

where the parameters \lambda_j and \mu_j are the learning rates of agent j. It can be shown that, under certain assumptions on the choice of the learning rates (\lambda_{j,t}, \mu_{j,t}), the trajectories of this algorithm can be approximated by the solutions of the system of ODEs given by:

\dot{x}_j = \beta_j(\hat{r}_j) - x_j
\frac{d}{dt} \hat{r}_j(b_j) = E_x r_j - \hat{r}_j(b_j)
j \in \{1, \ldots, n\}, \; b_j \in A_j

The term E_x r_j represents the expected reward given the strategies of the other agents.

4.5.2.5. Imitation

Learning by coupled reinforcement and imitation is a variant of the coupled learning mechanism that modifies the Boltzmann–Gibbs distribution by adding an imitation component: the actions that give the best performance are used increasingly often, with a factor proportional to the probability that the action is selected. The strategy update is obtained by replacing the Boltzmann–Gibbs distribution \beta by \sigma, defined by:

\sigma_j(x_{j,t}, \hat{r}_{j,t})(b_j) = \frac{x_{j,t}(b_j) \, e^{\hat{r}_{j,t}(b_j)/\epsilon_j}}{\sum_{a_j \in A_j} x_{j,t}(a_j) \, e^{\hat{r}_{j,t}(a_j)/\epsilon_j}}
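A hedged, single-agent sketch of this coupled scheme: the other agents are summarized by a stationary environment, and the reward values, rates, and the clamping of the estimation step are illustrative choices, not part of the original algorithm:

```python
import numpy as np

def imitation_learning(true_reward, eps=0.1, lam=0.01, mu=0.05,
                       n_steps=10000, seed=0):
    """Coupled learning with the imitation-weighted distribution sigma.

    The agent samples actions from its strategy x, updates its reward
    estimates r_hat for the played action, and moves x toward sigma.
    """
    rng = np.random.default_rng(seed)
    k = len(true_reward)
    x = np.ones(k) / k            # current strategy
    r_hat = np.zeros(k)           # estimated rewards
    for _ in range(n_steps):
        a = rng.choice(k, p=x)
        r = true_reward[a]        # stationary, deterministic environment
        # estimate update for the played action; the step is clamped to 1
        # for numerical safety (an illustrative choice, not in the text)
        r_hat[a] += min(1.0, mu / x[a]) * (r - r_hat[a])
        w = x * np.exp(r_hat / eps)      # sigma: imitation-weighted softmax
        x = (1 - lam) * x + lam * w / w.sum()
    return x

x = imitation_learning(np.array([0.2, 0.5, 0.8]))
```

Because sigma re-weights the Boltzmann–Gibbs exponentials by the current strategy x, actions that already perform well and are already played often are imitated more, and the strategy concentrates on the best-rewarded action as the estimates settle.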


As with the replicator dynamics, the interior stationary points of the asymptotics of this learning mechanism are Nash equilibria of the game when the parameters \epsilon_j tend to zero.

4.5.2.6. Learning in stochastic games

Q-learning algorithms that depend on the state may be extended, in certain cases, to stochastic games (several interdependent Markov decision processes). For example, in the case of zero-sum games, the term \max Q is replaced by \min \max Q(s_{t+1}), which is the value of the game in state s_{t+1}. In non-zero-sum games, \max Q can be replaced by \mathrm{Nash}(Q(s_{t+1})), i.e. the reward obtained when a Nash equilibrium is played in state s_{t+1}. It should be noted that computing the reward at a Nash equilibrium in each step of the algorithm makes this algorithm complex: the complexity in the state space S and the action spaces A_j is of exponential order.

4.6. Brief state of the art: classification of methods for dynamic configuration adaptation

DCA [JOU 09, JOU 10b] is a CR problem that arises when the equipment must choose a satisfactory or an optimal operating configuration (according to optimization criteria) among K available configurations, in order to meet the three constraints fixing the design space mentioned above. Analysis of the different approaches suggested in the literature to address the DCA problem shows that, although all the proposed case studies are based on the same decision space, the decision approaches differ from each other depending on the assumptions made about the a priori knowledge that the cognitive agent has of the functional relationships that connect the three dimensions. This information, which models the functional relationship between the equipment parameters, the "objective" functions, and the environment metrics, can be envisioned as a fourth dimension added to the design space. Together, the four dimensions allow us to identify a tool to solve the defined problem.
In fact, if we consider that the designer provides the cognitive agent with a set of analytical relations sufficient to directly infer decision rules, then an expert approach could prove sufficient. On the other hand, if the designer does not provide this information, the cognitive agent will have no alternative but to learn on its own by interacting with its environment. Figure 4.4 provides a summary of the classification of the approaches detailed hereafter [JOU 10b].

4.6.1. The expert approach

The expert approach starts from the fact that the more complete the a priori knowledge is, the better the device can exploit it and react to its environment.


This knowledge is based on the expertise of engineers, acquired at the theoretical level as well as by means of series of measurements. Mitola [MIT 00a], in his thesis, creates a list of behavioral rules that are supposed to respond systematically to all the cases that the equipment will encounter during its use. For this, it is necessary to be able to represent this knowledge in such a way as to exploit it for control, by adequately adapting the operations. Mitola defined, for this purpose, a knowledge representation language called the radio knowledge representation language. As a result, the decision process becomes very simple, the complexity being shifted to the expression of knowledge, at the design level. We can obviously extend this approach by devising ways to acquire new knowledge during operation.

4.6.2. Exploration-based decision making: genetic algorithms

In the case of CR, and considering that the received information provides a more or less good estimate of reality, an approach was proposed in which the cognitive agent's decision is based on a genetic algorithm [RIE 04, RON 06]. In a general context, a genetic algorithm needs the notion of an individual (an admissible configuration, in our case). The latter is encoded in the form of a chromosome whose different genes represent the different manipulatable parameters of the radio equipment. The alleles of the chromosome represent particular instances of these parameters. A fitness function evaluates the adequacy of each individual with respect to the environment encountered. In the case of CR, this of course requires a priori knowledge of the objective functions, and the definition of a fitness function that evaluates the overall performance of solutions with respect to all the objectives. At each generation, a selection operation is conducted to favor the individuals whose evaluation is the most promising. Finally, a set of random operations, whose aim is to diversify the selected individuals, produces a new generation from the previous one. The most common operations are:
– crossover of two parent chromosomes to obtain two child chromosomes, allowing them to share their genetic inheritance;
– random mutation at the level of parent or child chromosomes.
The repetition of this chain of operations can converge to satisfactory solutions with respect to the chosen fitness function. The general model proposed by the team at Virginia Tech is based on two genetic algorithms and an intelligent control system called the cognitive system monitor. The genetic algorithms have the tasks of channel modeling on the one hand, and of determining


particular configurations on the other hand, in order to find an adequate solution to the problem at hand. The purpose of the intelligent control is to coordinate these operations and to implement medium- and long-term learning mechanisms in order to establish new decision rules that enable the recognition of already analyzed cases.
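The generation loop described above (selection, crossover, mutation) can be sketched as follows; the chromosome layout and the fitness model are invented stand-ins for the designer-supplied objective functions:

```python
import numpy as np

GENES = [4, 8, 3]   # alleles per gene: modulation, tx power, coding rate

def fitness(ind):
    """Invented stand-in for the designer-supplied objective models."""
    mod, power, rate = ind
    throughput = (mod + 1) * (rate + 1)          # toy objective
    energy = 0.5 * (power + 1) * (mod + 1)       # toy cost
    return throughput - 0.4 * energy             # scalarized multi-objective

def evolve(pop_size=30, n_gen=40, p_mut=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = np.stack([rng.integers(0, GENES[g], pop_size) for g in range(3)],
                   axis=1)
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        # selection: the most promising half survives and breeds
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            i, j = rng.integers(len(parents), size=2)
            cut = int(rng.integers(1, 3))        # one-point crossover
            child = np.concatenate([parents[i][:cut], parents[j][cut:]])
            for g in range(3):                   # random mutation
                if rng.random() < p_mut:
                    child[g] = rng.integers(GENES[g])
            children.append(child)
        pop = np.vstack([parents] + children)
    return pop[np.argmax([fitness(ind) for ind in pop])]

best = evolve()
```

Keeping the parents in the next generation (elitism) guarantees that the best solution found so far is never lost, at the price of reduced diversity, one of the parameter sensitivities discussed below.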

Figure 4.4. Decision-making methods based on a priori knowledge introduced by the designer [JOU 10b]

Under certain assumptions mentioned at the beginning of this section, the approach based on genetic algorithms is particularly promising for addressing CR-related problems, since on the one hand it makes it possible to explore a large solution space and, on the other hand, thanks to a permanently maintained population of more or less diversified individuals, it can adapt quickly to environmental changes. Nevertheless, these advantages have a cost:
– The assumptions according to which we know the analytical models related to the environment, the various system parameters, as well as the functions to be optimized, are not very realistic. Indeed, on the one hand these models are idealized and, on the other hand, we cannot assume that all the possible situations are already known and modeled.
– The fact that these algorithms manipulate a population of individuals, and that many operations are necessary generation after generation, makes the decision-making system particularly greedy in terms of time and computing capacity (and hence energy). In the context of CR, in which these resources are limited from the outset, this could become constraining.
– The evolutionary approach is very sensitive to the algorithm parameters (such as the size of the population; the selection, crossover, and mutation rates from one generation to the next; and the choice of the stopping criterion), and its success depends on these choices.

4.6.3. Learning approaches: joint exploration and exploitation

Analytical approaches may not capture all the complexity of the phenomena coming into play. In order to be able to function in more realistic scenarios, it is


possible to use learning-based methods. This is the case for neural networks, evolving connectionist systems, statistical learning of regression models, etc. Insofar as certain data or certain models necessary for decision making are missing, the cognitive agent is obliged to implement a learning process. Among the techniques found in the literature, we can divide these approaches into three categories:
– prediction of the system performance from previously collected data: this approach separates the learning and decision-making phases. A wide range of techniques fall into this category: neural networks and statistical approaches are examples used in the framework of CR. The objective is to extract, from the observation phase, a functional link between the environment, the operational parameters of the equipment, and the system performance, in order to infer decision rules. In a second phase, the cognitive agent uses these new rules to try to adapt itself to its environment;
– dividing the environment according to system experience and expert knowledge: this approach, in view of Colson's work, is akin to a clustering technique guided by expert information. Thus, by alternating learning and operation phases, the cognitive agent seeks to enrich its base of decision rules. The suggested general architecture is based on an evolving neural network proposed by Kasabov [KAS 98, KAS 07]; this is, in particular, a case of evolving connectionist systems, which aim to overcome the weaknesses of conventional neural networks thanks to a faster learning capacity and a more flexible, evolving neural structure;
– monitoring (also control or supervision) with partial information: in this case, the learning and decision-making phases are intermingled. At each iteration, the decision taken makes it possible to collect information on the equipment performance as well as on the environment. This information is immediately incorporated to generate a new decision.
The objective is to learn while using the equipment, in order to provide the requested service while ensuring that this service improves over time. This approach relies on the notions of prediction and reinforcement learning. A particular case of this class of problems is the multi-armed bandit (MAB), discussed earlier among the reinforcement learning methods.
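As an illustration of the MAB formulation, here is a sketch of the classical UCB1 index policy, where each arm stands for one candidate configuration; the Bernoulli success probabilities and step count are invented:

```python
import math
import random

def ucb1(arm_means, n_steps=5000, seed=0):
    """UCB1 on a toy bandit: each arm stands for one candidate radio
    configuration, its mean for that configuration's (unknown) success
    probability. All numbers are illustrative.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, n_steps + 1):
        if t <= k:
            arm = t - 1                   # initialization: try each arm once
        else:
            # empirical mean + exploration bonus shrinking with the counts
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0   # Bernoulli
        counts[arm] += 1
        sums[arm] += reward
    return counts

# Three candidate configurations with success probabilities 0.2, 0.5, 0.8:
counts = ucb1([0.2, 0.5, 0.8])
```

The index policy jointly exploits (the empirical mean term) and explores (the bonus term), which is exactly the joint exploration/exploitation announced in the title of this section.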

4.7. Conclusion

Learning and decision making are two vast domains for which solid mathematical theories (Bayesian probabilities, game theory, decision theory) exist, making it possible to systematically describe optimal methods of learning as well as optimal choices in terms of decision. However, we are still far today from reproducing the levels of adaptation and intelligence of an animal brain; one of the reasons stems from the complexity of the optimal methods mentioned above when the dimension of the cognitive network becomes too large and the range of possible decisions


broadens. A natural selection of the subset of "sensible decisions", as well as a natural selection of the sufficient parameters contained in each stimulus, is necessary to envision the large-scale development of flexible and fast cognitive methods. The different methodological tools introduced in this chapter (without being exhaustive) to develop cognitive algorithms (Bayesian probabilities, game theory, and reinforcement learning) offer promising solutions to the problem of decision making and learning for CR, but the problem of a global approach still remains open.

Chapter 5

Cognitive Cycle Management

5.1. Introduction

The design of cognitive radio (CR) equipment will not be possible with the current design tools used for conventional radio devices. The reason is that conceiving the transmission/reception part of such equipment is no longer simply a question of signal processing (radio). In most industrial domains, system design now evolves more and more toward a software/hardware co-design problem, with devices having a plurality of functionalities and operating modes. However, beyond these commonalities, CR equipment design specifically requires us to integrate an architecture to manage the various elements of the simplified cognitive cycle (see Figure 1.2), namely:
– sensors;
– the different means to decide on a reconfiguration;
– the capacity to adapt processing in real time.
Signal processing is not restricted here to the physical layer only; it includes the signal processing of all the layers up to the application layer (e.g. image processing, as illustrated in section 3.3.3). This reflects the fact that we consider CR in a broad sense, not restricting it to spectrum management only, as pointed out in Chapter 1. In the context of CR, we must also consider the signal processing of newly

Chapter written by Christophe Moy and Jacques Palicot.


added elements, in addition to the radio transmission chain itself. This includes the processing related to:
– sensors bringing additional information on the equipment's environment;
– the various decision-making modes that the equipment will support.
As a consequence, CR equipment (a terminal or smart PDA) will be subject to a very large variety of functional modes in comparison with current radio equipment. Hence, one difficulty will be to determine all these modes, to deduce the implications in terms of design, and finally to verify them. To circumvent this problem, a solution is to gain abstraction at the design level. This chapter deals with the study of management architectures that coordinate sensing, reconfiguration, and decision-making actions. Sensors are more specifically discussed in Chapter 3, reconfiguration methods are described in Chapter 11, and decision making is detailed in Chapter 4. The management architecture proposed here acts somewhat like a glue that allows all of this to be associated coherently and efficiently inside CR equipment. The pioneering work from research activities carried out by the authors, and the resulting original solution, are presented in this chapter. Currently, we do not see any other comparable approach that is specifically dedicated to CR design, which is why we have chosen to detail this proposal here. We can assert that it integrates considerations that appear unavoidable and that, consequently, foreshadow, at least in part, the future architecture and design methods of such equipment. For further investigation of the topic, refer for instance to the work carried out at UPC on ALOE [GOM 10], at Trinity College Dublin [NOL 06], and at RWTH Aachen University [PET 07], which also address this type of management architecture from a different perspective. A high-level design approach is also presented in this chapter.
It contributes to equipping the designer with rules for CR equipment design, and particularly for all the CR-specific parts that constitute the management architecture of the cognitive cycle. The role of the manager is to assist the designer in defining and implementing all the elements of this architecture. It is necessary to embed this architecture in the equipment to permit and guarantee the operation of the CR equipment according to the required specifications. This chapter is organized as follows. The structure and the proposed architecture for managing the cognitive cycle in CR equipment are described in section 5.2. In the next section, a high-level approach for CR equipment design is introduced. Finally, the interfaces of the management architecture of the cognitive cycle, which must be respected during equipment design in order to obtain some of the expected advantages, are detailed in section 5.4.

Cognitive Cycle Management

109

5.2. Cognitive radio equipment The CR equipment draws numerous benefits from the contributions of software radio (SR). As a result, the CR equipment design implies using the SR design techniques, which are mentioned in Chapters 10 and 11. However, the CR design also involves other aspects beyond SR, which are addressed here.

5.2.1. Composition of cognitive radio equipment

We conclude from the cognitive cycle depicted in Figure 1.2 that CR equipment can be schematically represented as shown in Figure 5.1. The constituent elements of the intelligent cycle are:
– sensors;
– a smart subsystem;
– adaptation means.

[Figure 5.1 shows the smart subsystem (metrics, analysis, decision, learning, fed by the user, network, environment, hardware, and electromagnetic environment) sending orders to the SDR communication subsystem, which contains the sensing means, the adapting means, and the protocol stack from multiple physical layers up to the application layer.]

Figure 5.1. Functional diagram of CR equipment initially proposed in [GOD 06]

The communication subsystem is composed of multiple standards (from the physical layer to the application layer) running on a flexible platform; hence, it is advantageously designed with an SR-like philosophy. It should be noted that this part must also include a (software) architecture for reconfiguration management, as discussed in Chapter 11. In general, at the level of the equipment, CR can be considered an extension of SR in the sense of autonomous adaptation; hence, a smart manager is proposed as an extension of the reconfiguration manager [GOD 06]. As a result, to transform SR equipment into CR equipment, both parts, i.e. sensors and a


smart subsystem, must be added, as shown in Figure 5.1. The term sensor here refers to a combination of electronic components and algorithms that transform signals into metrics of interest for the cognitive equipment. For example, we can consider a channel estimator to be a sensor, just like any other set of processing that makes it possible to extract environment metrics. In the classical sense, a thermometer or a charge-level indicator for the equipment's batteries is also a sensor. The information furnished by the sensors feeds the intelligent part of the equipment, which then takes adaptation decisions for the protocol stack in a broad sense, i.e. from the physical layer to the application layer (including and facilitating cross-layer perspectives). This indicates that reconfiguration can be performed in order to adapt the equipment's behavior to the situation revealed by its sensors, sent by the network, or both. For more information on the salient features of these sensors, see Chapter 3.

Given the discussions, and even disagreements, within the community about whether intelligence should be put in the network or in the equipment, it is reasonable to believe that a combination of the two is justified, as mentioned in Chapter 2. Depending on the situation, one orientation may be favored. However, the intelligence management scheme of the equipment proposed here is compatible both with a terminal-oriented approach and with a CR network management-oriented approach. The complexity of the processing associated with the management to be performed inside the equipment is of course lower in the second case, but the management architecture nevertheless remains necessary, even to coordinate less complex actions.

5.2.2.
A design proposal for CR equipment: HDCRAM

The reconfiguration manager must be taken into account from the very first stage of SR equipment design (see Chapter 10); hence, the CR manager must also be taken into account from the earliest stages of CR equipment design. Accordingly, CR equipment must be composed of the following:
– a flexible hardware platform;
– signal-processing elements (software and/or hardware);
– sensor-based signal-processing elements or dedicated circuits;
– an infrastructure to simultaneously support management and reconfiguration.
Hierarchical and distributed CR architecture management (HDCRAM) provides rules and elements to model such a system, regardless of its hardware composition. From an electrical engineer's perspective, HDCRAM's philosophy consists of supporting the management needs of heterogeneous multi-processor platforms, typically composed of a digital signal processor (DSP), general purpose processor


(GPP), field programmable gate array (FPGA), and application-specific integrated circuit (ASIC). The first work on the HDCRAM architecture was published in [GOD 06], but HDCRAM was not modeled until the work of [GOD 09] was published.

Figure 5.2. Schematic example of the HDCRAM’s architecture deployment for management of signal-processing operators [GOD 06]

As illustrated in Figure 5.2, and as mentioned earlier, HDCRAM suggests decomposing the management of the elements of CR equipment into two parts: reconfiguration management (ReM) and cognitive radio management (CRM). In addition, real-time reactivity constraints impose a distribution of the managers throughout the equipment, rather than a single centralized manager. This conclusion comes from various works carried out on SR design, which are discussed in Chapter 11 [DEL 05a, KOU 02]. In particular, distributed management not only allows a more effective handling of partial or very localized reconfigurations, but also provides better reactivity at the level of the elements to be reconfigured. This is because the controlling elements are co-localized with, and often partially overlap, the processing elements themselves (particularly in the


case of distributed circuits, especially if they are of a different nature). However, the distribution of managers throughout the equipment is not disordered: hierarchical management makes it possible to coordinate the various actions and to capture the information. Furthermore, the existence of a central manager is not called into question, because it is the condition for an overall coherent operation of the equipment. The result of this analysis is the HDCRAM architecture, as presented in Figure 5.2. HDCRAM comprises three levels of hierarchy:
– a general manager at level 1 (L1), which is unique;
– managers at level 2 (L2);
– managers at level 3 (L3), one associated with each processing element, also called an operator.
Each level comprises a couple formed by a reconfiguration manager (ReM) and a cognitive radio manager (CRM):
– level 1: L1_CRM/L1_ReM;
– level 2: L2_CRMU/L2_ReMU;
– level 3: L3_CRMU/L3_ReMU.
Let us recall here that radio equipment performs a series of operations across all the layers of the open system interconnection (OSI) model, proceeding (during transmission) from the application layer (voice coding or video streaming, for example) to the physical layer, through the intermediate layers (framing, for example). It should be noted that the HDCRAM approach is not limited to the physical layer, and hence to radio operations, but also integrates the notion of cross-layer optimization. We can represent this sequence of operations as the series of signal-processing operators seen at the bottom of Figure 5.2. This is true in all wireless communication devices, i.e. devices either transmitting or receiving via a radio link. Unlike a conventional radio design, an SR approach implies deploying the reconfiguration management architecture over more operators, which here takes the form of the ReM side of HDCRAM. For CR equipment, the CRM architecture of HDCRAM must also be deployed, as well as new operators, i.e. sensors.
This indicates that HDCRAM adds many elements on top of the initial simple signal-processing operators, which of course has a cost. Nevertheless, as indicated in Table 5.1, not all operators are reconfigurable, and it is therefore not necessary to equip them systematically with reconfiguration managers. Similarly, operators that are neither reconfigurable nor sensors do not require management facilities. Consequently, HDCRAM can be deployed in a minimal manner, limiting the additional cost to the minimum. HDCRAM can thus be deployed step by step in the equipment. Using this approach, it is not necessary to start a design from scratch when going from conventional equipment to CR equipment. In other words, conventional equipment can evolve gradually toward CR equipment, but inevitably it


has a cost. In the rest of this chapter, except when explicitly mentioned and for the sake of clarity, we will only consider operators that are required to be equipped with these managers.

Operator's role             Flexible   L3_ReMU   L3_CRMU
Radio processing operator   No         None      None
                            Yes        Yes       Yes
Sensing operator            No         None      Yes
                            Yes        Yes       Yes

Table 5.1. Existence conditions of L3_ReMU and L3_CRMU depending on whether the operator is reconfigurable or not, and/or a sensor

We observe in Figure 5.2 that each couple L3_CRMU/L3_ReMU corresponds to a signal-processing operator. Several operators, either because they contribute jointly to a larger-scale task or because they are executed on the same hardware components, can be associated with the same level 2 manager, i.e. L2_CRMU/L2_ReMU. As a result, highly sophisticated CR equipment will contain a large number of level 3 managers, several level 2 managers, and a single level 1 manager, which is the central manager of the equipment. Before going into further detail about their roles, it is necessary to explain their differences and interactions. First, it is interesting to note that on the side of the CRM intelligence manager, exchanges are carried out bottom-up, i.e. from level 3 toward level 1. On the side of the reconfiguration manager, on the other hand, exchanges are carried out top-down, i.e. from level 1 to level 3. The two managers communicate with each other by horizontal exchanges from the intelligent part toward the reconfiguration part, as illustrated in Figure 5.3. Level 3 is intimately linked to the implementation of the operator; it often overlaps with the operator itself, and hence depends strongly on the operator's hardware implementation. Level 2, on the other hand, is emancipated from the way in which the operator or operators that it manages are implemented. It coordinates the actions between multiple operators through their level 3 management. Level 2 thus incorporates a notion of abstraction that it provides to level 1, masking the actual execution of the operators and providing only the information that is understandable and necessary. Finally, level 1 performs the overall coordination of the equipment.
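The hierarchy just described can be captured in a small structural sketch; all class and method names below are invented for illustration (the chapter defines an architecture, not an API). Metrics flow bottom-up on the CRM side, reconfiguration orders flow top-down on the ReM side:

```python
class L3Unit:
    """Couple (L3_CRMU, L3_ReMU) attached to one operator or sensor."""
    def __init__(self, name, parent):
        self.name, self.parent = name, parent
        self.config = "default"

    def push_metric(self, metric, value):   # CRM side: bottom-up
        self.parent.push_metric(self.name, metric, value)

    def reconfigure(self, config):          # ReM side: top-down
        self.config = config

class L2Unit:
    """Couple (L2_CRMU, L2_ReMU) coordinating several L3 units."""
    def __init__(self, name, parent):
        self.name, self.parent, self.children = name, parent, {}

    def attach(self, op_name):
        self.children[op_name] = L3Unit(op_name, self)
        return self.children[op_name]

    def push_metric(self, op_name, metric, value):
        # abstraction: forward to level 1 only what it needs to know
        self.parent.push_metric(self.name, metric, value)

    def reconfigure(self, op_name, config):
        self.children[op_name].reconfigure(config)

class L1Manager:
    """Unique couple (L1_CRM, L1_ReM): overall coordination."""
    def __init__(self):
        self.l2, self.metrics = {}, {}

    def attach(self, name):
        self.l2[name] = L2Unit(name, self)
        return self.l2[name]

    def push_metric(self, l2_name, metric, value):
        self.metrics[(l2_name, metric)] = value
        # placeholder decision policy: order a reconfiguration on low SNR
        if metric == "snr_db" and value < 5:
            self.l2[l2_name].reconfigure("equalizer", "robust_mode")

# Deployment: one L2 unit managing an SNR sensor and an equalizer operator.
l1 = L1Manager()
phy = l1.attach("phy_chain")
snr_sensor = phy.attach("snr_sensor")
equalizer = phy.attach("equalizer")
snr_sensor.push_metric("snr_db", 3)   # metric goes up, an order comes down
```

The final call traces a full large-scale cycle: the metric climbs sensor, L2, L1, the decision is taken at L1, and the resulting order descends the ReM side to the operator concerned.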


Figure 5.3. Interactions between different hierarchical levels and two managers [GOD 06]

5.2.3. HDCRAM and cognitive cycle

Figure 5.3 reveals how HDCRAM makes it possible to execute the cognitive cycle of Figure 1.2. Information about the metric in question, coming from the sensors, goes up the CRM side to level L1, where a decision is taken. This decision is forwarded to the ReM side at level L1, then goes back down the ReM side, as the reconfiguration manager sends the commands down to the operators concerned. Owing to this three-level hierarchy, it is possible to accomplish this cycle at various scales, as shown in Figure 5.4. The advantages of this approach are explained below. A small-scale cycle allows a reconfiguration with very high reactivity, located at the level of the sensor itself. Let us take here the example of a signal-to-noise ratio (SNR) sensor that is based on a simple algorithm, reliable only at high SNR (greater than 10 dB, for example), and on another, more complex algorithm for low SNR (below 5 dB). The sensor may itself adapt its algorithm, over a short duration, according to the change of level, as shown in Figure 5.4(a). Thus, if the SNR changes from 11 to 3 dB, the sensor algorithm has to be switched to its complex version, in order to be able to operate at 3 dB. On the other hand, if the level rises again to 9 dB, a new algorithm switch will be performed back to the first version. This transformation can be completely hidden from the higher levels, and hence can be managed exclusively at level 3. However, it can be envisioned that the intelligent side sends the instantaneous configuration information to level 1. Thus, level 1 can update its information tables related to the computing power used, or even the electric power consumed, after each configuration.
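One simple reading of this SNR example can be sketched as follows; the algorithm names are placeholders, and a real design might add hysteresis between the 5 and 10 dB figures to avoid oscillating around the threshold:

```python
def choose_algorithm(snr_db, threshold_db=5.0):
    """Pick the SNR sensor's own estimation algorithm (level 3 decision).

    Below the threshold, the complex algorithm is required for the
    estimate to remain reliable; at or above it, the lightweight
    algorithm is used. This local switch can stay hidden from the
    higher levels of the hierarchy.
    """
    return "complex" if snr_db < threshold_db else "simple"
```

For example, a drop from 11 to 3 dB selects the complex version, and a subsequent rise to 9 dB switches back to the simple one, matching the scenario above.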


Figure 5.4. Smart cycle in HDCRAM. (a) Small cycle at level L3, (b) medium cycle consisting of levels L3 and L2, and (c) large cycle integrating all the three levels (L1, L2, and L3)


A cycle, as shown in Figure 5.4(b), is a typical case of adaptive signal processing. Let us take the example of a RAKE receiver that compensates the impulse response of a multipath channel [MOY 98]. In a CR approach, we can consider the following two operators: a channel estimator (sensor’s role) and a multipath combiner (variable coefficient filter). These two operators (via their managers at level 3) are managed by a single level 2 manager that transmits the coefficients’ updates from the channel estimator to the multipath combiner operator. In a classical context of adaptive signal processing, the only use of the channel impulse response that can be made in the entire equipment is that achieved in the 1990s by Moy et al. [MOY 98], i.e. performing a channel impulse response compensation (by maximal ratio combining in a RAKE, for instance). In the new context of CR, we can imagine that the channel impulse response might be used for other purposes. Therefore, the L2_CRMU can transmit this information elsewhere in the equipment via L1. Hence, information on the fact that the channel is in line-of-sight (LOS) or non-LOS (NLOS) can help the positioning of equipment by combining it with the information coming from a GPS. Furthermore, this information could be interesting for other actors in the network (infrastructure or other terminals) and hence L1 can also send this information to the network. We understand with this example how the CR manager makes it possible to share information between several operators of the equipment or several entities of an intelligent network to make cooperative sensing. Finally, in the case of large-scale reconfiguration, the largest intelligent cycle goes up to level 1 in order to change various elements of the protocol chain on separate processing devices (which are not connected at level 2, for example). 
Typically, this happens in the case of a vertical handover between standards, where many processing operators are involved (see Chapter 1). The general manager is responsible not only for large-scale reconfigurations, but also for centralizing information on the overall equipment and its environment, such as the diverse policies associated with the location of the equipment, the contract between the user and the operator, etc. Note that the choice of the scale of the cycle (small, medium, or large) depends on the nature of the decisions and reconfigurations, and is, for the time being, fixed at design time. Whether an intelligent cycle is small or large is not dynamically managed in the current HDCRAM approach, but could be in the future.
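The rule governing the three cycle scales can be sketched in a few lines of Python (all names are invented for illustration): a reconfiguration is handled at the lowest management level whose scope covers all the impacted operators.

```python
# Hypothetical sketch of the three cycle scales of Figure 5.4.
def cycle_scale(impacted_operators, l2_domains):
    """Return 'L3', 'L2', or 'L1' depending on how far up the cycle must go.

    impacted_operators: set of operator names touched by the reconfiguration.
    l2_domains: list of sets, each set being the operators under one L2 manager.
    """
    if len(impacted_operators) == 1:
        return "L3"  # small cycle: a sensor/operator adapts itself
    # medium cycle: all impacted operators share a single L2 manager
    if any(impacted_operators <= domain for domain in l2_domains):
        return "L2"
    return "L1"  # large cycle: cross-domain change, e.g. vertical handover

# Example: channel estimator + multipath combiner under the same L2 manager
domains = [{"chan_estimator", "multipath_combiner"}, {"fir_filter"}]
print(cycle_scale({"chan_estimator", "multipath_combiner"}, domains))  # L2
print(cycle_scale({"multipath_combiner", "fir_filter"}, domains))      # L1
```

In this toy model, the RAKE example above is a medium (L2) cycle, while a vertical handover touching operators spread over several L2 domains forces the large (L1) cycle.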

5.2.4. HDCRAM levels

5.2.4.1. Level L3

At the lowest level of HDCRAM, i.e. level 3, a manager is associated with each operator following the rules given in Table 5.1. It consists of an ReM unit, L3_ReMU, and an intelligence management unit, L3_CRMU. The L3_ReMU is the low-level code that makes it possible to reconfigure the associated operator while respecting the real-time
constraints (reconfiguration should not result in a loss of real-time execution of the equipment's signal processing). This code is often embedded in the operator's code itself. For example, for an operator coded in C, a simple if/else condition between two possible behaviors of the function can be considered as an ReM strategy. Another, more complex example is the use of partial reconfiguration of a field-programmable gate array (FPGA), to reconfigure a finite impulse response (FIR) filter for instance. For more information on some of the many possibilities of such a case, see [DEL 07b]. In yet another case, the L3_ReMU can be a processor core (such as LEON or MicroBlaze) embedded in an FPGA, which performs the reconfiguration routines of the FIR. These two examples show that the level 3 manager is very close to its associated operator and, consequently, both are often programmed in the same language. An L3_ReMU is necessary to manage a reconfigurable operator or sensor. In addition, a sensor has an L3_CRMU that manages the metrics that the sensor provides to the equipment. Depending on the nature of the sensor, the information is either simply transmitted by the L3_CRMU to the higher level of cognitive management (which will use it to make a decision higher in the hierarchy), or the L3_CRMU decides to reconfigure the sensor itself, as mentioned earlier in the case of the SNR sensor. This provides the possibility of having an intelligent cycle at multiple scales, as already indicated. We note here that reconfigurable operators always have an L3_CRMU (see Table 5.1). Indeed, one of the roles of the L3_CRMU is to verify that the operator functions properly after being reconfigured and, if it does not, to devise a solution, the simplest of which is to return to the previous configuration (fall back). On the other hand, for a non-reconfigurable sensor, it is not necessary to have an L3_ReMU.

5.2.4.2. Level L2

An intermediate level, L2, is proposed between the low-level managers at L3 and the high-level managers at L1. It consists of several pairs (L2_ReMU/L2_CRMU) having several roles. First, they not only offer L1 an abstraction of the implementation on the physical components, but also provide it with the necessary information. Thus, to reconfigure a processing operator on a DSP, level 1 management does not require direct access to the binary code, which can be of considerable size, but just the information about its local availability (and how to acquire it from the network otherwise), the size of the file, and the parameters of the operator. The level 2 managers then play a centralizing role, sending the adequate code to the level 3 managers that are closely associated with the operators, such as a channel estimator and a RAKE receiver.
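The L3 roles just described can be caricatured in a short sketch (operator parameters, names, and the health check are all invented): an L3_ReMU applies a new configuration, and the associated L3_CRMU verifies the operator afterward, falling back to the previous configuration on failure.

```python
# Hedged sketch of the L3_ReMU/L3_CRMU pair with fallback on failed reconfiguration.
class L3_ReMU:
    def __init__(self, operator):
        self.operator = operator          # dict of parameters, e.g. FIR taps
        self.previous = dict(operator)    # saved for fallback

    def reconfigure(self, new_params):
        self.previous = dict(self.operator)
        self.operator.update(new_params)

    def rollback(self):
        self.operator = dict(self.previous)

class L3_CRMU:
    def __init__(self, remu, health_check):
        self.remu, self.health_check = remu, health_check

    def verify(self):
        """Check the operator after reconfiguration; in the simplest case,
        return to the previous configuration if it misbehaves."""
        if not self.health_check(self.remu.operator):
            self.remu.rollback()
            return "rolled_back"
        return "ok"

fir = {"taps": 16, "rate": 1}
remu = L3_ReMU(fir)
crmu = L3_CRMU(remu, health_check=lambda op: op["taps"] <= 32)
remu.reconfigure({"taps": 64})       # faulty configuration (too many taps)
print(crmu.verify())                  # rolled_back: taps restored to 16
```

A real L3_ReMU would of course be written in the operator's own environment (C on a processor, partial bitstreams on an FPGA), as stated above; only the control logic is shown here.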


5.2.4.3. Level L1

The level L1 manager is the general manager of the equipment. It is found in the literature in other proposals, but originally related to ReM only, not to intelligence management. Let us quote, in chronological order, [KOU 02], the inspiring works from the European project End-to-End Reconfigurability (E2R) [BRA 06], as well as the work on the Platform-Hardware Abstraction Layer (P-HAL) of the UPC [REV 05]. The P-HAL project was then generalized under the project FlexNet [FLE 10], where it is open to the community and constantly evolving, particularly in the CR context, toward ALOE [GOM 10]. In the case of HDCRAM, level 1 contributes only to reconfigurations or decision making with a broad impact on the equipment's operation. The most emblematic example is performing the reconfiguration for a change of standard, since it generally affects a large number of operators. Two cases are clearly separated. The first is when decision making and reconfiguration are not time-critical, i.e. we can afford to take even one second to perform these tasks without the user noticing. This is the typical case that occurs when the device is turned on and starts searching for free frequency bands. This off-line context is completely within the scope of current technology, provided that management is well worked out, which is exactly the goal of HDCRAM. The second case is the critical one of a change of standard during communication, hence on-line, to perform a vertical (inter-standard) handover. The requirements in terms of management are then exacerbated and, in this case, it is even more essential to have a powerful manager, which is the main aim of proposing HDCRAM.

5.2.5. Deployment on a hardware platform

With a pedagogical aim, we present in this section an example of HDCRAM deployment on a hardware platform. The deployment can be considered from two angles. First, we can approach it from a purely functional perspective, which is the general case; this is the approach presented until now, and it will be adopted again after this section. Second, we can also consider HDCRAM's deployment from a hardware platform perspective, as illustrated here in a simplified example. Note that the most common case will probably be a mix of the two approaches. Let us consider the example of the platform shown in Figure 5.5, which consists of a GPP, an FPGA, two DSPs, an analog-to-digital converter (ADC), and a digital-to-analog converter (DAC). A quick implementation of HDCRAM consists of taking into account the general characteristics of the various categories of processing components. As will be explained in Chapter 11, a GPP is particularly suitable for the general manager of the equipment. We can also associate a level 2 manager with each of the processing components, i.e. the GPP, DSPs, and FPGA. This can be particularly interesting in cases where HDCRAM has to support few operators, which happens at the
beginning of a step-by-step integration of functions in smart radio equipment. A level 3 manager is also associated with each sensor and operator, as required earlier. Note that the level 2 manager in the FPGA could advantageously be implemented on a processor core, such as LEON, Nios, or MicroBlaze (μB). The level 3 managers of the ADC and DAC are made up of the FPGA logic gates that can parameterize these two components through a program, such as one executed on the chosen MicroBlaze core.
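The resulting mapping could be captured, for instance, in a simple deployment table; the component names follow Figure 5.5, but the structure and manager names below are invented for illustration.

```python
# Illustrative deployment table for the platform of Figure 5.5 (names invented).
deployment = {
    "L1": "GPP",                      # general manager hosted on the GPP
    "L2": {
        "GPP":  "L2_mgr_gpp",         # one L2 manager per processing component
        "DSP1": "L2_mgr_dsp1",
        "DSP2": "L2_mgr_dsp2",
        "FPGA": "L2_mgr_fpga",        # runs on a soft core (e.g. MicroBlaze)
    },
    # converters parameterized through FPGA logic, per the text above
    "L3_hosts": {"ADC": "FPGA", "DAC": "FPGA"},
}

def l2_manager_for(component):
    """Return the level 2 manager handling a given processing component."""
    return deployment["L2"].get(component)

print(l2_manager_for("FPGA"))
```

Such a table is only the functional-to-physical mapping; the managers themselves would be implemented in each component's native environment.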

Figure 5.5. An example of the schematic deployment of HDCRAM

5.2.6. Examples of intelligent decisions

Again, for clarity, let us take some time to explain what is meant by an “intelligent decision” inside a piece of equipment. It is possible to imagine completely new functionalities compared to those currently existing. However, already existing functionalities can also be considered as intelligent decision making, which will enable us to better understand the global approach presented. Generally, equipment will include
dozens of acts of decision making, but in the beginning these will be introduced step by step. That is why it is so important to propose a design approach that allows a progressive introduction of intelligent capabilities, as mentioned earlier. CR equipment must be able to adapt its operation according to changes in its environment. In the broadest sense, the environment concerns what happens outside the equipment (spectrum, impulse response, other radio users, policies in the area, etc.), inside the equipment (power consumption, battery level, processing units' load, memory usage, etc.), and with the user of the equipment (position, velocity, preferences, habits, the service being used, etc.). Decisions are made on the information received by the L3_CRMUs that manage the sensors of the equipment, as well as on information retrieved from the network by HDCRAM. Depending on the impact of the metrics, decisions are made at different levels on the CRM side (L3, L2, or L1), or metrics are transmitted to the network if they are useful for it, or even to other intelligent equipment (in the case of ad hoc networks, for example). The way this should happen is defined, in the current approach, at the design of the equipment. Note that a reconfiguration of the intelligent cycle itself is not considered here, although nothing is excluded in the future. One of the most studied topics in the CR domain is dynamic or opportunistic spectrum access [RIE 04]. In a simplified approach, a secondary terminal, before starting its communication, must verify that a frequency band is not used by a primary terminal (which, in general, pays for this band). Consequently, the secondary terminal must use its own CR abilities. It must, first of all, perform a spectral analysis to determine whether the desired band is used by a primary/secondary user.
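A toy version of this secondary-access decision might look as follows; the detection margin, policy database, and chosen parameter values are all invented for illustration.

```python
# Invented sketch of the secondary-access decision: sense, check policy, transmit.
def decide_transmission(band_energy_db, noise_floor_db, band, policy_allowed):
    # Energy-detector style check: the band is considered occupied when the
    # measured energy rises above the noise floor (3 dB margin, arbitrary).
    occupied = band_energy_db > noise_floor_db + 3.0
    if occupied or band not in policy_allowed:
        return None                                   # do not transmit
    # Radio parameters fixed at (made-up) values for the selected band
    return {"band": band,
            "carrier_hz": policy_allowed[band],
            "bitrate_kbps": 250}

policy = {"tv_white_space_21": 474e6}   # hypothetical location-based policy DB
print(decide_transmission(-99.0, -100.0, "tv_white_space_21", policy))
```

A real terminal would replace the crude energy test with one of the sensors discussed next, and the fixed parameter values with a proper decision algorithm (see Chapter 4).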
For this, several kinds of sensors exist, depending on the requirements: the standards recognition sensor [HAC 07] (if the band of interest is used by a predefined standard), the energy detector (radiometer), and the cyclostationarity detector (if the goal is to detect a white space in the spectrum [GHO 06], i.e. an unoccupied band). Then, based on the spectrum utilization policy at this place (a database updated according to location, determined by a GPS, for example), a transmission decision is made in the selected band. The radio parameters (carrier frequency, bit rate, etc.) are fixed at appropriate values to ensure the required transmission quality, as proposed in [COL 08] and [JOU 09]. Sensors, decision algorithms, and reconfiguration techniques are the new points to be integrated in CR equipment in comparison with conventional radios. In the category of already existing functionalities that will be integrated in the intelligent management architecture, let us take the example of the scheduling of execution tasks on the processing units. It is typically one of the responsibilities of a real-time operating system (RTOS) to carry this out while respecting task priorities. Dynamic scheduling adaptation and, in the longer term, the dynamic redeployment of operations on hardware processing units will also
be a part of the equipment's intelligent decisions. One way to realize this is to embed the partitioning and scheduling algorithms in the equipment, on distributed processing units. Such static scheduling approaches exist for optimized deployment on embedded targets such as DSP, FPGA, and GPP, taking into account execution and communication times [MOY 10]. But a lot of research remains to be done to improve them by taking into account criteria other than time, e.g. power consumption. Similarly, another effort is still needed to make these methods embeddable in the equipment, i.e. to decrease their execution complexity. In this context, it is up to the intelligence manager to manage the overall power consumption of the equipment. The degree of freedom that can be adjusted here is the quality of service (QoS) of the connection: radio parameters (modulation, channel coding, etc.) can be adapted in such a way as to slightly degrade the quality of the connection, with the objective of saving battery power and allowing a longer connection to the wireless network. The combination of all of these techniques requires a manager to integrate not only the methods already in use today, such as an RTOS, but also new methods still to be defined and studied by researchers. This opens new perspectives (most of which are still to be discovered), for example in terms of new services. Hence, the previous example could allow operators to make new commercial offers in which the phone battery lasts longer at the price of a loss of quality that is undetectable by the user. This would be included in the user profile, and the intelligent engine of the equipment would give priority to energy savings over all other criteria (communication cost, QoS, etc.) whenever it gets the opportunity.
This example implies that the manager has knowledge of the processing capabilities of each processing unit of its hardware architecture, such as power consumption, instantaneous activity ratio, memory size and occupancy, etc. This is also one of the roles of an intelligent manager, which can centralize all this information and update it through the metrics raised from the CRM side. A major axis of research in the CR domain for the future is learning. At the level of the equipment, numerous possibilities can be envisioned, from neural networks to Markov decision processes via artificial intelligence [JOU 10b]. CR equipment must permanently estimate its environment, even if only partial knowledge is obtained; this will be the case anyway, because the CR context envisions taking many parameters into account at once. These learning algorithms, discussed in Chapter 4, will be integrated in the decision-making part of the manager, as discussed in [JOU 09]. One of the challenges, in particular, is the complexity of such learning algorithms, as they will be executed in the terminal under severe embedded-system constraints. Their requirements in terms of computing power will also vary as a function of the refresh rate of the information needed to follow the variations of the environment and the application.
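The battery-versus-QoS trade-off described above can be sketched as a tiny decision rule (thresholds and parameter values are invented; a real manager would weigh many more criteria).

```python
# Invented sketch of the energy-saving policy taken from the user profile.
def radio_parameters(battery_level, user_prefers_battery):
    """Pick link parameters; degrade QoS slightly to save power when allowed.

    battery_level: remaining charge in [0, 1].
    user_prefers_battery: True if the user profile prioritizes energy savings.
    """
    if user_prefers_battery and battery_level < 0.2:
        # Lower-order modulation and stronger coding: cheaper in energy,
        # with a QoS loss assumed undetectable by the user.
        return {"modulation": "QPSK", "coding_rate": "1/2"}
    return {"modulation": "16QAM", "coding_rate": "3/4"}

print(radio_parameters(0.15, True))    # energy-saving configuration
print(radio_parameters(0.80, True))    # nominal configuration
```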


5.3. High-level design approach

5.3.1. Unified modeling language (UML) design approach

CR equipment becomes such a complex combination of software and hardware, signal processing, and management (hence, indeed, of control) that solutions for designing complex systems are well suited to CR. The trend in the design of complex systems is clearly oriented toward high-level design solutions. The goal is to tackle problems at a high level of abstraction to somewhat simplify them, and not to confront them in their entirety from the very beginning, while remaining independent of implementation-related details. These details may be considered in a second phase, after the architectural exploration space has been reduced. The goal is to model things of a very different nature. This is rather straightforward using UML [OMG 01] of the Object Management Group (OMG) [OMG 10], which has been specifically devised to model all sorts of problems, not only engineering and design-related ones. The advantage of UML is that it offers numerous graphical representation methods, which are easily accessible after limited learning of the associated semantics. These graphs can be tailored to the understanding of each domain or profession. As CR design is a multidisciplinary task, the use of UML is also an asset to ensure comprehension between diverse professions (software engineers, electronics engineers, and system engineers) and their supervisors, who have yet another level of comprehension than technicians. UML environments can also automatically generate documentation, which is of primary importance in an industrial context. Since this documentation is taken directly from the design process, it precisely reflects the contents of the device. It is no coincidence that the specifications of the software communication architecture [JTR 06] of the SR have been using UML since their origin at the end of the 1990s.
Other, smaller-scale projects have also experimented with UML for SR modeling [MOY 04] since the early 2000s. Furthermore, UML being an object-oriented design approach corresponds exactly to the SR or CR philosophy, which is based on the possibility of changing the behavior of a software processing block (typically an object) while it is running. The model-driven architecture (MDA) approach is used in [ROU 05], which implies a design in two phases: one in which only the software application (typically radio here, but it can be extended, as already mentioned) is modeled, called the platform-independent model (PIM), and another in which it is modeled as deployed on a hardware platform, called the platform-specific model (PSM). The reader is referred to Chapter 10 for further details. The use of UML has gradually become a reference in the preliminary design steps, rather than merely a way of generating specification documentation. UML is now used on a much larger scale for CR design than one might imagine. Research efforts are underway to
extend it to the generation of the implementation code on targets [DEK 05, KOU 08]. This is based on the use of metamodels and automatic transformations between different levels of metamodeling. In the next section, the metamodeling of HDCRAM for CR is presented.

5.3.2. Metamodeling

The goal here is not to apply the MDA approach as such, but the approach proposed around HDCRAM is fully compatible with a comprehensive MDA design flow, because the metamodel presented here is purely functional. Consequently, it does not imply any relation with a hardware implementation, which corresponds to the PIM level of MDA modeling. This is an arbitrary choice for the first design stage. We can say that a metamodel is a way to fix design rules, with the aim of adapting (or orienting) the UML semantics to a particular design domain. Figure 5.6 depicts the metamodel of HDCRAM. It is a factorized view of Figure 5.2 in which we find both managers, CRM on one side and ReM on the other.

Figure 5.6. Metamodel of HDCRAM [GOD 09]

The HDCRAM metamodel of Figure 5.6 can be read using the following rules. There is only one level 1 manager, made up of L1_ReM and L1_CRM. Several level 2 managers may depend on it, and each level 2 manager may be attached to numerous level 3 managers. This is only one aspect of the metamodel that we have chosen to represent here. Other ways of representing the metamodel would be possible, for example by insisting on the content of the various elements of the above figure. Details about the contents of each element of the metamodel and their methods of data exchange are given in section 5.4.
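These cardinality rules can be mirrored directly in code; the following sketch (class contents invented, names following the text) expresses the one-L1, many-L2, many-L3 composition.

```python
# Plain-Python rendering of the metamodel's cardinalities (contents invented).
class L3Manager:
    def __init__(self, name):
        self.name = name

class L2Manager:
    def __init__(self, name):
        self.name = name
        self.l3_units = []        # each L2 manager: several L3 managers

class L1Manager:
    def __init__(self):
        self.l2_units = []        # a single L1 manager: several L2 managers

l1 = L1Manager()
l2 = L2Manager("L2_channel")
l2.l3_units += [L3Manager("L3_chan_estimator"), L3Manager("L3_rake")]
l1.l2_units.append(l2)
print(len(l1.l2_units), len(l2.l3_units))   # 1 2
```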


5.3.3. An executable metamodel

The HDCRAM metamodel, being stated in a context of object-oriented modeling of the UML type, offers advantages in terms of readability and reusability. In order to maintain great rigor in modeling, HDCRAM is described at the meta-object facility (MOF) level [OMG 04], which is a modeling level above UML in the modeling pyramid. This restricts the concepts used to a minimum and conserves a strict formalism, allowing, in particular, bijective model transformations, as will be discussed later. But these meta-languages support only structural descriptions and do not have operational (execution) semantics. The Rhapsody UML modeling tool offers an integrated simulator but, to do so, it adds all the semantics necessary for execution to the model to be simulated, which would alter the HDCRAM design model. Another solution, the one used for HDCRAM, is to use an executable language to specify actions in the metamodel. For this purpose the Kermeta language [MUL 05] was selected. This language was conceived by the Triskell research team at INRIA and is under active development. Using Kermeta, we can describe HDCRAM simultaneously at the structural and behavioral levels, and even create a domain-specific language (DSL) for CR design. As soon as the metamodel incorporates this language, it is possible to execute the behavioral scenarios of the generated models and to check whether they comply with the specifications or the expected behavior. This is performed at a very high level of abstraction, i.e. very far from the contingencies related to the implementation on a real platform. It is merely a very anticipated study of the feasibility and behavior of the equipment. The Kermeta language is not associated with a specific execution platform, such as the C language for the processor world or the VHSIC hardware description language (VHDL) for the world of reconfigurable hardware.
The use of a target-independent syntax maintains independence from the future implementation and therefore preserves the properties of reusability and portability regardless of the final platform. The goal is to explore the design at a high level of abstraction, in order to obtain an understanding shared by all the actors of the design regardless of their specialty, and a comprehensible description of the architecture and its actions. This is an innovative approach that gives the possibility of modeling a complex system at the behavioral and structural levels very early in the design phase, thanks to simulations of the CR design.

5.3.4. Simulator of the cognitive radio architecture

The execution language, Kermeta, is not used to perform complex operations. The abstraction level of the metamodel allows us to study HDCRAM at the structural level, along with the information transfers between its entities, which is our intention here. Other chapters of this book deal with the content of the decision-making algorithms (see Chapter 4) or the implementation of the reconfiguration (see Chapters 10 and 11). The goal of the simulator is to detail the mechanisms of operation of HDCRAM
through examples. The objective is to clearly specify, thanks to the simulator, the required capabilities of HDCRAM in terms of reconfiguration and intelligence. Figure 5.7 shows a slightly more detailed view of the HDCRAM metamodel of Figure 5.6, adding a facet specific to the simulator; here the class HDCRAM_simulator is for simulation purposes only. This figure also shows new elements of the HDCRAM metamodel: StandardLibrary (corresponding to level L1), FunctionLibrary (corresponding to level L2), and OperatorLibrary (corresponding to level L3). These libraries contain the functionality of the CRM and ReM units deployed in an equipment, i.e. the behaviors that the different units must have at different stages according to the use cases. For example, a level 3 manager L3_CRM_snr, associated with a sensor related to SNR, will be able to interpret the metric coming from the sensor and to classify it into one of three categories: high, medium, or low. Another example could be a level 2 manager L2_CRM_los, which is able to infer from the impulse response of the channel whether the channel is of LOS or NLOS type, i.e. whether the equipment is in direct visibility of its correspondent or not. Finally, a last example would be an operator that identifies white spaces in the spectrum. The operator OPR_SpectrumWhiteSpace must provide its L3_CRM_SpectrumWhiteSpace manager with a metric that the level 3 manager can interpret. These examples show that, on the one hand, it is not necessary to execute the algorithms that perform these operations in the HDCRAM simulation, but just their results (the study of the algorithms constitutes another research work in itself, see Chapter 4), and, on the other hand, that there must be a library for each element of the HDCRAM architecture containing the multiple possible behaviors for this element.
The decision-making procedure can vary from extremely complex processing to the simplest decisions. We can give the following examples (a non-exhaustive list) in ascending order of complexity:
– simple threshold;
– state machine;
– neural networks;
– optimization heuristics (greedy algorithms [RAU 06b], genetic algorithms [RIE 04], upper confidence bound (UCB) algorithms for multi-armed bandits [JOU 09], etc.);
– artificial intelligence [COL 08], etc.
The higher the level at which the decision is made, the more metrics there are to take into account and combine, and consequently the more complex the algorithm can be. The decision algorithms are presented in Chapter 4.
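The two simplest procedures of this list, the threshold and the state machine, can be sketched for the SNR example above (thresholds and modulation choices are invented).

```python
# Invented sketch of the two simplest decision procedures from the list above.
def classify_snr(snr_db, low=5.0, high=15.0):
    """Simple-threshold decision: map the raw metric to high/medium/low."""
    if snr_db >= high:
        return "high"
    return "medium" if snr_db >= low else "low"

# Tiny state machine: only change modulation when the SNR class changes,
# to avoid oscillating reconfigurations.
TRANSITIONS = {"low": "BPSK", "medium": "QPSK", "high": "16QAM"}

def next_modulation(current, snr_db):
    target = TRANSITIONS[classify_snr(snr_db)]
    return target if target != current else current

print(classify_snr(18.0))             # high
print(next_modulation("QPSK", 18.0))  # 16QAM
```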


Figure 5.7. HDCRAM’s simulator methods [GOD 09]


The HDCRAM simulator initially performs a deployment of an architecture according to the scenario to be executed, i.e. the operators involved and the structure of the associated management architecture. Then the designer starts the execution of the deployed architecture and makes the metrics of the sensing operators vary, in order to verify that the equipment functions properly, i.e. that the architecture reacts appropriately to the evolution of the metrics. In other words, the simulation makes it possible to verify that all the managers are able to interpret the messages sent to them and, in their turn, are able to send the right information or the right commands to the right interlocutors through the CRM and ReM architectures. If all of HDCRAM's interfaces are correctly defined for each scenario that the CR equipment must support, then the designer has successfully identified all the exchanges and all the tasks that are necessary in the management architecture of this equipment. The metamodel and the HDCRAM simulator thus enable us to define the requirements of the equipment manager and to quickly prove that it functions, very early in the design flow, because this is accomplished at a high level of abstraction. The new part in the design of CR equipment compared to conventional equipment is the manager, hence the interest of adding the HDCRAM design approach to the CR equipment's design flow. It even becomes an element of primary importance.
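The simulation loop just described can be caricatured in a few lines; the scenario, metric values, and message names are invented, but the principle is the same: deploy, make the sensing metrics vary, and check the managers' reactions.

```python
# Invented caricature of the simulator flow: deploy, inject metrics, observe.
def deploy():
    """Scenario: one SNR sensor whose L3_CRM forwards its class upward."""
    log = []

    def l3_crm_snr(metric_db):
        category = "high" if metric_db >= 15.0 else "low"
        log.append(("L3->L2", category))                   # upgoing metric
        if category == "low":
            log.append(("L2->L3_ReM", "increase_coding"))  # downgoing command

    return l3_crm_snr, log

crm, log = deploy()
for snr in (20.0, 8.0):        # the designer makes the sensing metric vary
    crm(snr)
print(log)
```

The verification step then amounts to checking that the logged exchanges match the expected ones for the scenario, which is exactly what the simulator does at its level of abstraction.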

5.4. HDCRAM's interfaces (APIs)

This section details the constituents and interactions of the component classes of HDCRAM, including HDCRAM's application programming interfaces (APIs). The definition of these APIs allows any designer to build not only CR equipment but also any subpart of such equipment, compatible with another system designed using the HDCRAM model.

5.4.1. Organization of classes of HDCRAM's metamodel

Figure 5.2 is a clear view for researchers working on the definition of HDCRAM, but it does not allow us to understand the principles of HDCRAM at a glance, because the relationships between the elements must be explained to each new interlocutor. The HDCRAM metamodel of Figure 5.6 shows the same concepts, but in semantics standardized by the OMG (and therefore understandable by all, after swift learning of UML), which, for example, immediately enables us to see the cardinality associated with each element of the metamodel. Similarly, a designer starting with UML understands that the classes Li_ReM(U) (Li_CRM(U)) inherit from the parent class ReM (CRM). Note that we do not represent all the details of a model on a UML graph, in order to allow a selective
presentation for reasons of clarity. Thus, in Figure 5.7, the choice was made to present more details (in addition to the fact that the class HDCRAM_simulator is not in the metamodel but only in that of the simulator, which differentiates the two metamodels). Notably, the methods associated with the parent classes are represented. All this information defines the interfaces between the different constituent elements of HDCRAM. If a designer follows these rules, s/he will be able to insert any operator or sensor in a piece of equipment that has already been conceived with HDCRAM. Details of these interfaces are presented in this section.

5.4.1.1. Parent classes

Each class of HDCRAM instantiated during a deployment, except the operator class, inherits from either the ReM class or the CRM class, i.e. their attributes and methods. These are therefore considered as HDCRAM's parent classes and thus provide any element of HDCRAM with a generic interface that guarantees the reusability of components across different equipment. However, in order to adapt to each hierarchical level, the behavior of each child class can be customized. Similarly, at each level, in order to adapt to all possible use contexts, the behavior of each class may differ. As a concrete example, let us consider the case of a filter operator that performs FIR processing. Although it is different from, for instance, a convolutional coding operator, the metamodel specifies a common encapsulation for both operators.

5.4.1.2. Child classes

At the highest level of HDCRAM, the manager is the couple L1_ReM/L1_CRM. L1_CRM carries out the intelligent management of the equipment, which includes capturing information from the environment (which goes up to L1) and making decisions in order to send reconfiguration commands to the L1_ReM. The L1_CRM manager is also linked with the network and the user profile of the equipment. At L1, contingencies related to hardware implementation are abstracted, i.e.
the level 1 manager does not know which signal processing operation of the protocol stack, or which decision making, is executed on which computing unit. On the other hand, at the lowest level, the level 3 managers of the operators are closely associated with the way the operator is implemented. In addition, the L3_ReMU, as well as the L3_CRMU, is developed in the same physical environment, and thus almost always in the same programming language as the operator it manages. The intermediate management level, formed by the L2_ReMU and L2_CRMU units, makes a gateway between the abstract L1 and the L3. On the CRM side, for example, it translates upgoing metrics into a form tailored to level 1's understanding. As an example, for a temperature expressed in degrees Celsius at level 3, level 1 may only require information about its type, i.e. hot or cold. On the ReM side, a reconfiguration command from level 1 of the type "increase the throughput" may be interpreted as a change in the
coding rate toward the L3_ReM managing the convolutional coder and in the constellation order toward the L3_ReM managing the constellation operator.
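Both translation directions performed by level 2 can be sketched as follows (the mapping values are invented for illustration).

```python
# Invented sketch of the L2 gateway role: abstracting up, concretizing down.
def l2_metric_up(temperature_celsius):
    """Upgoing: raw L3 metric -> abstract category for L1."""
    return "hot" if temperature_celsius > 40.0 else "cold"

def l2_command_down(command):
    """Downgoing: abstract L1 command -> per-L3_ReM parameter changes."""
    if command == "increase_throughput":
        return {"convolutional_coder": {"coding_rate": "3/4"},
                "constellation": {"order": 64}}
    return {}

print(l2_metric_up(55.0))                      # hot
print(l2_command_down("increase_throughput"))
```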

5.4.2. ReM APIs

Table 5.2 shows the properties of the parent class ReM, from which all other Li_ReM(U) managers inherit. In other words, all management elements on the ReM side have at least these properties, and more are added at each level, as illustrated in Tables 5.3–5.5. These properties consist of attributes and methods that we call APIs, because they are what permits the elements of HDCRAM to communicate with each other. They are means of identification, interconnection, ways to behave, etc.

Attributes            Definition
----------            ----------
Identifier            ID of the management unit
Initialization        Init or running phase
State                 Management unit state (run, reconfigure, error, etc.)
Status                Objective assigned by the CRM manager or upper ReM manager

Operations (methods)  Definition
--------------------  ----------
Create                Create a lower ReM manager or operator, or invoke the associated CRM unit
Finalize              Delete a lower ReM manager or operator, or the associated CRM unit
Reconfigure           Reconfiguration order formatting for a lower ReM manager, or reconfiguration execution for an operator
Schedule              Reconfiguration scheduling in the tasks table
Send                  Send reconfiguration orders to lower ReM levels
Receive               Receive reconfiguration orders from upper ReM levels or the associated CRM unit

Table 5.2. API of the parent class ReM of HDCRAM
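The parent-class contract of Table 5.2 can be sketched as a class from which all Li_ReM(U) managers inherit. The rendering below is a minimal illustrative sketch in Python (the book's metamodel is expressed in Kermeta); the method bodies and the order string are our assumptions, only the attribute and method names follow the table.

```python
class ReM:
    """Illustrative parent class: every Li_ReM(U) manager inherits
    these properties (names follow Table 5.2)."""

    def __init__(self, identifier):
        self.identifier = identifier   # ID of the management unit
        self.initialization = True     # init (True) or running (False) phase
        self.state = "idle"            # run, reconfigure, error, ...
        self.status = None             # objective assigned by CRM or upper ReM
        self.children = []             # lower ReM managers, ordered collection

    def create(self, child):
        """Create a lower ReM manager or operator (or invoke the CRM unit)."""
        self.children.append(child)
        return child

    def finalize(self, child):
        """Delete a lower ReM manager or operator."""
        self.children.remove(child)

    def reconfigure(self, order):
        """Format a reconfiguration order for lower managers, or execute it
        on an operator (refined by each level, see Tables 5.3-5.5)."""
        self.send(order)

    def schedule(self, task, tasks_table):
        """Insert a reconfiguration task in the tasks table."""
        tasks_table.append(task)

    def send(self, order):
        """Propagate reconfiguration orders to lower ReM levels."""
        for child in self.children:
            child.receive(order)

    def receive(self, order):
        """Receive an order from the upper ReM level or the associated CRM."""
        self.status = order

# Usage: a two-level chain where an order sent by the upper manager
# becomes the objective (Status) of its dependent unit.
l1 = ReM("L1_ReM")
l2 = l1.create(ReM("L2_ReMU_coding"))
l1.send("increase_throughput")
```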

Radio Engineering

Attributes            Definition
----------            ----------
Associated_L1CRM      Associated L1_CRM unit
L1_rem2hdcram         Connection between L1_ReM and the HDCRAM_simulator
Standard_reference    Gathers all L2_ReMUs controlled by the L1_ReM

Table 5.3. API specific to the L1_ReM

Attributes            Definition
----------            ----------
Associated_L2CRMU     Associated L2_CRMU unit
Owner_L1rem           Connection with upper L1_ReM
Function_reference    Gathers all L3_ReMUs controlled by a L2_ReMU

Table 5.4. API specific to the L2_ReMU

Attributes            Definition
----------            ----------
Associated_L3CRMU     Associated L3_CRMU unit
Bug_detected          Error in the operator behavior
Owner_L2remu          Connection with upper L2_ReMU
Target                Hardware implementation target

Table 5.5. API specific to the L3_ReMU

A class derived from the parent class ReM is characterized by:
– an identifier;
– an associated CRM unit;
– an associated ReM unit (if the parent ReM(U) and child ReMU exist), which is indexed in an ordered collection;
– a state that can take the values: idle, run, bug, reconfigure, etc.;
– a status that is fixed by the parent ReM(U) or the associated CRM(U).

Cognitive Cycle Management

Depending on the nature of the child class, other attributes can be added. At each level (L1, L2, and L3), the methods derived from the parent class are refined according to the needs. For example, the "create" method translates as follows:
– at level L1, create creates L2_ReMU(s);
– at level L2, create creates L3_ReMU(s);
– at level L3, create creates an operator.

The ReM is responsible for reconfiguration management. This means that it manages, in particular, the scheduling of the reconfiguration and processing tasks, and hence the creation and destruction of:
– operators on the physical resources;
– level 3 managers (also called operator level);
– level 2 managers (also called function level);
– level 1 managers (also called standard level).

Level 3 managers (L3_ReMUs) can modify the behavior of their dependent operators. There are numerous ways to do this. For example, on a processor this can be carried out by changing function parameters or by making a function pointer jump to a particular location [KOU 02]; in an FPGA, by changing function parameters (adjusting the values of registers) or by partial reconfiguration, i.e. by changing the bitstream of the element [DEL 05a, DEL 07b]. The L2_ReMU managers (respectively, the L1_ReM) can modify their dependents, i.e. the L3_ReMUs (respectively, the L2_ReMUs). An L2_ReMU can reassign an operator from one processing unit to another (from a DSP to an FPGA, for example) according to the requirements (in terms of computing power, power consumption, etc.). It can also rearrange the data path between its dependent operators, i.e. its L3_ReMUs. The following tables list the properties added at the various levels of ReM managers:
– Table 5.3 for the L1_ReM's API;
– Table 5.4 for the L2_ReMU's API;
– Table 5.5 for the L3_ReMU's API.
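The level-by-level refinement of create can be illustrated by derived classes, each of which only instantiates the level just below it. This is a hypothetical Python sketch, not the book's Kermeta code; the operator name is invented, while the Standard_reference and Function_reference attribute names follow Tables 5.3 and 5.4.

```python
class Operator:
    """Stand-in for a signal processing operator."""
    def __init__(self, name):
        self.name = name

class L3_ReMU:
    """Operator-level manager: create instantiates the managed operator."""
    def create(self, name):
        self.operator = Operator(name)
        return self.operator

class L2_ReMU:
    """Function-level manager: create instantiates L3_ReMU(s)."""
    def __init__(self):
        self.function_reference = []   # gathers all controlled L3_ReMUs
    def create(self):
        unit = L3_ReMU()
        self.function_reference.append(unit)
        return unit

class L1_ReM:
    """Standard-level manager: create instantiates L2_ReMU(s)."""
    def __init__(self):
        self.standard_reference = []   # gathers all controlled L2_ReMUs
    def create(self):
        unit = L2_ReMU()
        self.standard_reference.append(unit)
        return unit

# Deployment: the hierarchy is built top-down, level by level.
l1 = L1_ReM()
l2 = l1.create()
l3 = l2.create()
op = l3.create("convolutional_coder")
```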

5.4.3. CRM's APIs

The role of the CRM is, in particular, to launch the execution of decision algorithms. As CRM units are distributed throughout the equipment, HDCRAM can support distributed intelligence. HDCRAM can manage everything from very fine grain reconfigurations (operator level) to a complete reconfiguration such as a change of standard. Table 5.6 details the interfaces, or API, of the parent class CRM, i.e. every element of the CRM side inherits these APIs. It is the responsibility of the reconfiguration manager to reconfigure the equipment, which is why the ReM units, and not the CRM units, have creation (Create) and deletion (Finalize) actions in their APIs.

Attributes            Definition
----------            ----------
Identifier            ID of the management unit
Initialization        Init or running phase
State                 Management unit state (run, reconfigure, error, etc.)
Status                Result of the metric(s) interpretation

Operations (methods)  Definition
--------------------  ----------
Capture_metric        Capture one or several metrics
Analyze_metric        Analyze received metric(s)
Define_action         Decide to launch a reconfiguration or not
Schedule              Reconfiguration scheduling in the tasks table
Send                  Send reconfiguration orders to the associated ReM, or send metric(s) to the upper CRM manager
Receive               Receive one or several metrics from lower CRM unit(s) or from the operator it manages

Table 5.6. API of the parent class CRM of HDCRAM
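The CRM-side methods of Table 5.6 reproduce, at each level, the capture, analyze, and decide steps of the cognitive cycle. A hedged sketch follows (illustrative Python; the hot/cold threshold rule is an invented example in the spirit of the Celsius temperature example given earlier, and the StubReM class and order string are our assumptions):

```python
class CRM:
    """Illustrative parent class of the Li_CRM(U) units (Table 5.6)."""

    def __init__(self, identifier, associated_rem=None, upper_crm=None):
        self.identifier = identifier
        self.state = "run"
        self.status = None             # result of the metric interpretation
        self.associated_rem = associated_rem
        self.upper_crm = upper_crm
        self.metrics = []

    def capture_metric(self, metric):
        """Receive one metric from a lower CRM unit or a managed operator."""
        self.metrics.append(metric)

    def analyze_metric(self):
        """Interpret the raw metric into a form the upper level understands.
        Invented rule: a Celsius reading becomes the type 'hot' or 'cold'."""
        self.status = "hot" if self.metrics[-1] > 30.0 else "cold"
        return self.status

    def define_action(self):
        """Decide to launch a reconfiguration, or pass the metric upward."""
        if self.status == "hot" and self.associated_rem is not None:
            self.associated_rem.receive("reduce_activity")  # order to ReM
        elif self.upper_crm is not None:
            self.upper_crm.capture_metric(self.status)      # metric upward

class StubReM:
    """Minimal ReM stand-in that records the last received order."""
    def receive(self, order):
        self.last_order = order

# One pass through the cycle at a single level of the hierarchy.
rem = StubReM()
unit = CRM("L3_CRMU_temp", associated_rem=rem)
unit.capture_metric(35.0)   # degrees Celsius from the sensor
unit.analyze_metric()
unit.define_action()
```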

Any class derived from the parent class CRM is characterized by:
– an identifier;
– an associated ReM unit;
– an associated CRM unit (if the CRM(U) that manages it and the CRMU that it manages exist), which is referenced in an ordered collection;
– a state that can take the values: idle, run, bug, reconfigure, etc.;
– a status that represents the result of the interpretation of a metric.

According to the specificity of the child class, other attributes can be added to the parent class. The CRM part is responsible for the sensors and for decision making, in order to adapt the equipment to the environment. When a CRM unit makes a decision, it sends the reconfiguration parameters to its associated ReM unit. After validation of the new behavior, it sends information about the new configuration to its associated CRM unit at the upper level of the hierarchy. If the reconfiguration process fails, a CRM unit reloads the old version of the reconfigured elements, analyzes the causes of the failure, and learns from it (this deserves research work in itself and is not the subject of this chapter). A CRM unit is constructed or destructed depending on the commands given by its associated ReM unit. The following tables detail the distribution of the CRM APIs according to management level:
– Table 5.7 for the API specific to the L1_CRM;
– Table 5.8 for the API specific to the L2_CRMU;
– Table 5.9 for the API specific to the L3_CRMU.

Attributes            Definition
----------            ----------
Associated_L1ReM      Associated L1_ReM unit
L1_crm2hdcram         Connection between L1_CRM and HDCRAM_simulator
Function_name         Gathers all L2_CRMUs controlled by the L1_CRM
L1_crm2library        Connection between L1_CRM and its library
L2_crmu_metric        Sets the objective associated with L2_ReMU

Table 5.7. API specific to the L1_CRM
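The decision flow just described (reconfigure via the associated ReM, validate the new behavior, report upward, reload the old version on failure) can be sketched as follows. This is an illustrative Python sketch: the stub classes, configuration names, and the use of an exception for a failed validation are our assumptions, and the failure analysis and learning steps are omitted.

```python
class StubReMU:
    """Minimal ReM stand-in: executes reconfigurations and validates them."""
    def __init__(self, fail=False):
        self.fail = fail
        self.config = "old_waveform"
    def reconfigure(self, config):
        self.config = config
    def validate(self):
        if self.fail:
            raise RuntimeError("new behavior not validated")

class StubUpperCRM:
    """Minimal upper-level CRM stand-in: records reported configurations."""
    def __init__(self):
        self.reported = None
    def capture_metric(self, info):
        self.reported = info

def apply_decision(rem, upper_crm, new_config, old_config):
    """Send the reconfiguration to the associated ReM; on success report
    the new configuration upward, on failure reload the old version."""
    try:
        rem.reconfigure(new_config)
        rem.validate()
    except RuntimeError:
        rem.reconfigure(old_config)       # reload the previous version
        return old_config
    upper_crm.capture_metric(new_config)  # inform the upper CRM unit
    return new_config
```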

Attributes            Definition
----------            ----------
Associated_L2ReMU     Associated L2_ReMU unit
L2_crmu2library       Connection between L2_CRMU and its library
L3_crmu_id            Enables us to identify the set of L3_CRMUs under its control
L3_crmu_status        Gathers the metrics of L3_CRMUs controlled by the L2_CRMU
Owner_L1crm           Connection with the L1_CRM

Table 5.8. API specific to the L2_CRMU


Attributes            Definition
----------            ----------
Associated_L3ReMU     Associated L3_ReMU unit
L3_crmu2library       Connection between L3_CRMU and its library
Owner_L2crmu          Connection with upper L2_CRMU
Owned_operator        Connection with the operator under control

Table 5.9. API specific to the L3_CRMU

All these attributes and methods allow us to reproduce the cognitive cycle of Figure 1.2 at each level of the hierarchy, i.e. receive and analyze metrics, and make a reconfiguration decision to adapt to the changing environment. The cognitive cycle is, in fact, distributed across all the CRM units of the hierarchy. This is how the specific problems of CR, in which many metrics must be taken into account, should be addressed. The distribution of the operations is a solution for supporting the strong real-time constraints imposed by CR. In addition, distributed intelligence means that each entity has a certain autonomy to solve problems, which directly improves the response time of the overall equipment. The principal drawback of distributed intelligence is that knowledge of the complete state of the system is dispersed. This is why the level 1 manager is necessary to maintain the overall consistency of the behavior of the CR equipment. However, it only requires a synthetic knowledge, and not all the information. This reduces the exchange of control information into and out of the equipment, thus saving wireless network capacity.

5.4.4. Operators' APIs

Processing operators and sensors must have the API shown in Table 5.10, so that they can be integrated into the proposed HDCRAM architecture. As the components are described by their interfaces and functionalities, designers are free to generate classes that inherit from the parent class operator. It is noteworthy that, to be consistent with what has already been mentioned, the attributes remain purely at a functional level. Indeed, at this phase, the metamodel is at such a level of abstraction that it does not refer to any implementation on physical components (FPGA, DSP, etc.), programming languages, power consumption, time of execution, etc. The Target attribute of the L3_ReMU in Table 5.5 only refers to a logical address and not to an actual hardware characteristic.

Attributes            Definition
----------            ----------
Identifier            ID of the operator
State                 Operator unit state (run, reconfigure, error, etc.)
Status                Metric(s) for the associated L3_CRMU

Operations (methods)  Definition
--------------------  ----------
Initialize            Reset the operator (hardware or software)
Reconfigure           Reconfiguration of the operator
Run                   Run the operator
Stop                  Stop the operator

Table 5.10. API of the parent class Operator of HDCRAM
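The operator-side contract of Table 5.10 can be sketched as a parent class from which concrete operators and sensors derive. This is an illustrative Python sketch: the bandwidth sensor subclass and its metric value are invented for the example, only the attribute and method names follow the table.

```python
class Operator:
    """Illustrative parent class for processing operators and sensors."""

    def __init__(self, identifier):
        self.identifier = identifier
        self.state = "idle"    # run, reconfigure, error, ...
        self.status = None     # metric(s) reported to the associated L3_CRMU

    def initialize(self):
        """Reset the operator (hardware or software)."""
        self.state = "idle"
        self.status = None

    def reconfigure(self, parameters):
        """Apply new parameters under the control of the L3_ReMU."""
        self.state = "reconfigure"
        self.parameters = parameters
        self.state = "idle"

    def run(self):
        self.state = "run"

    def stop(self):
        self.state = "idle"

class BandwidthSensor(Operator):
    """Invented example: running the sensor publishes a metric in Status."""
    def run(self):
        super().run()
        self.status = 5.0e6   # measured bandwidth in Hz (illustrative value)

# Usage: the sensor is reset, then run; its metric appears in `status`,
# where the associated L3_CRMU would capture it.
sensor = BandwidthSensor("BWR")
sensor.initialize()
sensor.run()
```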

5.4.5. Example of deployment scenario in CR equipment

During a simulation, the attributes can be observed at each step of the equipment's operation in the Eclipse environment supporting the implementation of the metamodel, thanks to the executable Kermeta1 language. It is therefore possible to follow, step by step, the consistency of the design with the objectives and the consistency of the attributes' values, such as the lists of operators, the whole hierarchy of managers and the operations associated with them, and the consistency of the exchanges between the management units involved. We also note that this served the inventors of HDCRAM [GOD 08a] to verify the consistency of the metamodel. Indeed, unlike the usual case, the metamodel here is not just made up of graphic representations, for which it is possible to verify neither the proper operation nor the 100% completeness, but of code that can be executed, and whose operation can consequently be checked.

The example chosen here is that of the design of a CR equipment capable of recognizing a telecommunications standard among all the commercial standards of existing wireless communications. This equipment therefore uses a blind standard recognition sensor, whose principles of operation can be found in section 3.3.5. The goal is to find the standards available around a terminal so that it can choose to establish a communication without connecting to the various standards during the recognition phase. The choice can then be made based on numerous criteria, such as:
– communication cost;
– power consumption necessary to perform the communication;
– minimization of electromagnetic radiation, to limit the impact on people and the environment and to avoid pollution of the spectrum;
– or any other reason.

This is one of the expected advantages of CR compared to current systems. The blind standard recognition sensor is depicted in Figure 5.8. It entails five operators and the necessary HDCRAM management structure, composed of an L2_ReMU/L2_CRMU pair and, for each operator, an L3_ReMU/L3_CRMU pair. The five operators are grouped into two modules: a searching module and an analyzing module. These five operators are the following (for further details, see [GOD 08b]):
– Bandwidth_analysis (BWA);
– Bandwidth_recognition (BWR);
– Cyclostationarity (Cyclo);
– Wigner_ville_transform (WVT);
– Hole_spectrum (Hspec).

1. Kermeta is a language of the Triskell project at INRIA.

Figure 5.8. The blind standard recognition sensor described in Chapter 3

Figure 5.9 illustrates the values of the attributes in the context of a standard recognition scenario, whence the name of the level 1 manager, scenario_spectrum_access (and the use of the functions in the library scenario_spectrum_access). It may be noted that, in this phase of execution, the equipment is not in the initialization mode, as shown in Figure 5.9. It is associated with the level 1 reconfiguration manager L1_ReM_scenario_spectrum_access. At level 2, func_fusion performs the fusion of numerous metrics (from different sensors). One of these metrics is related to frequency hopping discrimination, in order to identify the Bluetooth standard. It is this metric that is active in the capture of Figure 5.9.

Figure 5.9. L1_CRM level attributes during simulation

Figure 5.10 illustrates how the simulator can verify the APIs of the set of HDCRAM elements involved in the scenario. Any equipment whose elements respect the HDCRAM structure and APIs is compatible with other equipment conceived in the same way. It is therefore sufficient to add scenarios, and to integrate in the same equipment all that was identified in the various other scenarios, in order to create ever more capable CR equipment. Table 5.11 draws up the list of the elements related to this scenario and describes, in a few words, the exchanges through the APIs. Figure 5.10(b) details the attributes of the management unit L2_CRMU func_fusion. Figure 5.10(c) shows the attributes of the L3_CRMU unit managing the BWR sensor at one of the simulation steps. For simplicity, the scenario simulated here takes into account only the sensor part of the problem; of course, the same work can be carried out for the reconfiguration part of the equipment, in order to instantiate the processing elements corresponding to the chosen standard. Note that, in this scenario, real time is not critical, since recognition happens when the equipment is switched on: a second or two can be devoted to it, which leaves plenty of time to make environmental measurements and to perform reconfigurations. To conclude this brief example, Table 5.11 summarizes the APIs necessary for the sensors and operators involved in order to satisfy the behavior of the CR equipment. This is exactly what is expected from a high-level design approach for CR equipment.


Figure 5.10. Following the properties during simulation

Unit: 4 x L3_CRMU (Bandwidth_recognition, Cyclostationarity, Wigner_ville_transform, Hole_spectrum)
– Capture_metric: capture the metric of the corresponding sensor.
– Analyze_metric: just a test on the validity of the metric value.
– Define_action: if there is an error on the metric value, redo; else, transmit the metric to L2_CRMU_func_fusion.

Unit: L2_CRMU_func_fusion
– Capture_metric: capture the metrics coming from all 4 sensors.
– Analyze_metric: test and interpretation of the metrics.
– Define_action: if there is an error on one metric value, redo; else, transmit the fusion result to L1_CRM.

Unit: L1_CRM
– Capture_metric: capture the metric coming from L2_CRMU_func_fusion.
– Analyze_metric: interpret the metric and deduce, from predefined rules, which standard to instantiate.
– Define_action: make the decision of whether to use the standard or to wait for a new metric (if the user preferences expect something else, for instance).
– Send_order2L1_ReM: send the reconfiguration order to the associated L1_ReM for the new standard instantiation.

Table 5.11. API elements of HDCRAM's management architecture for the considered scenario
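The exchanges listed in Table 5.11 can be wired together in a small end-to-end sketch. This is illustrative Python only: the sensor readings, the fusion rule, and the frequency-hopping test for Bluetooth are invented stand-ins for the real algorithms of the blind standard recognition sensor.

```python
class L3_CRMU:
    """Sensor-level unit: validates its metric and passes it to the fusion."""
    def __init__(self, name, fusion):
        self.name, self.fusion, self.metric = name, fusion, None
    def capture_metric(self, value):
        self.metric = value
    def define_action(self):
        if self.metric is None:            # error on the metric value: redo
            return "redo"
        self.fusion.capture_metric(self.name, self.metric)

class L2_CRMU_func_fusion:
    """Fusion unit: waits for all 4 sensors, then reports to the L1_CRM."""
    def __init__(self, l1_crm):
        self.l1_crm, self.metrics = l1_crm, {}
    def capture_metric(self, name, value):
        self.metrics[name] = value
    def define_action(self):
        if len(self.metrics) == 4:         # all sensors have reported
            self.l1_crm.capture_metric(self.metrics)

class L1_CRM:
    """Top-level unit: applies a predefined rule to choose a standard."""
    def __init__(self):
        self.decision = None
    def capture_metric(self, fused):
        # Invented rule: frequency hopping discriminates Bluetooth
        hopping = fused.get("Cyclo") == "freq_hopping"
        self.decision = "Bluetooth" if hopping else "wait"

# Wire the hierarchy and feed invented sensor readings through it.
l1 = L1_CRM()
fusion = L2_CRMU_func_fusion(l1)
sensors = [L3_CRMU(n, fusion) for n in ("BWR", "Cyclo", "WVT", "Hspec")]
readings = {"BWR": 1.0e6, "Cyclo": "freq_hopping", "WVT": 0.3, "Hspec": 0.7}
for s in sensors:
    s.capture_metric(readings[s.name])
    s.define_action()
fusion.define_action()
```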

5.5. Conclusion

This chapter has pinpointed the challenges of implementing the cognitive cycle in CR equipment and has suggested a solution (HDCRAM) to be taken into account in the design of such equipment. This involves adding management capabilities associated with the various elements of the cycle, i.e. means for capturing information about the environment, means of decision making, and finally means of reconfiguration. As this is a complex problem, we believe that a high-level design approach is required early in the design, as explained in Chapter 10. In order to propose a universal and industrial approach, the use of modeling techniques based on a metamodel and an approach of the MDA type are preferred. This foreshadows what the future design tools for CR equipment will integrate. These proposals are central to ongoing research work and will thus be extended in the coming years. We can even affirm that the domain covered in this chapter will be in constant evolution in the years to come, but certain broad lines exposed here have already emerged.

PART 2

Software Radio as Support Technology

Chapter 6

Introduction to Software Radio

6.1. Introduction

Although a certain number of authors affirm that cognitive radio (CR) is totally independent of software radio (SR), an affirmation also found in the Federal Communications Commission (FCC) rulemaking [FCC 05], it seems to us that SR, in whatever form (ideal or software defined), is the technology that will provide the flexibility necessary for CR. That is why we describe, in this part, SR as a support technology. It is clear that the closer the technology comes to the ideal, the more effectively CR will be supported. However, even with software-defined radio (SDR), it will be possible to offer a CR with limited features.

The surge in sales of mobile phones and the increase in the volume of data transmission (particularly on the Internet) show that not only users but also operators have growing needs for anywhere, anytime communications. Significant geographic mobility imposes strict constraints on the terminals (see Figure 6.1). The terminals, depending on their location, must be able to connect to networks using different standards. This seems easier for a fixed terminal (because time can be taken to reconfigure it); with mobility, however, a network change must be realized while avoiding any communication break for the user (handover). SR is directly related to the evolution of wireless mobile telecommunications. Indeed, the idea of a terminal that can transmit any type of information (voice or data), anywhere and on any network, has steadily progressed, and SR is a key technology in this evolution. It seems clear that, for many years to come, the ideal SR will not really be operational and available to offer future services on next-generation networks; hence, it will be introduced in a step-by-step fashion.

Chapter written by Jacques PALICOT and Christophe MOY.

Figure 6.1. Worldwide mobility

In addition to mobility, a keyword for understanding the impact of SR in the future is convergence, in terms of networks as well as services. This notion is not new; in the past we talked about cooperation or integration. This problem has long been at the heart of studies, but for the first time technological advancements have reached a stage where this notion of convergence has become realizable. Another keyword, closely linked to the previous one, is the notion of the reconfigurable network. Again, this concept is very broad; it can range from the notion of intelligence distribution in the network to the notion of ad hoc networks. All these considerations tend to prove that, in networks, the portion of dedicated hardware (application-specific integrated circuits, ASICs) will decrease, giving more and more room to software (on programmable or reconfigurable hardware). This leads us to establish the link with SR.

To allow the geographical decompartmentalization related to mobility, the simplest solution is to find a common band worldwide that would accept several types of data transmission. The IMT 2000 standard was proposed to meet this criterion. Conflicting interests, and hence the freedom and sale of frequencies in some countries, have instead led to a proliferation of systems (universal mobile telecommunication system – UMTS, code division multiple access – CDMA2000, etc.) or, at best, the coexistence of standards (where the constraints of these standards are reduced to the minimum, e.g. power and bandwidth). The need for multifunction terminals is also important, which explains the strong industrial mobilization to offer new functions on terminals: radio, television, messaging, Internet services, etc. The notion of the universal terminal (UT) is a commonly proposed solution to this challenge (multistandard, multimode, multifunction, etc.), as shown in Figure 6.2. This could be accomplished by stacking the necessary radio interfaces independently, but it is clear that this solution is not viable in the long term. The solution for realizing this UT is to use a reprogrammable radio interface. As a result, the terminal must be able to download the right software.

Figure 6.2. The universal terminal

6.2. Generalities

SR is not really a new technology, but rather a logical evolution and convergence of digital radio and preexisting software technologies. The concept of SR comes from the military world, where the need for reprogrammable radio parts was felt very early. This resulted in a first realization called SPEAKeasy in the late 1980s. Figure 6.3 illustrates this evolution in the military world [FET 09]. It should be noted here that our work is oriented toward civilian telecommunications applications of SR, and we will not describe in detail the numerous military efforts on this subject; for details, see [LAC 95]. In 1992, this idea for telecommunications applications became public at the initiative of Joseph Mitola. We must recognize the enormous impact of Mitola's work in popularizing this concept. This resulted in the publication of the special issue of IEEE Communications Magazine of May 1995 [MIT 95], which is the most referenced publication in the SR domain. Many forums and workshops were then created in the late 1990s; let us quote the Modular Multifunction Information Transfer System (MMITS) forum, which became the SDR Forum in 1999. The first European workshop was held in Brussels in 1997, followed by many others. From 1999, all these efforts gained great momentum. Following the significance granted to SR by telecommunications actors, the Global System for Mobile Communication (GSM) standardization committee decided to make it an inevitable evolution for future mobiles. A certain number of conferences and journal issues were then devoted to this subject. Let us quote, for the year 1999:
– the Software Radio Workshop in Karlsruhe, March 1999;
– the second IBC Mobile Software Forum in London, September 1999;
– IEEE Communications Magazine, February and August 1999;
– IEEE Personal Communications Magazine, August 1999;
– IEEE Selected Areas in Communications, April 1999.

Figure 6.3. Evolution of SR design (taken from Chapter 1 of [FET 09])

For a more complete list, see Appendix A. Of course, numerous internal projects exist among operators as well as in industry. Let us quote some important efforts started in 2001:
– the collaborative project H2U between Ericsson and Telenor, which studies convergence problems between Hiperlan2 and UMTS with a significant SR part;
– the creation of the Wireless World Research Forum [WWR 10] on the initiative of the large European industrialists;
– the creation of an SR project under the influence of free software: GNU Radio [GNU 10].

Enumerating all the initiatives that followed would be tedious, because they are numerous. Appendix B lists a large number of French and European projects related to SR.

6.2.1. Definitions

There is no consensus on the exact meaning of the term SR. In addition, many classifications exist, which have had more or less success in their publication. Here, we propose to compare several denominations resulting from diverse classifications. To better understand the literature (it is not easy, but it is essential), the principal distinction to be made is between SR and SDR. In this section, when we talk about processing, we mean that all protocol layers contained in the equipment are concerned. This is, however, difficult to extend to the infrastructure side, because of the geographical scattering involved. We focus on physical-layer processing, because it is considered that higher-layer processing is already performed on processors in the state of the art.

6.2.1.1. Ideal software radio

This means the optimal SR, the ideal one, which will be the ultimate goal. Its two main features are:
– broadband digitization as close as possible to the antenna;
– a fully programmable processing architecture (in the sense of a general-purpose processor).

An ideal SR should be able to demodulate at reception (and modulate at transmission) all communication standards, using software portable from one platform to another. This gives the first theoretical architecture, shown in Figure 6.4 and explained in section 6.4.1. Compared with a conventional radio design, where all functions of the radio frequency (RF) front end (channel selection, interference suppression, amplification, and baseband transposition) are realized by analog processing (performed by dedicated hardware), the SR design samples the RF broadband signal directly after filtering and low-noise amplification.
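Once the broadband signal is digitized, the remaining front-end operations become software. A minimal digital down-conversion sketch follows (illustrative Python/NumPy: the sampling rate, carrier, filter, and decimation factor are invented values, and a real design would use a properly designed channel filter rather than a moving average):

```python
import numpy as np

fs = 100e6                       # wideband ADC sampling rate (illustrative)
fc = 12.5e6                      # carrier of the channel to extract
t = np.arange(4096) / fs
rf = np.cos(2 * np.pi * fc * t)  # stand-in for the digitized RF band

# Frequency transposition: mix the selected channel down to baseband
baseband = rf * np.exp(-2j * np.pi * fc * t)

# Channel selection: crude low-pass filtering, then decimation
kernel = np.ones(64) / 64
channel = np.convolve(baseband, kernel, mode="same")[::32]

# `channel` would then feed a software demodulator
```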
Subsequently, the digital processing module carries out the operations of frequency transposition, channel selection, and demodulation, with the processing performed in software. Today, and even in the near future, the concept of ideal SR is absolutely unrealizable for all radio applications. Hence, there is an obvious need to make a compromise between the ideal SR and the available technology, by establishing a fluctuating and evolving border according to technological advancements. In the next section, we describe SDR.

6.2.1.2. Software-defined radio

The SDR corresponds to the digitization of a restricted band of frequencies, often at an intermediate frequency (IF) or even in baseband. Its digital-side architecture is reprogrammable and/or reconfigurable, and it may even contain digitally programmable ASICs. Its analog-side architecture is conventional and may contain several transmission/reception chains, depending on the number of standards to be supported. If possible, certain parts (amplifiers or antennas) can be made common to multiple waveforms. The SDR is thus the SR adapted to currently existing technologies; in other words, it is a realistic SR for one or more given radio applications.

6.2.1.3. Other interesting classifications

The SDR Forum proposes a more detailed classification, in ascending order from the simplest to the ideal:
– hardware (HW) radio;
– software-controlled radio;
– SDR;
– ideal SDR;
– ultimate SR.

This classification helps us understand the evolution of radio architectures, from existing to futuristic architectures. However, it does not carry any added value and is rarely used.

6.2.2. Interests and aftermath for telecom players

SR has emerged as a need, primarily because it is a natural evolution1 in mobile systems design [MOY 08a] and, secondly, because it has opened up new perspectives for markets and spectrum utilization, well beyond a simple technological evolution, as already mentioned. Let us list briefly the advantages that an SR approach can provide to the various actors of the mobile telecommunications market. Of course, this list is non-exhaustive, and readers can complete it according to their own interests.

6.2.2.1. Designer of terminals and access points

SR comes from designers' ways of conceiving a radio, especially from multistandard radio designers. We actually find this etymology in the military communications world, because it is the sole motivation of all the precursory U.S. military projects of the 1970s and 1980s, which clearly brought the SR domain into being. Another origin is the research into more effective simulation techniques for signal processing in the mid-1990s. In fact, there are numerous advantages for the designer:
– facility of development, due to software tools for high-level programming;
– flexibility in development and validation, because it is easier to replace and readjust software than hardware;
– reduced development time, because the system is based on digital processing components from third parties with their associated tools, thereby avoiding the lengthy development time of an ASIC;
– a common platform for multiple products or multiple generations of products, which again reduces the time to market;
– the possibility of correcting errors until the last moment;
– especially, the possibility of correcting errors even after sale, by reprogramming or by downloading a patch, thus avoiding a return to the factory.

The maintenance of access points and infrastructure is of great importance, and the exchange of terminals due to malfunctioning is very expensive; contrary to what we might think, this is not so rare, and all manufacturers are confronted by it. We might expect real enthusiasm among designers. But SR calls into question so many design techniques that we are confronted with apprehension, or even refusal, in development teams, in total contradiction to the advantages that could be gained.

6.2.2.2. Operator and service provider

The operator faces the same constraints as the manufacturer regarding the maintenance of its network infrastructure. A particular aspect concerns the exploitation of the access points. A terminal as well as a base station may resort to SR to benefit from a multistandard connection; however, a terminal rather switches from one connection to another, whereas a base station will try to rebalance its load, in terms of number of connections, between the various standards. SR will also allow the operator to accelerate the deployment of new standards. The service provider expects the operator to provide it with the best possible network at the best price. The adaptation capabilities offered by SR technologies can bring many advantages to this effect, and thus increase the opportunities to satisfy and attract customers.

1. In the sense of the increasing digitization of electronics systems.
It should be noted that, in an extension toward CR, network capacity will be strongly increased and, with new radio services, this will automatically increase the offers and revenues related to mobile telephony. As terminals become increasingly capable of supporting new means of connection due to SR, suppliers can also diversify their offer (mobile telephony, high-throughput networks, positioning). There is no doubt that today we have only a very vague idea of what will actually be provided as services in 10 or 15 years. To sum up, the last point that we note is the wider opening to competition and the opportunities for alliances with foreign operators that use other standards.


6.2.2.3. End user

For identical services, the customer does not expect any difference in the offers other than the price. This is the era of personalization, which will only increase in the future: even ready-made clothing is now offered in limited editions, so that a person never wears the same outfit as a colleague or the neighbor next door. Moreover, nowadays, the user wants to connect to all cellular networks. In addition to the desire to communicate over GSM or UMTS (cellular networks), the user also wants the highest throughput when possible (wireless fidelity – WiFi), to receive audio (frequency modulation – FM, digital audio broadcasting – DAB) and video (digital video broadcasting-handheld – DVB-H) streams, to obtain services associated with positioning (global positioning system – GPS, Galileo), to connect a wireless headset to the phone (Bluetooth), and to let the terminal communicate with other devices around the house or car. We can imagine that, in a few years, new means of connection such as UWB will be very promising from the CR perspective [MOY 05]. Especially with CR, the telephone will no longer act as a mobile office, but as a true private secretary that knows the tastes and habits of its user and modifies services and/or applications according to the user's wishes. This will be possible thanks to the degrees of freedom offered by SR in terms of means of communication. In addition, better knowledge of the environment (due to CR sensors, see Chapter 3) and better adaptation (due to SR) can be used to limit the transmission level, and hence to restrict pollution as well as the electromagnetic impact on living beings and their health. The increase in the lifespan of equipment, mainly due to SR technology, should avoid the duplication of terminals and their incessant renewal, contributing to a sustainable approach to equipment utilization and recycling.

6.3. Major organizations of software radio

Today, even 15 years after the beginnings of SR, numerous efforts are in progress all around the world, and it is practically impossible to give a complete state of the research. However, we briefly list the institutional actors of SR, classified into different categories, from institutional bodies, such as regulation and standardization, to collaborative projects, through the various organizations studying and promoting SR.

6.3.1. Forums

These forums play a major role in accelerating the dissemination of SR and its adoption by the international community.

Introduction to Software Radio


6.3.1.1. SDR Forum/Wireless Innovation Forum

The SDR Forum2 plays a major role in dissemination within the international community. Joseph Mitola has been involved in it since its inception (when it was still named the MMITS Forum), and it is largely thanks to the SDR Forum that the term SR imposed itself. It was the members of the SDR Forum, Mitola in particular, who were behind the special issues of the IEEE Communications Magazine of 1995 and 1999, and who lobbied the standardization (3rd Generation Partnership Project – 3GPP, IEEE) and regulation (FCC) bodies. At the international level, the SDR Forum is the focal point for SDR business and research activities. Let us recall that Mitola at that time was employed by a consulting firm serving the U.S. government: MITRE Corp. The charter of the SDR Forum calls for research supporting the proliferation of SR. As a complement to this fruitful action, the SDR Forum is also the venue designated by the Department of Defense (DoD) where industrial suppliers of the U.S. Army converge on an important military program: the joint tactical radio system (JTRS). This program aims to push manufacturers of communication systems toward standard (low-priced) platforms and toward making all of them compatible with each other via a unified software architecture: the software communication architecture (SCA) [HER 05]. We can see not only the DoD's interest in terms of economy, but also the possibility of opening each subpart of the system to competition while ensuring interoperability via the interfaces standardized in the SCA. This gave birth to a very complex and very inefficient implementation proposal in the late 1990s. Everyone is still trying to reconcile this SCA proposal with the realities of SR design. However, until further notice, all equipment suppliers of the U.S. Army must meet the FCC regulations.
An alliance of European manufacturers is taking shape to claim its market share, convinced that it can differentiate itself through greater technological efficiency: witness European projects such as European Security Software Radio (ESSOR) and EUropean software defined radio for wireLEss in joint secuRity operations (EULER), with the ESSOR SCA and European SR Architecture (ESRA) proposals, notably supported by Thales. The strategic considerations at stake do not allow us to predict how things will eventually evolve.

6.3.1.2. OMG

The Object Management Group (OMG) is to the world of object-oriented programming what the SDR Forum is to SR, except that the OMG produces its own standards, with strong industrial involvement to produce the associated tools. Although the OMG is a self-proclaimed standardization organization (independent of any country), it is a reference in the field of modeling. In fact, the OMG is at the origin of the unified modeling language (UML) and the common object request broker architecture (CORBA),

2 The SDR Forum was renamed the Wireless Innovation Forum in December 2009, http:// www.wirelessinnovation.org.


Radio Engineering

for example. The OMG had proven its effectiveness and, as the SDR Forum was not very dynamic with regard to the SCA, work on the SCA was brought to the OMG in 2004. The SCA is indeed based on object-oriented principles: it was mainly designed with UML and integrates CORBA in its specifications. We cannot really claim that the OMG has solved the problems of the SCA; it has, however, helped sensitize the software community to SR. This occurred at a time when the OMG was also starting to work on specifications for embedded systems, and was therefore moving closer to electronics. It released a UML profile for SR (the SWradio profile). A UML profile specializes UML for a particular application domain. Other UML profiles related to embedded electronics are very valuable for designing SR equipment, e.g. the real-time scheduling and performance profiles, which have since been superseded by the MARTE profile, adopted and made public in mid-2007.

6.3.2. Standardization organizations

The standardization organizations, such as 3GPP (Mobile Execution Environment, MExE) and IEEE (802.22), have already integrated SR-related functionalities into some of their standards. SR and its extension to CR are studied in various bodies:
– P1900 in the IEEE;
– ETSI RRS;
– ITU-R 8A.
The capabilities and functionalities of these standardization bodies are fully described in Appendix C. Should SR as such be standardized, however? It is a technological opportunity to envision the functioning of wireless systems in a different way. In this sense, certain standards may incorporate characteristics made attainable by SR.

6.3.3. Regulators

Regulators are particularly interested in the opportunities that SR may offer, since new usage perspectives and potential markets may emerge as a result. They nevertheless proceed with appropriate caution.
The FCC (United States) is less cautious and has already opened the door to opportunistic spectrum access (notice of proposed rule making). The UWB mask defined by the FCC [FCC 02a] is also a first gateway toward heavier use of unlicensed spectrum. The various groups identified for CR standardization in Appendix C are also interested in SR.

6.3.4. Some commercial and academic projects

Some general working groups in the wireless communications domain are strongly involved in the field of SR, such as the Institute


of Electronics, Information and Communication Engineers (IEICE) in Japan, and the Wireless World Research Forum (WWRF) in Europe (and beyond). The European Commission is no exception and has supported, and still supports, many projects. A non-exhaustive list of these projects can be found in Appendix B. The end-to-end reconfigurability (E2R) project and its follow-up E2R Phase 2 are, in particular, the two most important civil projects worldwide. Similarly, at the French level, the Réseau National de Recherche en Télécommunications (RNRT), the Réseau National des Technologies Logicielles (RNTL) and, more recently, the Agence Nationale de la Recherche (ANR) have invested in numerous collaborative projects directly linked to SR (a list of European and French projects is given in Appendix B).

6.3.5. Military projects A list of publicly revealed, most significant military projects can be found in Appendix B.

6.4. Hardware architectures

6.4.1. Software radio (ideal)

A reference design of SR is depicted in Figure 6.4. The conversions are performed immediately after the antenna. The architecture is fully reprogrammable because the signal processor is directly connected to the converters. The analog-to-digital converter (ADC) and digital-to-analog converter (DAC) functions at high frequency and over a wide band will be very difficult to realize, as we will see later, and this constitutes a major difficulty of SR. Today, and even in the near future, the concept of ideal SR is unrealizable with present-day technologies. In particular, wideband antennas are necessary for this kind of design. In addition, sampling a wideband signal requires very high-performance wideband ADCs (large dynamic range and high frequency), which are currently not available. Similarly, the signal processors must be able to perform a considerable amount of processing at very high operating frequencies. Most of these difficulties are discussed in detail in Chapter 7.

Figure 6.4. Ideal SR. LNA: low-noise amplifier, DSP: digital signal processor


These technological difficulties, among others (memory size, power consumption, amplifier linearity, real-time reprogramming, etc.), show an obvious need for a compromise between ideal SR and available technology, with a fluctuating border that evolves with technological advances. This is known as SDR.

6.4.2. Software-defined radio

As the ideal architecture of Figure 6.4 is not realizable in the near future, numerous studies have been directed toward suboptimal architectures in which the digitized band is reduced. This is what we call SDR. The pragmatic architecture of Figure 6.5 is one example. Here, the conventional transmit/receive front end is cut into two parts: an analog front end (AFE), comprising the analog functions that cannot be realized otherwise and described in detail in Chapter 7, followed by a converter and a digital front end (DFE) that digitally realizes a number of formerly analog functions. The DFE is described in detail in Chapter 8. This partitioning is also presented in Fette's book [FET 09].

Figure 6.5. Pragmatic architecture of SDR

The peculiarity of SR is to give flexibility first priority. Consequently, the hardware architecture of SDR equipment most often relies on general-purpose, low-cost, low-power hardware resources that enable the programming and software configuration of functionalities. In an SDR architecture, a minimum of analog circuitry remains necessary before sampling for wideband signal processing: the antenna, RF/IF filtering, frequency transposition, an amplification stage, and finally a data-conversion stage. After that, the digital part performs the demodulation and decoding of the information. This digital processing is most often realized by heterogeneous architectures combining processors (digital signal processor – DSP, general-purpose processor – GPP), field-programmable gate arrays (FPGA), and ASICs, in order to benefit from the strengths of each family of components [MOY 08a]. In the literature, there are principally three types of SDR architecture, described in the following sections (see [FET 09] and [TUT 02]).


6.4.2.1. Direct conversion

The direct conversion receiver shown in Figure 6.6 performs demodulation directly at RF, by multiplying the RF signal by a local oscillator (LO) set to the carrier of the channel to demodulate, f0. The same applies in reverse for transmission. This architecture presents a real advantage in that it requires no IF conversion. On the other hand, it has two drawbacks: a significant error on the direct current (DC) component, resulting from coupling between the LO and the RF input, and sensitivity to powerful transmitters in adjacent channels, which saturate the mixers [TRU 00]. This mixing problem between the LO and the RF signal is illustrated in Figures 6.7 and 6.8: Figure 6.7 shows the LO leaking through the RF path, while Figure 6.8 shows the symmetric problem. In both cases, part of the signal is mixed with itself, and this squaring appears on the DC component. In addition, the LO must be able to synthesize the carrier frequencies of all the standards the receiver can access. To overcome the DC-component problem, it is possible to use the quasi-zero-IF principle, with an LO not at the channel carrier f0 but slightly shifted to f0 + BPc/2, BPc being the bandwidth of the channel of interest. Rejection of the LO is then much easier, at the cost of more processing in the DSP. This architecture is particularly appealing for mobiles and is already used in certain digital enhanced cordless telecommunication (DECT) receivers.

Figure 6.6. Example of SDR with direct conversion at the reception [TUT 02]. The same architecture is used the other way around during transmission. BPF: bandpass filter; LPF: low-pass filter, LNA: low-noise amplifier, LO: local oscillator, A/D: analog to digital
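The DC-offset mechanism and its quasi-zero-IF workaround can be illustrated numerically. In the minimal sketch below, the carrier, sample grid, and channel bandwidth are assumed example values (not taken from the text): an RF tone is mixed with an LO at the same frequency, then with an LO shifted by BPc/2.

```python
import numpy as np

# Assumed example values: 1 GHz sample grid, 100 MHz carrier, 10 MHz channel.
fs, f0, bpc = 1e9, 100e6, 10e6
t = np.arange(4096) / fs
rf = np.cos(2 * np.pi * f0 * t)              # received RF signal

# LO at the carrier: self-mixing squares the signal (cos^2 = 1/2 + ...),
# so half the power lands on the DC component.
dc_zero_if = np.mean(rf * np.cos(2 * np.pi * f0 * t))

# Quasi-zero IF: shifting the LO to f0 + BPc/2 moves the product off DC.
lo_shifted = np.cos(2 * np.pi * (f0 + bpc / 2) * t)
dc_quasi_if = np.mean(rf * lo_shifted)

print(round(dc_zero_if, 2))      # 0.5  -> parasitic DC offset
print(abs(dc_quasi_if) < 0.05)   # True -> offset pushed out of DC
```

The residual in the second case is no longer at DC but at BPc/2, where the DSP can filter it out, which is exactly the trade-off described above.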

6.4.2.2. SR with low IF

To relax the ADC constraints, the idea is to down-convert to the lowest possible frequency. This architecture (see Figure 6.9) is a compromise with respect to ideal SR. Indeed, the frequency band down-converted to IF must be larger than the bandwidth of the channels to demodulate (base station case). In the case of a mobile, demodulating one channel requires an IF of twice the channel bandwidth. Within the framework of the Software Radio Technology (SORT) project [HEN 98], an SDR architecture with low IF based on this principle was realized and tested. This solution digitized the largest frequency band that could be demodulated in SR at the time (1998) using a Sigma-Delta ADC (approximately 2 MHz); it could demodulate one IS95 channel or eight GSM channels. This approach makes it possible to exploit efficient converters: these offer an advantageous number of effective bits, 16–18 bits, but at relatively low rates of 1 to 10 Msamples/s. Looking toward the future of SR, the major drawback of low-IF architectures is the reappearance of a frequency conversion.

Figure 6.7. Origin of the appearance of a DC component with direct conversion architecture: interference due to the LO [TUT 02]. BPF: bandpass filter, LNA: low-noise amplifier, LO: local oscillator

Figure 6.8. Origin of the appearance of a DC component with direct conversion architecture: interference due to RF signal [TUT 02]. BPF: bandpass filter, LNA: low-noise amplifier, LO: local oscillator

Figure 6.9. SDR with low IF at reception [FET 09]. The same architecture applies in reverse for transmission


Figure 6.10. Effect of undersampling [PAL 01]. fe : sampling frequency; fmax : maximum RF/IF frequency; B: bandwidth to be digitized

6.4.2.3. Undersampling

The idea is to use the ADC itself as the frequency conversion system. The sampling frequency must be greater than twice the bandwidth to be digitized and equal to a submultiple of the maximum RF frequency. At the ADC output, we obtain a digitized image of the RF band (see Figure 6.10), if fe satisfies equation [6.1]:

fe = 2 fmax / m    [6.1]

where m is the downsampling ratio. In this case, a selective RF/IF filter is necessary to isolate the band without overlapping the adjacent channels. The main advantage of this architecture is that it requires no frequency conversion, but the constraints on the A/D converter remain very stringent. In addition, undersampling (fe < 2fmax) also degrades the signal-to-noise ratio. Undersampling imposes the same spurious-free dynamic range (SFDR) and bandwidth constraints on the input converters as ideal SR (see section 7.4): an SFDR greater than 100 dB and an input bandwidth greater than the maximum sampling frequency (from 1 GHz for the standard at 900 MHz to 2.5 GHz for wireless local area networks (WLAN) at 2.4 GHz) are required.

6.4.2.4. Other architectures

This brief presentation of the principal architectures does not claim to be an exhaustive list of possible architectures. There are others, which often combine the earlier ones.
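Returning briefly to the undersampling scheme of section 6.4.2.3, the position of the aliased image predicted by equation [6.1] can be checked numerically. All values below (a 900 MHz standard, m = 9, a 25 MHz band) are assumed examples, not figures from the text.

```python
import numpy as np

def alias_frequency(fc, fe):
    """Apparent frequency of a carrier fc after sampling at rate fe."""
    f = fc % fe
    return min(f, fe - f)

fmax = 900e6          # maximum RF frequency (assumed: a 900 MHz standard)
m = 9                 # downsampling ratio
fe = 2 * fmax / m     # equation [6.1] -> fe = 200 MHz
b = 25e6              # band to digitize; undersampling still needs fe > 2*b
assert fe > 2 * b

# A carrier at the top of the band lands at a predictable low image frequency:
print(alias_frequency(900e6, fe) / 1e6)   # 100.0 (MHz)
```

The computed image at 100 MHz is the digitized copy of the RF band sketched in Figure 6.10; the selective RF/IF filter mentioned above is what guarantees that nothing else folds onto it.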


6.4.2.4.1. Architecture with two parallel paths

One example among many others is the architecture with two parallel paths, which operates in two successive stages [PAL 03b], as shown in Figure 6.11. In the first stage, the signal follows the SR path built with existing ADCs (path 1 in Figure 6.11). This stage analyzes the signal: with powerful signal processing algorithms, a useful analysis can be performed even on a signal sampled with only a few bits. The result of this analysis stage triggers a switch to the second path, which follows an SDR low-IF architecture and carries out the second stage, namely demodulating the identified signal (path 2 in Figure 6.11).

Figure 6.11. SR architecture with two parallel paths
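The claim that a signal sampled with very few bits remains analyzable can be illustrated with a toy spectrum-sensing sketch. Everything here is an assumption for illustration (sample rate, block size, noise level), and a 1-bit quantizer stands in for the coarse path-1 ADC; this is not the algorithm of [PAL 03b].

```python
import numpy as np

fs, n = 102.4e6, 4096                  # assumed sample rate and block size
t = np.arange(n) / fs
rng = np.random.default_rng(0)
# Unknown carrier at 20 MHz buried in noise:
x = np.cos(2 * np.pi * 20e6 * t) + 0.5 * rng.standard_normal(n)

one_bit = np.sign(x)                   # path-1 "analysis" with a 1-bit ADC
spec = np.abs(np.fft.rfft(one_bit))
k = 1 + int(np.argmax(spec[1:]))       # strongest bin, skipping DC
f_est = k * fs / n
print(f_est / 1e6)                     # 20.0 -> carrier located from 1-bit samples
```

Even with all amplitude information discarded, the FFT peak still localizes the carrier; the switch to path 2 can then configure the low-IF chain for the identified channel.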

6.4.2.4.2. Multichannel architecture

This architecture is a special case of the low-IF architecture in which the frequency band down-converted to IF contains the multiplexed channels to be processed. It is without doubt the architecture closest to ideal SR. At transmission, this architecture builds the multiplex of channels digitally and converts it to analog just before the power amplifier. This implies a high-performance DAC and a highly efficient wideband power amplifier. Under these constraints, and in light of Figure 6.12, there is clearly a significant saving in analog components compared to a channel-per-channel architecture. This multichannel architecture is very attractive for base stations and is already used by operators.


Figure 6.12. Multichannel transmission architecture. The classic solution (a) consists of replicating a single-channel architecture as many times as needed in parallel, whereas solution (b) generates the multiplex digitally up to the DAC
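The principle of solution (b) can be sketched in a few lines: several channels are up-converted and summed digitally, so that a single wideband DAC replaces one DAC/mixer/amplifier chain per channel. The sample rate, block size, and the tone "channels" below are illustrative assumptions; real channels would carry modulated data.

```python
import numpy as np

fs, n = 40.96e6, 4096                   # assumed DAC rate and block size
t = np.arange(n) / fs
carriers = [5e6, 10e6, 15e6]            # per-channel IF inside the multiplex
# Toy narrowband channels: unit tones standing in for modulated signals.
mux = sum(np.exp(2j * np.pi * f * t) for f in carriers)

# 'mux' would feed one wideband DAC; check that each channel sits at its IF:
spec = np.abs(np.fft.fft(mux))
peaks = sorted(int(i) * fs / n for i in np.argsort(spec)[-3:])
print([p / 1e6 for p in peaks])         # [5.0, 10.0, 15.0]
```

The digital up-conversion loop is exactly the part that the channel-per-channel design of solution (a) realizes with separate analog chains.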

At reception, this architecture samples the whole set of multiplexed channels. It is particularly suited to parallel processing (using Fourier transforms, for example). The DFE of this type of architecture is presented in Chapter 8.

REMARK.– We have mainly described transmission-side hardware architectures of the terminal above, but it should be noted that the problems of SR concern transmission and reception alike, for terminals as well as base stations.

6.5. Conclusion

This chapter has presented the SR technology that serves as a support for CR. We have briefly described the principal SDR architectures found in the literature. The technological constraints pointed out in this chapter show that true SR is not feasible today. This is precisely the subject of the following chapters, which describe these difficulties in more detail: Chapter 7 is dedicated to the difficulties of the AFE, while Chapter 8 deals with those of the DFE. In these chapters, the authors highlight ongoing technological advances and the current platforms that create opportunities for SR, as described in Chapter 11. For further details about SR, refer to the works [ARS 07, FET 09, HEN 02b, KEN 05, MIT 00b, ROU 09, TUT 02].

Chapter 7

Transmitter/Receiver Analog Front End

7.1. Introduction

Starting from the software-defined radio (SDR) architecture, this chapter deals with the radio frequency (RF) (or analog) front end of the receiver. It is therefore assumed that some analog part is retained in the receiver. Three fundamental stages are studied in this chapter: antennas, power amplifiers, and converters. SR technology imposes new constraints on all three (bandwidth, linearity, dynamic range), and it is therefore necessary to rethink their design. This chapter does not, however, address mixers, frequency synthesizers, or filters.

7.2. Antennas

7.2.1. Introduction

To analyze its electromagnetic environment and adapt its operation, a cognitive radio (CR) system must discriminate radio signals both spectrally and spatially. Spatial discrimination consists of selecting a signal according to its direction of arrival. It requires a directive antenna of the array type, capable of steering its beam(s) in the desired direction(s). Spectral discrimination requires, in turn, a broadband or multiband antenna. The frequency selection of the signals is performed using bandpass filters, or directly by the antenna if the latter is tunable. Figure 7.1 shows the associated architectures.

Chapter written by Renaud L OISON, Raphaël G ILLARD, Yves L OUËT and Gilles T OURNEUR.


Figure 7.1. Possible architectures for an RF front end for spectral discrimination

The implementation of these antennas is constrained by the context: base station or mobile terminal.

7.2.2. For base stations

7.2.2.1. Constraints on spatial discrimination

Spatial discrimination requires a directional antenna capable of steering its beam in the desired direction. Array antennas [HAN 98], consisting of a combination of elementary sources whose radiated fields are combined with appropriate amplitude and phase weights, are particularly well suited to this. They make it possible to achieve large radiating apertures (necessary for high directivity and therefore a narrow beam) while allowing electronic control of the radiation pattern through the commands applied to the elementary sources of the array. A multibeam antenna goes even further by dealing independently with multiple signals arriving from different directions [MIU 97]; in this case, it combines a single radiating aperture with a beam-forming network responsible for distributing the various signals. Smart antennas [SAR 03] are able to adjust the radiation pattern according to the environment. The typical receiver architecture is presented in Figure 7.2: the digitized signals xi(k) at the output of the N array elements are combined with complex weights Wi to form the overall output signal y(k). Reconfigurability is obtained by recalculating the weights in real time with an appropriate algorithm that takes the received signals into account. To do this, the algorithm uses as input the information contained in each of the N individual output


signals and in the overall output signal (possibly completed by known data, for example the positions of fixed jammers). It then modifies the weights so as to maximize the signal-to-noise ratio (SNR), cancel the contributions coming from certain parasitic directions, or perform other appropriate processing [RAZ 99]. Following the same principle, several distinct output signals can be generated simultaneously by using several different sets of weights. The result is an intelligent multibeam antenna able to manage communication with different receivers. The application to intelligent base stations is discussed in [PER 01].

Figure 7.2. Architecture of a smart antenna at the receiver
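As a minimal illustration of the weighting principle of Figure 7.2 (not the adaptive algorithm described in the text), the sketch below forms fixed delay-and-sum weights Wi for a hypothetical 8-element, half-wavelength-spaced linear array and evaluates the resulting array gain in two directions.

```python
import numpy as np

N = 8                                   # number of elements (assumed)
d_over_lambda = 0.5                     # half-wavelength spacing (assumed)

def steering(theta_rad):
    """Relative phases seen across the array for a plane wave from theta."""
    i = np.arange(N)
    return np.exp(2j * np.pi * d_over_lambda * i * np.sin(theta_rad))

theta_sig = np.deg2rad(20)              # desired direction of arrival
w = steering(theta_sig) / N             # delay-and-sum weights W_i

def gain(theta_rad):
    """|y| for a unit plane wave arriving from theta (y = sum W_i* x_i)."""
    return abs(np.conj(w) @ steering(theta_rad))

print(round(gain(theta_sig), 2))        # 1.0  -> full gain toward the signal
print(gain(np.deg2rad(-40)) < 0.3)      # True -> attenuated off-beam
```

An adaptive algorithm such as those cited in [RAZ 99] would replace these fixed weights with ones recomputed from the received samples, for example to place a null exactly on a jammer direction.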


7.2.2.2. Constraints on spectral discrimination

Spectral discrimination requires the use of a wideband (or multiband) antenna associated with a duplexer separating the signals at the different frequencies. In practice, the direct design of broadband antenna arrays faces two antagonistic constraints:
– the size L of a broadband source is constrained by the low frequencies: L ∝ λmax, where λmax = c/fmin is the free-space wavelength at the minimum frequency fmin. It may be quite large when the minimum frequency is low;
– conversely, the spacing between the sources of an array (interelement spacing d) is limited by the higher frequencies: d